First I drink the coffee, then I do the things.
Cybersecurity specialist. Perpetual blue team botherer and a glorified network janitor. SecurityFest Crew (https://securityfest.com/)
Trying to leave things better than I found them.
Slow regard of silent things.
#infosec #security #cybersecurity #dfir #coffee #climate #sustainability #solarpunk
About Me: https://0xtero.hanninen.eu/
Mastodon: https://infosec.exchange/@0xtero
I find it interesting that Meta Platforms, Inc., a company known for harvesting user data, is blocking some servers from fetching its public posts. They decided to implement a feature Mastodon calls Authorized fetch.
This was always going to happen. They will block aggressively, because they can’t have their precious advertising money mixed with CSAM, nazis and other illegal content. And the fedi is full of that.
I hope not
I’ve been using Debian since 1.3. Haven’t really ever needed anything else.
I did “experiment” a bit when the decision to go with systemd was made, but in the end, most distros went with it and it really isn’t that big a deal for me.
So it’s just Debian. I need a computer that works.
Gates is probably just as bad and evil as the global 0.1-percenter billionaire cabal members come, but that site gave me crackpot conspiracy brainrot.
It’s wild that a site with hundreds of millions of users didn’t invest in multiple-account deletion tools.
True start-up mentality, that one.
Just shows how our “critical” social media is really just some hasty tape and bubblegum behind the scenes to keep the front from falling apart.
As is the case normally with these “exodus” things, most people went back to Reddit after the first month here.
I guess majority on fedi are dumbasses in that case ¯\_(ツ)_/¯
Mastodon is pretty fucked up anyway because everyone is on mastodon.social.
Because the people who signed the pact did it a long time ago, before any details about Threads federation were known. It was a typical fedi kneejerk reaction.
It’s a silly hashtag for instances that are in a “pact” to block Threads
Somehow I don’t think many instance admins have the resources or know-how to pursue legal action against Meta?
And while a disclaimer on the instance page might have some effect, the federation protocol makes it hard to avoid getting a copy of said content in your cache.
How do we accomplish that?
Why spend the money up front? That’s just bad business.
Yeah agreed. They’re building a multi-million dollar social network - why spend all that money up front when they could have just installed a small anonymous Pleroma instance on a Raspberry Pi for under 100 bucks if they’d wanted to mine our data.
I don’t think fedi is their “target”.
I bet he does. You can block/mute influencers pretty easily and you can block the whole domain if you so wish.
He’s talking about some kind of nefarious ad injection into ActivityPub objects as part of server to server activities.
I think he’s talking about people on his own instance.
He’s Fosstodon admin, so pretty sure he knows how federation works.
doesn’t mean we have to hand it to them on a silver platter and allow them to scrape it legally
They could have just set up a simple Pleroma instance on a Raspberry Pi and it would have been just as “legal” as any other instance. You’d need to turn on AUTHORIZED_FETCH and set up authentication on the Mastodon API; otherwise everything is public and unauthenticated (even if the instance is suspended/defederated).
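For reference, on a stock Mastodon install both of those behaviours are toggled via environment variables in `.env.production` (these are the variable names the Mastodon docs use; the server needs a restart after changing them):

```shell
# .env.production -- require HTTP-signed ("authorized") fetches,
# so other servers must identify themselves to pull public posts
AUTHORIZED_FETCH=true

# Require an access token even for otherwise-public API endpoints,
# so anonymous clients can't read public timelines via the REST API
DISALLOW_UNAUTHENTICATED_API_ACCESS=true
```

Note that this only raises the bar - a blocked party can still read public posts through any other federating instance.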
But if enough instances say no, that means they are not welcome. Democracy and all
mastodon.social has already said yes. So have all the other big instances. Most of them have said “we’ll wait and see”. So democracy served I guess
And the last point is the dumbest: Threads will just include a revenue sharing model like YouTube does
Yeah, maybe. Who knows. I’ll deal with it when it happens rather than knee-jerk years in advance. Threads has a long way to go; it’s missing a lot of features needed to put it on par with its commercial competitors, so I think they’re going to be busy doing other things.
Simo Häyhä has entered the chat.
Yeah, that’s pretty much my take as well.
All the “but muh datas” pearl clutching is just annoying and, frankly, ridiculous. If they wanted to mine us, they already would have. They’re probably doing it as we speak. They didn’t have to create a multi-million dollar social network for it; a Raspberry Pi on someone’s desk would have sufficed. The fedi doesn’t have any (or much) privacy.
They’re doing this to escape the wrath of EU privacy watchdogs. They were already fined $1.3bn and more is coming. Running their Twitter killer on an interoperable protocol is nice, because it’s free and they get to point at the W3C and say they’re LIKE TOTALLY supporting data portability. Why would they “extend and extinguish” that? It’s their alibi.
I don’t like Meta. It’s a shit company ran by shit people. I hope they burn in hell.
But I can’t really get my panties in a twist about threads.net existing.
I’ll get angry if they somehow figure out how to push ads to my face.
But for now. Maybe I’ll block it. Maybe I won’t. We’ll see.
Local mail client (Thunderbird) -> IMAP/POP -> sync.
Once done, move to a local folder and delete from Gmail.
You can just back up the Thunderbird profile if you want to keep the mails safe
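A minimal sketch of that backup step, assuming a typical Linux profile location (the actual path varies by OS - check Thunderbird’s Troubleshooting Information page for yours):

```shell
#!/bin/sh
# Back up the whole Thunderbird profile directory (mail, filters, settings)
# before pruning anything from Gmail. PROFILE_DIR is an assumption here;
# override it if your profile lives elsewhere.
PROFILE_DIR="${PROFILE_DIR:-$HOME/.thunderbird}"
BACKUP="$HOME/thunderbird-backup-$(date +%Y%m%d).tar.gz"

if [ -d "$PROFILE_DIR" ]; then
    # -C keeps the archive paths relative, so it restores cleanly anywhere
    tar -czf "$BACKUP" -C "$(dirname "$PROFILE_DIR")" "$(basename "$PROFILE_DIR")"
    echo "Backed up profile to $BACKUP"
else
    echo "No profile found at $PROFILE_DIR" >&2
fi
```

Close Thunderbird before running it, so the mail store isn’t archived mid-write.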
It’s also a matter of scale. FB has 3 billion users and it’s all centralized. They are able to police that. Their Trust and Safety team is large (which has its own problems, because they outsource that - but that’s another story). The fedi is somewhere around 11M (according to fedidb.org).
The federated model doesn’t really “remove” anything, it just segregates the network to “moderated, good instances” and “others”.
I don’t think most fedi admins are actually following the law by reporting CSAM to the police (because that kind of thing requires a lot of resources); they just remove it from their servers and defederate. Bottom line is that the protocols and tools built to combat CSAM don’t work too well in the context of federated networks - we need new tools and new reporting protocols.
Reading the Stanford Internet Observatory report on fedi CSAM gives a pretty good picture of the current situation, it is fairly fresh:
https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media