@rysiek Spam has nothing to do with the size of the instance. Spammers could just as easily run distributed attacks.
@pkreissel spam is a moderation issue and as such has everything to do with instance size.
If I am a spammer, I *know* that if I set up an account on a huge, popular instance, it will be easier for me to spam a lot of people fast. I also know that admins of other instances will have a tough nut to crack with the question: defederate or not?
If I go with a smaller instance, the admin might notice me sooner, and if not, other instances will defederate sooner.
@rysiek@mstdn.social @pkreissel@chaos.social This is exactly why a spam attack from a server of around 10k MAU would be so much worse. Those instances don't have 24/7 mod coverage, so it's easy to time the attack for when mods are not available. And other admins will indeed choose to defederate much quicker instead of muting, which will irreversibly sever all follow connections.
I'm really not sure why you're so excited about the possibility of earlier/easier defederation.
@rysiek@mstdn.social @pkreissel@chaos.social like, to make this very practical and concrete: mstdn.social has 1 admin and 2 mods, all publicly known, and it's very easy to work out which overlapping timezones they operate in. It also has open signups, so you can execute the exact same crypto spam attack as is happening now.
Having other admins go 'oh, this is a smaller server, so we can defederate, it's fine' would suck pretty badly.
I think there is a lot of criticism to be had that there is no better tooling to deal with spam (see Matt Blaze about DMs, for example). I just don't see how criticizing m.s. would actually solve the spam issue, as it would just move to the next biggest open-signup server.
@laurenshof @pkreissel and yet that crypto spam attack happened from mastodon.social, three times over ten days.
> I just don't see how criticizing m.s. would actually solve the spam issue, as it would just move to the next biggest open-signup server.
Solving a problem one step at a time is a legitimate way of solving a problem.
@rysiek @laurenshof you cannot solve spam attacks unless you have some sort of control at the point of entry. That’s the only way to stop it. Instance size has nothing to do with it. That Mastodon.Social was targeted first is probably due to it being well known.
@pkreissel @laurenshof the size matters as well. If I am a spammer, why would I attack a tiny instance that is likely not well connected and would quickly be silenced/defederated from if its admins fail to act fast, when I can attack a huge instance that is extremely well connected and much less likely to be defederated from or silenced by admins of other instances?..
@rysiek @laurenshof see my post here; this discussion is far removed from reality, and I will leave it now: https://chaos.social/@pkreissel/110368791602918474
@rysiek@mstdn.social @pkreissel@chaos.social I'm confused by what you mean by a tiny instance not being well connected. As soon as your instance follows one person on each of the top 10 instances, you are extremely well connected, especially from the perspective of a spammer.
It also ignores game theory: if the expectation is that instances should be defederated pretty quickly when they don't respond within an hour, it becomes very unattractive to sign up on a smaller instance that is less likely to have perfect 24/7 mod coverage. Instead, people opt for the servers with the largest mod teams, which are often the largest servers.
Also, to reiterate: I don't want to excuse/apologize for Eugen here. I just think that the critique is misdirected by talking about m.s.'s size (which certainly has its own issues), when instead there should be more criticism that better mod tools for DM spam have not been built yet.
@laurenshof @pkreissel I think we agree more than we don't here.
@rysiek@mstdn.social @pkreissel@chaos.social think so too. Like, I'm legitimately worried about the spam attacks, but reading during the last spam attack that someone accidentally blocked m.s. instead of muting it also freaked me out. Building decentralized shit is just really hard :(
@laurenshof @rysiek@mstdn.social @pkreissel@chaos.social Could it be that the new, easier sign-up procedure is more readily exploitable by bots?
@emma @rysiek@mstdn.social @pkreissel@chaos.social no, that only affects the apps and joinmastodon.org. Spammers go directly to the signup page (or use the API)
@laurenshof @rysiek @pkreissel The smaller instances generally have manual approval for sign-ups (something that is not manageable on an instance the size of m.s.), and therefore the mass spam accounts never get created in the first place. There's nothing to moderate.
@joepie91 @laurenshof @rysiek how do you "manually" see which accounts are spam and which are not? Believe me, with very little effort you can generate millions of spam accounts that are indistinguishable from the real thing.
@pkreissel @laurenshof @rysiek The point of manual approval is not to identify 100% of the spam accounts upfront. It's to make it non-viable to hit-and-run a thousand disposable spam accounts.
Writing a credible application for an account costs time. That doesn't make sense in spammer economics; it will significantly reduce the number of attempts that are even made, and therefore the moderation load.
The mass-spammer problem disappears entirely because of the aforementioned economics.
@pkreissel @laurenshof @rysiek Or to put it differently: you cannot automate account applications without them *looking* automated.
@pkreissel @laurenshof @rysiek Yes, and those all follow the same format/tone, and are therefore easy to identify in bulk.
@joepie91 @laurenshof @rysiek I cannot. Good luck.
@pkreissel @laurenshof @rysiek Please consider that a large number of instances have already been successfully doing this for years, in exactly the way described.
@joepie91 @laurenshof @rysiek 1. There was no generative AI then. 2. The more people are on Mastodon in total, the more interesting this will be for bad actors. Manual work won't cut it.
@pkreissel @laurenshof @rysiek Do you run an instance?