mstdn.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A general-purpose Mastodon server with a 500 character limit. All languages are welcome.


I know I haven't been able to work on b4 and other tooling as much as I was hoping, but between the Equinix exodus, having to continuously mitigate against LLM bot DDoS'ing our infra, and just general geopolitical sh*t that lives rent-free in my head... it's been difficult. But I have high hopes and lots of good ideas -- that's got to count for something, right?
FYI, Drew isn't making it up in this article. At any given time, if you check what I'm doing, chances are I'm trying to figure out ways to deal with bots.

https://drewdevault.com/2025/03/17/2025-03-17-Stop-externalizing-your-costs-on-me.html
drewdevault.com: Please stop externalizing your costs directly into my face

@monsieuricon But our investors and shareholders! You can't block us, think about our profit.

On a serious note: Is it possible to identify who the offending companies are? And somehow take the war to their turf?

@rails There is not. In fact, there is no reliable way to distinguish legitimate requests from bot traffic if you're only looking at logs or packets. The only way to tell reliably is to get yourself into the page-rendering client. For example, this is what happens when you hit Cloudflare's "prove you're not a bot" screen -- they use javascript to collect information about your browser and watch the pointer behaviour to figure out whether you're a bot (plus the massive amounts of data they hold internally on your IP address).

@monsieuricon @rails Would some combination of a proof-of-work challenge like Xe's Anubis and, for other protocols (or for legitimate clients not using javascript), required authentication be an acceptable tradeoff? I know it makes normal users pay the price for the bot traffic, but maybe only until it dies down?
(I have no technical expertise in the matter, so this may not make any sense.)
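The proof-of-work idea mentioned above can be sketched in a few lines. This is a generic hashcash-style scheme, similar in spirit to what Anubis does, but the function names, difficulty value, and challenge format here are illustrative assumptions, not Anubis's actual protocol:

```python
# Hashcash-style proof of work: the client must find a nonce whose
# hash of (challenge, nonce) has a number of leading zero bits set by
# DIFFICULTY. Solving costs many hash attempts; verifying costs one.
# All names and parameters here are hypothetical, for illustration.
import hashlib
import itertools

DIFFICULTY = 12  # required leading zero bits; higher = more client work


def verify(challenge: str, nonce: int) -> bool:
    """Server side: a single cheap hash check of the submitted nonce."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    # A digest with DIFFICULTY leading zero bits, read as a big-endian
    # integer, is strictly below 2**(256 - DIFFICULTY).
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))


def solve(challenge: str) -> int:
    """Client side: brute-force nonces until one meets the difficulty."""
    for nonce in itertools.count():
        if verify(challenge, nonce):
            return nonce
```

The asymmetry is the point: a browser pays a one-off fraction of a second per challenge, while a scraper hammering every URL pays it on every request.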

@esgariot @rails Yes, it would work, but would it be an acceptable trade-off? That's not clear. Right now, I'm leaning towards setting up separate, authentication-required duplicates of some services that I can give to maintainers and developers, but that, again, is capitulating and admitting that the open web has failed.

@monsieuricon @rails @esgariot At some point it feels like it will become more effective, admin-time-wise, to switch from an IP blocklist to an allowlist. Though, yes, that still gives up on the open web. :(
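For what the allowlist approach would look like mechanically, here is a minimal sketch using Python's standard library. The networks listed are documentation ranges standing in for real maintainer ranges, not anyone's actual allowlist:

```python
# Minimal IP allowlist check: a request is admitted only if its source
# address falls inside one of the explicitly trusted networks.
# The networks below are RFC 5737 / RFC 3849 documentation ranges,
# used here purely as placeholders.
import ipaddress

ALLOWED_NETS = [
    ipaddress.ip_network("192.0.2.0/24"),   # placeholder IPv4 range
    ipaddress.ip_network("2001:db8::/32"),  # placeholder IPv6 range
]


def is_allowed(addr: str) -> bool:
    """Return True if addr is inside any allowed network.

    Membership tests across IP versions (v4 address vs v6 network)
    simply return False, so mixed lists are safe.
    """
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED_NETS)
```

The admin-time trade-off is exactly as described: the list stays short and stable, but every new legitimate user has to be added by hand, which is what makes it a retreat from the open web.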