As generative #AI tools get better at tricking us into believing what they generate is true, I think we need to recognize a new Internet Law:
> Unless they are clearly marked as such, it is impossible to share AI-generated works without *someone* treating them as real.
Basically, Poe's Law applied to AI-generated stuff.
This will become increasingly important to acknowledge as our communication gets flooded with AI hallucinations.
1/
The alt-right *inverted and weaponized* Poe's Law: "I'm not a racist, it's just sarcasm!"
They *relied* on the fact that Poe's Law gave them cover. "You're mistaking my sarcasm for the real article!"
It took us a decade or so to wise up and recognize that they need to be denied this cover: we stopped letting them hide behind the "sarcasm defense" after being called out.
We need to wise up the same way and stop letting people hide behind "of course it's AI-generated" in the context of disinformation.
2/
AI-generated images are becoming extremely convincing, and it's now very easy (and cheap!) for bad actors to peddle them as part of disinformation efforts.
We cannot allow them to hide behind "oh dear, I thought it was obvious!" afterwards.
I believe we need to start assuming bad faith when AI-generated works are posted without being clearly marked as such, just as we assume bad faith when alt-right dog whistles are posted without being clearly marked as sarcasm.
3/
I guess I should write about this somewhere. If anyone has suggestions for where it might make sense to pitch this, I am all ears.
@rysiek would you mind if we publish this (or an extended version, if you have one) on the @RightsChain website (non-exclusively)? It doesn't have a big reach, but it's within our values and fits pretty well.
@en3py I notice that @RightsChain mentions blockchain as a technology that you use:
https://www.rightschain.net/en/technologies.php
That is a huge red flag to me.
I would much prefer *not* being associated with any project that touts any kind of blockchain as a solution to any social problem, especially such a complex one as author's rights in the digital age.
@rysiek @RightsChain we do, but it's used exclusively as a repository for digital signatures. We make no use of fintech tools or coins of any sort (and it's not on the roadmap). I understand your concerns and I won't push any further, but if you want to get deeper into the topic, I'm open to discussion :)
@en3py thank you, I appreciate that. I might have a deeper look at some point, and I might have some questions.
To be quite honest, it would be refreshing to see an actually useful tool built using blockchain, without the pump-and-dump approach that is all too prevalent in that community.
@rysiek @RightsChain if you have any questions, even "tough" or uncomfortable ones, please ask: they may help us improve, or check whether we are on the right track :-)
@rysiek
Don’t disagree with any of your points, but I think it’s worth teasing the problem apart a little more to ensure we’re solving the right bits.
Material that’s inauthentic (by accident) or deliberately false and misleading (by design) can be, and has been, produced by humans forever, and it’s been distributed by rumour, pamphlet, broadsheet, radio, TV and the internet.
The problem of propaganda isn’t new.
Likewise, the various “plausible deniability” defenses of deliberate bad actors haven’t changed.
With AI-generated text that’s potentially indistinguishable from human-generated text, in some respects the problem hasn’t changed at all, inasmuch as the AI text might be entirely accurate (is Google search AI output?), inadvertently wrong, or deliberately misleading. The real difference, as with most computing technology, is speed and scale. Now this stuff can be pumped out by the terabyte.
Automatically assuming AI output is wrong is in some ways no less dangerous than assuming all human output is right…
I don’t disagree about unmasking bad actors, but that’s to some extent orthogonal to the technology they use for the act.
(and AI isn’t intelligent, it’s stochastic parrots, but that’s another deal)