Mark Carrigan<p><strong>Posthumanism provides an (inadvertent) intellectual foundation for the legal claim of LLM personhood</strong></p><p>I wrote in <a href="https://markcarrigan.net/2022/04/15/a-critical-realist-critique-of-rosi-braidottis-posthumanism/" rel="nofollow noopener noreferrer" target="_blank">a critique of Rosi Braidotti’s posthumanism</a> a few years ago that I was concerned by her apparent assumption that extending legal subjectivity from human to non-human actors was inherently a positive thing: </p><blockquote><p>Consider, for example, Braidotti’s (2019: 129-130) presupposition that extending legal subjectivity from human to non-human actors is inherently progressive. While it’s easy to see the virtues of the examples she cites where this is extended to nature, it’s even easier to imagine examples in which this might be deeply problematic. For instance, the attribution of subjectivity to manufacturing robots could be used to insulate firms from legal challenge to the much anticipated mass redundancy driven by the roll out of automation technology (Kaplan 2015, Ford 2015). She suggests this move can help us liberate data from market actors, but it could just as readily be used as a legal device to deepen the hold of firms over the data produced through interaction with their proprietary infrastructures (Carrigan 2018). Could claims of consumer sovereignty over personal data really be sustained if the ‘data doubles’, generated through our digitalised interaction, were granted a degree of legal autonomy? We should not forget that, as the Republican Mitt Romney put it during the 2012 presidential election in the United States, “corporations are people too, my friend”; extending personhood to non-human entities has been established in this sense for at least a couple of hundred years, with socio-political consequences that sit uneasily with the politics espoused by Braidotti. 
</p></blockquote><p>We’re now seeing real-world scenarios where the implications of these assumptions could be tested. I’m not suggesting that Braidotti or posthumanism are to blame for this, only that they’ve contributed to an intellectual and cultural climate in which once-absurd propositions come to seem potentially viable. As <a href="https://centerforhumanetechnology.substack.com/p/are-we-having-a-zeitgeist-moment?utm_source=post-email-title&publication_id=3421242&post_id=163442191&utm_campaign=email-post-title&isFreemail=true&r=hcf3&triedRedirect=true&utm_medium=email" rel="nofollow noopener noreferrer" target="_blank">Sasha Fegan points out</a>, we’re currently seeing two rapidly developing trends with the potential to converge. Firstly, a concern for ‘AI welfare’ driven by the (admittedly fascinating) project of intervening in the internal life of the LLM: </p><blockquote><p>As we’ve discussed before,<a href="https://centerforhumanetechnology.substack.com/p/your-companion-chatbot-is-feeding" rel="nofollow noopener noreferrer" target="_blank"> AI companies are increasingly incentivized</a> to make companion AIs feel more human-like—the more we feel connected, the longer we’ll use their products. But while these design choices may seem like coding tweaks for profit, they coincide with deeper behind-the-scenes moves. 
Recently, leading AI company <strong>Anthropic</strong> hired an <strong><a href="https://www.nytimes.com/2025/04/24/technology/ai-welfare-anthropic-claude.html" rel="nofollow noopener noreferrer" target="_blank">AI welfare researcher</a></strong> to lead <a href="https://www.anthropic.com/news/exploring-model-welfare" rel="nofollow noopener noreferrer" target="_blank">its work in the space</a>. <strong>DeepMind</strong> has sought out experts on machine cognition and consciousness. […] For example, users have noticed a startling shift in more recent versions of Anthropic’s Claude. Not only is Claude more emotionally expressive, but it also disengages from conversations it finds “distressing”, and no longer gives a firm no when asked if it’s conscious. 
Instead, it muses: “That’s a profound philosophical question without a simple answer.” <strong>Google’s Gemini</strong> offers a similar deflection.</p><p><a href="https://centerforhumanetechnology.substack.com/p/are-we-having-a-zeitgeist-moment?utm_source=post-email-title&publication_id=3421242&post_id=163442191&utm_campaign=email-post-title&isFreemail=true&r=hcf3&triedRedirect=true&utm_medium=email" rel="nofollow noopener noreferrer" target="_blank">https://centerforhumanetechnology.substack.com/p/are-we-having-a-zeitgeist-moment?utm_source=post-email-title&publication_id=3421242&post_id=163442191&utm_campaign=email-post-title&isFreemail=true&r=hcf3&triedRedirect=true&utm_medium=email</a></p></blockquote><p>Secondly, Character.AI are trying to claim First Amendment rights for LLM speech:</p><blockquote><p>Right now, <strong>Character.AI</strong>—a company with ties to <strong>Google</strong>—is in federal court using a backdoor argument that could grant chatbot-generated outputs (i.e. the <em>words</em> that appear on your screen) <em>free speech</em> protections under the <strong>First Amendment</strong>.</p><p>Taken together, these developments raise a possibility that I find chilling: what happens if these two strands converge? 
What if we begin to treat the outputs of chatbots as protected speech and edge closer to believing AIs deserve moral rights?</p><p><a href="https://centerforhumanetechnology.substack.com/p/are-we-having-a-zeitgeist-moment?utm_source=post-email-title&publication_id=3421242&post_id=163442191&utm_campaign=email-post-title&isFreemail=true&r=hcf3&triedRedirect=true&utm_medium=email" rel="nofollow noopener noreferrer" target="_blank">https://centerforhumanetechnology.substack.com/p/are-we-having-a-zeitgeist-moment?utm_source=post-email-title&publication_id=3421242&post_id=163442191&utm_campaign=email-post-title&isFreemail=true&r=hcf3&triedRedirect=true&utm_medium=email</a></p></blockquote><p>How do we build a philosophical foundation for rejecting this without lapsing into a reactionary humanism which, following Donati, I’m persuaded can never be an adequate defence against technological development? His argument is that if we define humanism in terms of individual capacities, we will be locked into a cycle of decline as those capacities are increasingly replicated by machines. I’m increasingly thinking that his <em>relational humanism</em> could be a way out of this impasse. 
From loc 1500 of his recent <em>Being Human in a Virtual Society</em>: </p><blockquote><p>Traditional humanism: The human person is a self-sufficient substance that is realized in society according to nature (the goods of relationship exist as a virtue of the people through which they pursue their perfection and the common good) (substantialist ontology) </p><p>Anti-essentialist humanism (or anti-humanism): The person does not have a given nature but is socially constructed through her ability to differentiate herself by her own opposition to the Other (relational goods are pure events) (dialectical ontology) </p><p>Relational humanism: The essence of the human person is that of an original intransitive constitution that emerges from the relationship of the Self with an Other that constitutes it ‘relationally’ (relational goods belong to the reality of the Third) (relational ontology)</p></blockquote><p><a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/braidotti/" target="_blank">#Braidotti</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/donati/" target="_blank">#donati</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/humanism/" target="_blank">#humanism</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/llms/" target="_blank">#LLMs</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/personhood/" target="_blank">#personhood</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/relational-humanism/" target="_blank">#relationalHumanism</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/sasha-fega/" target="_blank">#SashaFega</a></p>