@igisho #llm #accelerationism #ai2027 But why is the index finger raised against the background of the colors of the French Foreign Legion? Is this a hint at something?
Here is my last take on #ai translated to English.
bizarrely and with a motive that can almost singularly be assessed as #accelerationism.
the sub-textual motive of the entire Project 2025, imho, has been the baiting of a civil conflict where a 'victimized' white minority can plausibly take control.
#whitenationalism
"As I say in the book, Andreessen’s manifesto runs almost entirely on vibes, not logic. I think someone may have told him about the futurist manifesto at some point, and he just sort of liked the general vibe, which is why he paraphrases a part of it. Maybe he learned something about Marinetti and forgot it. Maybe he didn’t care.
I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking and no one is really going to correct you or tell you things you don’t want to hear. For many of these billionaires, the vibes of fascism, authoritarianism, and colonialism are attractive because they’re fundamentally about creating a fantasy of control."
https://www.technologyreview.com/2025/06/13/1118198/agi-ai-superintelligence-billionaires/
#AI #AGI #SuperIntelligence #Billionaires #SiliconValley #EffectiveAltruism #Rationalism #LongTermism #Extropianism #Accelerationism #Futurism #Singularitarianism #Transhumanism
Ah. The "#Drecksarbeit" ("dirty work"), as Merz would put it. What we're seeing here is #accelerationism at work. In the USA, everyone who wants to see the "system" die faster will be cheering, whether right-wing, left-wing, racists, christofascists, or market radicals. Interesting times, unfortunately.
https://youtu.be/LcEQoJMqSpQ
r/singularity is an awesome social media forum :)
"The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they've made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.
“LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities,” one of the moderators of r/accelerate, wrote in an announcement. “There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment.”
The moderator said that it has banned “over 100” people for this reason already, and that they’ve seen an “uptick” in this type of user this month.
The moderator explains that r/accelerate “was formed to basically be r/singularity without the decels.” r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but that is sometimes critical or fearful of what the singularity will mean for humanity. “Decels” is short for the pejorative “decelerationists,” who pro-AI people think are needlessly slowing down or sabotaging AI’s development and the inevitable march towards AI utopia. r/accelerate’s Reddit page claims that it’s a “pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents.”"
https://www.404media.co/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions/
politics, USPol, US Politics, rant
continuism: a philosophy for humanity in the technocratic onslaught | (an interesting read contrasting so-called "accelerationism" with "continuism")
A speculative genealogy of accelerationist perspectives
Increasingly I think it makes sense to distinguish between different accelerationist positions. I rarely use the term to describe my own politics any more, both because I don’t want to risk association with far-right positions and because the potential vehicle for a left-accelerationist politics has been smashed into pieces. But my instincts remain left-accelerationist, in the sense of being inclined to ask how emerging technologies could be steered towards solidaristic and socially beneficial goals rather than being driven by the market. It means insisting we consider the technology analytically in ways which distinguish between emergent capacities and how those capacities are being organised at present by commercial imperatives. It means insisting we dive into the problems created by emerging technologies, going through them rather than seeking to go around them, rather than imagining we could hold them back by force of our critique.
In the mid-2010s this felt like quite an optimistic way to see the world, but now it feels like a weirdly gloomy one, because the sense of collective agency underwriting such a future-orientation now seems largely, if not entirely, absent. It's interesting therefore to see someone like Reid Hoffman, a rare liberal member of the billionaire PayPal mafia, offer a perspective which has some commonalities with this but could rather be described as a liberal humanist accelerationism. From pg 1-3 of the book Superagency, which he's written with Greg Beato:
We form groups of all kinds, at all levels, to amplify our efforts, often deploying our collective power against other teams, other companies, other countries. Even within our own groups of like-minded allies, competition emerges, because of variations in values and goals. And each group and subgroup is generally adept at rationalizing self-interest in the name of the greater good. Coordinating at a group level to ban, constrain, or even just contain a new technology is hard. Doing so at a state or national level is even harder. Coordinating globally is like herding cats—if cats were armed, tribal, and had different languages, different gods, and dreams for the future that went beyond their next meal. Meanwhile, the more powerful the technology, the harder the coordination problem, and that means you’ll never get the future you want simply by prohibiting the future you don’t want. Refusing to actively shape the future never works, and that’s especially true now that the other side of the world is only just a few clicks away. Other actors have other futures in mind. What should we do? Fundamentally, the surest way to prevent a bad future is to steer toward a better one that, by its existence, makes significantly worse outcomes harder to achieve.
The difference here is that he's envisioning society as made up of more or less self-realised individuals, in a world in which power and vested interests are (primarily, at least) a matter of how those individuals interact rather than an enduring structural context to their interaction. But with this huge caveat, a lot of the assumptions and instincts here are similar to my own. This could in turn be contrasted to Tony Blair's post-liberal accelerationism, concerned with the role of the state under these conditions:
There’s a similar line of thought in this review by Nathan Pinkoski of Blair’s book on leadership. He describes Blair’s program as a “kind of post-liberal progressive rightism that promises to co-opt the progressive left while crushing the populist right”. Underlying this project is “a commitment to unlimited, unrestrained technological progress, and a belief that this will bring about a better world”.
And we might in turn distinguish this from the libertarian accelerationism of Marc Andreessen, who seems to see little to no legitimate role for the state.
There's a risk, in distinguishing between these positions, that we take them as doctrines, whereas I think they can better be understood as articulations of underlying instincts and orientations: how technology feels to people and how they feel about technology, their inclinations when presented with sociotechnical change, etc.
A really good, informative thread on the neo-nazi accelerationist network Maniac Murder Cult (which goes by various acronyms, most notably MKY, MKU, and MMC):
https://bsky.app/profile/romanhoefner.bsky.social/post/3lortnmyvg22u
Open Statement from Within the U.S. #Environmental Community: Against #Collapse Ideology, Despair, and #Disinformation
2/2
@flowinguphill.bsky.social
@jksteinberger.bsky.social
@michaelemann.bsky.social
#accelerationism
#climate
#climatechange
#climatecrisis
#USpol
bc we obviously can't do this, i let ChatGPT do it.
Open Statement from Within the U.S. #Environmental Community: Against #Collapse Ideology, Despair, and #Disinformation
1/2
@flowinguphill.bsky.social
@jksteinberger.bsky.social
@michaelemann.bsky.social
"the lack of a functioning Social Security bureaucracy makes “social security dollars” less valuable and more insecure property. Even those still receiving social security benefits have a less valuable benefit that they have to be worried could get snatched away from them for any number of reasons and they would have extremely limited recourse."
"I think that my assumption was a triumphalism and a sense of victory after the fall of the Soviet Union. But the fact that the week of the Berlin Wall falling, they were already talking about new enemies —enemies that had gone underground in certain ways or transformed in ways that were elusive — was the beginning of the rabbit hole. Because once you accept the idea that Marxism and socialism have survived and yet have changed their face, then anything can be Marxism and socialism.
I think this is how we can understand the fixation of the right wing on things like what they call “cultural Marxism” or “gender ideology” as essentially the new enemy of humanity. Because the adversary continuously changes shape, it makes them open to endless reinterpretation. There is a paranoid quality to the term. And the paranoia doesn’t really have any bounds, as I show in the book.
So I think the narrative arc comes from a feeling on the part of the libertarians, and often the racist libertarians, that they can contain their enemy in new ways by pinning it down on hierarchies of intelligence or deploying the latest findings from genetics. But by the end of the book, with a chapter on “gold bugs” and the far-right obsession with gold, there’s almost a sense of desperation or surrender to the inevitable, a failure to contain their enemies and the idea of an impending collapse and inevitable apocalypse.
(...)
What I recognize is a sort of desperation and a kind of ungoverned willingness to reach for radical remedies in a time of great peril. And as I described in the last chapter, often the rhetorical technique of the gold bug is to predict a coming apocalypse and then immediately sell you the only means there is to protect you from the worst.
I think there’s that accelerationism visible right now on the far right, certainly in the United States."
https://jacobin.com/2025/04/race-science-neoliberalism-hayek-slobodian/
"To put it bluntly, the most powerful people in the world are preparing for the end of the world, an end they themselves are frenetically accelerating."
#climate #climatechange #inequality #accelerationism
https://www.theguardian.com/us-news/ng-interactive/2025/apr/13/end-times-fascism-far-right-trump-musk
"Transgression and disruption as aesthetic ideology, from Marinetti to experimental art, counterculture, neoliberalism, and Trump" is an article originally written in Dutch by me and Nienke Terpsma. It was commissioned for, and published in, the new issue (2/2025) of the art magazine Metropolis M, which is covering the 60th anniversary of the Dutch Provo movement.
The text reflects on how futurism, accelerationism, and chaos aesthetics have become part of neoliberal, right-wing libertarian, and contemporary fascist politics, but how seeming antidotes like community and care can also be hijacked.
We've made a slightly expanded and revised English version of the text available here:
http://cramer.pleintekst.nl/essays/transgression_and_disruption_as_aesthetic_ideology/