mstdn.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A general-purpose Mastodon server with a 500 character limit. All languages are welcome.


#aisalami


> In an analysis of nearly 5,000 summaries created by popular AI models—including versions of ChatGPT, Claude, LLaMA, and DeepSeek—researchers found that many systems consistently exaggerated the conclusions of original research papers. The problem wasn’t just occasional sloppiness. In some newer models, overgeneralization happened in up to 73% of summaries. That’s a big concern when people rely on these tools to explain health research, inform policies, or guide classroom learning.
royalsocietypublishing.org/doi
#SalamiAI #AISalami #LLMsAndScience

"I read a really great phrase recently that said something along the lines of 'why would I bother to read something someone couldn't be bothered to write' and that is such a powerful statement and one that aligns absolutely with my views."
..
"If you want to know why a decision is made, we will need humans. If we don't care about that, then we will probably use AI," he says.
...
"Even when you do a Google search it includes an AI overview, while some emails have a topline summary. So now it almost feels like we have no control. How do I turn all that off? It's snowballing."
bbc.com/news/articles/c15q5qzd…
#SabineZetteler
#AISalami #SalamiAI
Sabine Zetteler in a blue shirt against a green background
www.bbc.com · The people refusing to use AI. Worried about the environment and the loss of skills, some people are resisting the rise of AI.

> Even the $100 billion that Altman promised would be deployed “immediately” would be much more expensive than the Manhattan Project ($30 billion in current dollars) and the COVID vaccine’s Operation Warp Speed ($18 billion), rivaling the multiyear construction of the Interstate Highway System ($114 billion). prospect.org/power/2025-03-25-
#uspol #AISalami
/HT @timnitGebru

The American Prospect · Bubble Trouble. An AI bubble threatens Silicon Valley, and all of us.
Replied to bsmall2

🧵
> The more honest diagnosis would be that the responses are the result of a broken political system that offers no real way for people to have their healthcare grievances addressed—but that would call not for scolding screwed-over patients, but rather demanding political reform that challenges entrenched political and corporate interests that the Times has little interest in challenging.
fair.org/home/nyt-panics-over-
#UnitedHealthCare #uspol #AISalami #AgenticShift #AIinMedicine
@bsmall2@fedibird.com

Replied in thread

.> I love that the Writers Guild of America, as part of their strike negotiations, prevented AI from writing scripts because you can absolutely imagine Hollywood grinding out kind of regurgitated versions of old stories so they don't have to pay or deal with writers. God knows Hollywood is regurgitating mediocre stories already. They don't really need help with that.
#RebeccaSolnit
#Hollywood #ChatGPT #AISalami #AI

@bsmall2@mstdn.jp

.> Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), described the use of AI in debt collection as "punishing those who are already struggling."
.> "In a time when income inequality is off the charts, when we should be reducing things like student debt, are we really trying to build tools to put even more pressures on those who are struggling? This would be true even if the software was working as intended," Gebru said.
.> "In addition to this, we know that there are so many biases that these LLM based systems have, encoding hegemonic and stereotypical views,” Gebru added, referring to the findings of the paper on large AI models that she co-authored with several other researchers. “The fact that we don't even know what they're doing and they're not required to tell us is also incredibly concerning."
.> Some of the companies that stand to benefit most from AI integration are those that purely exist to collect debt. These companies, known as debt buyers, purchase “distressed” debt from other creditors at steep discounts—usually pennies on the dollar—then try as hard as they can to get debtors to repay in full. They don’t issue loans, or provide any kind of service that clients might owe them for; it’s a business model built on profiting from people who fell behind on payments to someone else.
- https://www.vice.com/en/article/bvjmm5/debt-collectors-want-to-use-ai-chatbots-to-hustle-people-for-money

#AISalami #ChatGPT #DebtCollection #VultureFunds #TimnitGebru #LLM #LLMBias
www.vice.com · Debt Collectors Want To Use AI Chatbots To Hustle People For Money. The collections industry is pushing GPT-4 as a dystopian new way to make borrowers pay up, replicating the debt system’s long history of racial bias.

... think about the purpose the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history ( #Microsoft, #Apple, #Google, #Meta, #Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.
This should not be legal...
- https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein?

#NaomiKlein on #Ai #ChatGPT #AiSalami #AiHallucinations #TechnoNecro #OnBullshit #ProprietarySoftware #ProprietaryProducts #WalledGardens
The Guardian · AI machines aren’t ‘hallucinating’. But their makers are. By Naomi Klein

.> ... large-scale AI models are indeed big water consumers. For example, training GPT‑3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough to produce 370 BMW cars or 320 Tesla electric vehicles), and the water consumption would have been tripled if training were done in Microsoft’s data centers in Asia. These numbers do not include the off-site water footprint associated with electricity generation.
.> ChatGPT needs a 500-ml bottle of water for a short conversation of roughly 20 to 50 questions and answers, depending on when and where the model is deployed. Given ChatGPT’s huge user base, the total water footprint for inference can be enormous.
.> ... if we only consider carbon footprint reduction (say, by scheduling more AI training around noon), we’ll likely end up with higher water consumption, which is not truly sustainable for AI.
.> ... the vast majority of data centers still use potable water and cooling towers. For example, even tech giants such as Google heavily rely on cooling towers and consume billions of liters of potable water each year. Such huge water consumption has produced a stress on the local water infrastructure; Google’s data center used more than a quarter of all the water in The Dalles, Ore.
.> ... some AI conferences have requested that authors declare their AI models’ carbon footprint in their papers; we believe that with transparency and awareness, authors can also declare their AI models’ water footprint as part of the environmental impact.
- The Markup: Water Footprint of AI Technology
- A conversation with
Shaolei Ren and Nabiha Syed

#TheMarkup #NabihaSyed #ShaoleiRen #AISalami #ChatGPT #CarbonFootprint #WaterFootprint #California #Oregon #DallesOregon #Virginia #DataCenterCapital #VirginiaLoudoun #LoudounCounty
themarkup.org · The Secret Water Footprint of AI Technology. A conversation with Shaolei Ren
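The per-conversation figure quoted above (a 500-ml bottle of water for roughly 20 to 50 questions and answers) makes the scale easy to sanity-check. A minimal Python sketch, assuming the article's number; the daily conversation count is a made-up illustration, not a figure from the interview:

```python
# Back-of-the-envelope water footprint for ChatGPT inference,
# using the figure quoted above: ~500 ml of freshwater per short
# conversation of roughly 20-50 questions and answers.

LITERS_PER_CONVERSATION = 0.5  # 500 ml, per the quoted estimate

def water_liters(conversations: int) -> float:
    """Estimated liters of freshwater consumed by `conversations`
    short chat sessions, at the quoted per-conversation rate."""
    return conversations * LITERS_PER_CONVERSATION

# Hypothetical volume: 10 million short conversations in a day.
daily = water_liters(10_000_000)
print(f"{daily:,.0f} liters/day")  # 5,000,000 liters/day
```

Even at that hypothetical volume, inference alone would consume millions of liters per day, which is why the authors argue the water footprint deserves the same disclosure as the carbon footprint.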

.> AI, say ChatGPT, adds more complexity and power to the central node. Rather than only setting the rules of engagement (between users or between wallets), it also centralizes the engagement itself. People no longer interact with each other; they interact individually with the AI itself. Since the AI is personalized and generative (i.e. stochastic), no two interactions will ever be the same, further isolating users from each other. While the AI depends on user interaction and open sources (as training data), its practice kills both: not only by focussing all user attention on itself but also by cutting all references to the underlying human-generated sources (and the social relations embodied therein). For AI, sources are dissolved into training data, no longer individual documents with meaning, contexts and histories, but dividual latent patterns.
- https://felix.openflows.com/node/5579

/HT
@festal@tldr.nettime.org
#FelixStalder on #AISalami #ChatGPT #TheInternet
felix.openflows.com · AI as centralizing and distancing technology | n.n. -- notes & nodes on society, technology and the space of the possible, by felix stalder

.> Tl;dr: The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.
.> While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as "Stochastic Parrots"), such as "provenance and watermarking systems to help distinguish real from synthetic" media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined "powerful digital minds" with "human-competitive intelligence." Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today.
- https://www.dair-institute.org/blog/letter-statement-March2023

#AISalami #TimnitGebru #EmilyBender #EmilyMBender on AI problems as more of a #TheCorporation problem...
www.dair-institute.org

.> Renewables are already the cheapest form of generation, so generators do not need market pull to install more capacity: to maximise their profit, they will maximise their renewables capacity. Even when the generation is on-site, the argument still stands: the electricity used to power AI could be traded on the grid...
#WimVanderbauwhede
/HT
@wim_v12e@scholar.social
#FrugalAI #AISalami #FrugalComputing #ICT #FrugalCT
#AvoidClimateChaos #AvoidExtinction
Continued thread

.> ... as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
.> The theory that apples fall to earth because that is their natural place (Aristotle’s view) is possible, but it only invites further questions. (Why is earth their natural place?) The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.
.> True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.
.> ... In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
...
.> Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
.> In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
#Chomsky, #NoamChomsky #IanRoberts and #JeffreyWatumull on #ChatGPT #AISalami as #Eichmann, #AdolfEichmann #Bureaucrat #BanalityOfEvil in
#NYT, #NewYorkTimes March 8, 2023

.> The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations...
.> Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
.> The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered...
.> But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility.
.> Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
.> For this reason, the predictions of machine learning systems will always be superficial and dubious.
- https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

#NoamChomsky on #MachineLearning #ChatGPT #StatisticalModels of #ProbabilisticIntelligence #Intelligence
#AiSalami
The New York Times · Opinion | Noam Chomsky: The False Promise of ChatGPT. By Noam Chomsky

.> As an instrument for organizing large quantities of information, or performing extremely complex symbolic operations beyond human capabilities within a normal lifespan, the computer is an invaluable adjunct to the brain, though not a substitute for it. Since the computer is limited to handling only so much experience as can be abstracted in symbolic or numerical form, it is incapable of dealing directly, as organisms must, with the steady influx of concrete, unprogrammable experience. With respect to such experience, the computer is necessarily always out of date. The computer's lack of other human dimensions is of course no handicap to it as a labor-saving device, whether in astronomy or bookkeeping: but such creativity as the computer may simulate is always in the first place a contribution of the minds that formulate the program.
.> The utter absence of innate subjective potentialities in the computer makes the contemporary art exhibition shown here (top), in all its pervasive blankness and artful nullity, an ideal representation of its missing dimensions. Those who are so fascinated by the computer’s lifelike feats---it plays chess! it writes ‘poetry’!---that they would turn it [AISalami] into the voice of omniscience, betray how little understanding they have of either themselves, their mechanical-electronic agents, or the potentialities of life. A city of even three hundred thousand people, ten per cent of whom have access to regional or national libraries with as few as a million volumes, would actually have a total capacity for storing, transforming, integrating, and not least applying both symbolic information and concrete experience that no computer will ever rival.
If we had all been exposed to #LewisMumford in #MythOfTheMachine #PentagonOfPower on #Computerdom since 1970, we would have been immunized against #AIHype for #AISalami. Maybe we can blame it on Jimmy Carter and the Trilateral Commission for debasing school education in the '70s.