Just got hit with:

"But can you prove ChatGPT is *not* intelligent in the human sense?"

Oh my that old chestnut. 🙄

I am not making a claim, I am merely rejecting somebody else's claim that ChatGPT is intelligent in the human sense.

The burden of proof is on whoever claims that ChatGPT is intelligent in the human sense. That proof would also need to be accompanied by a clear, unambiguous, testable definition of what "intelligent" means, that ideally includes humans but excludes calculators.

Saying "LLMs are intelligent because they can learn" is like saying "computer programs have legs because they can run." :ablobcatcoffee:

@rysiek

That's a pithy saying but pretty shallow. It's more accurate to say intelligence is a spectrum. A calculator is intelligent in that it processes symbols in a coherent fashion. #LLMs recognize patterns in mountains of data and statistically mimic them.

So, a more accurate pithy statement: saying "#LLMs are intelligent because they learn" is like saying "chameleons can act because they role-play their environment."

Yes, both are true, to the same degree.

@PixelJones

> That's a pithy saying but pretty shallow.

Well, in both cases a (somewhat useful) metaphor is taken literally, and then used to build conclusions on. Programs do not *literally* run, just as LLMs do not *literally* learn (in the human sense).

> It's more accurate to say intelligence is a spectrum.

You might want to research the history of that way of thinking about intelligence a bit. You might find some disturbing stuff. You can start here:
youtube.com/watch?v=P7XT4TWLzJ

@rysiek
I have no idea how you think that the idea that "intelligence is a spectrum" leads to eugenics & belief in the Rapture of the Nerds.

Because I'm a humanist, I think we should neither overhype #AI developments nor dismiss them as harmless.

@PixelJones oh I am not dismissing them as harmless. Quite the contrary!

I am only dismissing the hype that is being generated around them based on their purported "intelligence" and the whole "superintelligent AI" boogeyman used to deflect and distract from real, already realized dangers with these systems.

As a humanist myself, I strongly believe words *matter*, and calling something "intelligent" is a very strong claim that requires very strong proof.

@PixelJones

> I have no idea how you think that the idea that "intelligence is a spectrum" leads to eugenics

If intelligence is a spectrum, and if individual humans can be placed on that spectrum, it is only one or two small steps to "well, only the most intelligent humans should reproduce". And the devil is always in the details of who defines what "intelligent" means and decides how to test for it.
nea.org/advocating-for-change/
wellcomecollection.org/article

www.nea.org · The Racist Beginnings of Standardized Testing | NEA: From grade school to college, students of color have suffered from the effects of biased testing.

@PixelJones so it should come as no surprise that those systems, once deployed, very often end up displaying (among others) racist biases. This has been shown over and over and over again, including with ChatGPT, as much as OpenAI is trying to paint over it.

qz.com/1427621/companies-are-o
insider.com/chatgpt-is-like-ma

And that, combined with the power of capital that is thrown behind these systems today, is genuinely dangerous. The whole "are they intelligent" thing is just smoke and mirrors, a distraction.

Quartz · Companies are on the hook if their hiring algorithms are biased · By Dave Gershgorn

@PixelJones in other words, people making claims like "intelligence is a spectrum" and "GPT has sparks of intelligence"[1] happen to also be the people producing tools that have proven racist biases.

Meanwhile, people who attempt to shed light on why these racist (and other) biases end up in these LLMs get fired from the companies making them.[2]

So yeah, I am far from ignoring the actual dangers related to these systems. :blobcatcoffee:

[1] nitter.net/emilymbender/status
[2] wired.com/story/google-timnit-