
Just got hit with:

"But can you prove ChatGPT is *not* intelligent in the human sense?"

Oh my, that old chestnut. 🙄

I am not making a claim; I am merely rejecting somebody else's claim that ChatGPT is intelligent in the human sense.

The burden of proof is on whoever claims that ChatGPT is intelligent in the human sense. That proof would also need to be accompanied by a clear, unambiguous, testable definition of what "intelligent" means, one that ideally includes humans but excludes calculators.

Saying "LLMs are intelligent because they can learn" is like saying "computer programs have legs because they can run." :ablobcatcoffee:

@rysiek

That's a pithy saying but pretty shallow. It's more accurate to say intelligence is a spectrum. A calculator is intelligent in that it processes symbols in a coherent fashion. #LLMs recognize patterns in mountains of data and statistically mimic them.

So, a more accurate pithy statement: saying "#LLMs are intelligent because they learn" is like saying "chameleons can act because they roleplay their environment."

Yes, both are true, to the same degree.
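
(To make "statistically mimic" concrete, here is a minimal sketch, purely illustrative and not how a real #LLM works internally: a toy bigram model whose "learning" is just counting which word follows which in a corpus, and whose "writing" is sampling from those counts.)

```python
import random
from collections import defaultdict

# Toy bigram model: "learning" is counting which word follows which;
# "writing" is sampling from those counts. Nothing here understands text.

def train_bigrams(corpus: str) -> dict:
    follows = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows: dict, start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # choosing from repeated entries samples proportionally to frequency
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train_bigrams(corpus), "the"))
```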

@PixelJones

> That's a pithy saying but pretty shallow.

Well, in both cases a (somewhat useful) metaphor is taken literally, and then used to build conclusions on. Programs do not *literally* run, just as LLMs do not *literally* learn (in the human sense).

> It's more accurate to say intelligence is a spectrum.

You might want to research the history of that way of thinking about intelligence a bit. You might find some disturbing stuff. You can start here:
youtube.com/watch?v=P7XT4TWLzJ

@rysiek
I have no idea how you get from the idea that "intelligence is a spectrum" to eugenics & a belief in the Rapture of the Nerds.

Because I'm a humanist, I think we should neither overhype #AI developments nor dismiss them as harmless.

@PixelJones oh I am not dismissing them as harmless. Quite the contrary!

I am only dismissing the hype that is being generated around them based on their purported "intelligence", and the whole "superintelligent AI" boogeyman used to deflect and distract from the real, already-realized dangers of these systems.

As a humanist myself, I strongly believe words *matter*, and calling something "intelligent" is a very strong claim that requires very strong proof.

@PixelJones

> I have no idea how you think that the idea that "intelligence is a spectrum" leads to eugenics

If intelligence is a spectrum, and if individual humans can be placed on that spectrum, it is just one or two small steps to "well, only the most intelligent humans should reproduce". And the devil is always in the details of who defines what "intelligent" means and who decides how to test for it.
nea.org/advocating-for-change/
wellcomecollection.org/article

The Racist Beginnings of Standardized Testing | NEA: "From grade school to college, students of color have suffered from the effects of biased testing."

@PixelJones so it should come as no surprise that those systems, once deployed, very often end up displaying (among other things) racist biases. This has been shown over and over and over again, including with ChatGPT, as much as OpenAI is trying to paint over it.

qz.com/1427621/companies-are-o
insider.com/chatgpt-is-like-ma

And that, combined with the power of capital that is thrown behind these systems today, is genuinely dangerous. The whole "are they intelligent" thing is just smoke and mirrors, a distraction.

Quartz: Companies are on the hook if their hiring algorithms are biased, by Dave Gershgorn

@PixelJones in other words, people making claims like "intelligence is a spectrum" and "GPT has sparks of intelligence"[1] happen to also be the people producing tools with proven racist biases.

Meanwhile, people who attempt to shed light on why these racist (and other) biases end up in these LLMs get fired from the companies making them.[2]

So yeah, I am far from ignoring the actual dangers related to these systems. :blobcatcoffee:

[1] nitter.net/emilymbender/status
[2] wired.com/story/google-timnit-

@rysiek Again, you're arguing against positions I don't hold.

Absolutely, #AI is rife w/ biases & potential dangers. That doesn't mean that current systems haven't moved up a "spectrum" of intelligence or that placing them on that spectrum is endorsing them or prioritizing them over human values.

Besides, any "spectrum" of intelligence is subjective and multidimensional. Putting humans, animals, machines on some scale should not, must not, equate to their value or worthiness of survival.

@PixelJones that is a framing I can work with, even though I still don't agree with putting any of these systems anywhere on the "spectrum of intelligence".

I do not believe it is justified to do so: LLMs just probabilistically generate text, which to me is a far cry from anything that could be called "intelligence". It's very mechanistic, even if it is quite complicated under the hood.
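
(A minimal sketch of what "probabilistically generating text" means; `next_token_probs` is a hypothetical, hard-coded stand-in for a trained network's learned weights, not any real API:)

```python
import random

# The generation loop: get a probability distribution over the vocabulary,
# sample one token, append it to the context, repeat.

VOCAB = ["the", "cat", "sat", "down", "."]

def next_token_probs(context):
    # A real model computes P(next token | context) from learned weights;
    # this toy just favors "cat" right after "the".
    if context and context[-1] == "the":
        return [0.05, 0.70, 0.10, 0.10, 0.05]
    return [0.30, 0.10, 0.25, 0.15, 0.20]

def generate(context, steps=8):
    for _ in range(steps):
        probs = next_token_probs(context)
        context.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return " ".join(context)

print(generate(["the"]))
```

Nothing in that loop refers to meaning; it only samples from a distribution, which is the mechanistic point.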

@PixelJones I also do not believe it is *useful* to ascribe intelligence to these systems in any sense of the word.

In fact, I believe there is ample data to the contrary: ascribing any sense of "intelligence" to these systems immediately fuels the hype and makes reasoned conversation about what they are, how they can be used, what the dangers related to them are, and so on, much more difficult.

It just muddies the waters and enables snake-oil salesmen to profit off of the confusion.

@PixelJones @rysiek Allow me to chime in a bit.
0.: There is no commonly agreed definition of "intelligence", so this entire discussion has very weak foundations. (Most experts would agree that processing symbols in a coherent fashion is not a good definition.)
1.: While there is also no commonly agreed definition of "consciousness", many believe that it is a prerequisite of "intelligence".
2.: Perhaps there is a "spectrum of intelligence", which may include various animals, or even plants. ->

@PixelJones @rysiek I think completely deterministic systems (calculators, ChatGPT, and other neural networks, any deterministic digital algorithm) cannot possibly score more than rocks or hammers on this spectrum.

For a computer system to be "intelligent", the barest minimum would be for it to have some form of feedback loop and the ability to self-modify (~"consciousness"), and the ability to choose (~"free will"). We are extremely far from creating anything like this.
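
(One concrete note in support of the "deterministic" point: the sampling step in such systems is driven by a pseudo-random generator, so with a fixed seed the whole run replays identically. A minimal illustration:)

```python
import random

# Seeded pseudo-randomness replays identically: every "choice" a sampler
# makes was fixed the moment the seed was set.
random.seed(42)
first_run = [random.random() for _ in range(3)]
random.seed(42)
second_run = [random.random() for _ in range(3)]
assert first_run == second_run  # identical, run after run
```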

@szakib

> I think completely deterministic systems (…) cannot possibly score more than rocks or hammers on this spectrum.

I tend to agree.

On a broad philosophical note: perhaps eventually we *will* understand human brains in their full complexity, and become able to fully explain them as completely deterministic systems.

We will then face the difficult task of squaring this with our notions of intelligence and consciousness.

But we are not there yet, not even close.

@PixelJones

@szakib so *assuming* that brains are completely deterministic systems and then basing other strong claims ("ChatGPT is intelligent!") on that assumption is… well, let's just call it "unwarranted".

Anyway, thank you for chiming in!

@PixelJones

@rysiek @PixelJones There is a theory (IMO likely to be true) that there are quantum effects going on in the brain. If this is proven, it would show the brain to be non-deterministic. (Also, it would be a big step towards proving we have free will!)

This is a fascinating topic and I'm sure it will keep many great minds busy for a very long time.

@szakib absolutely! And I am so here for it.

I just wish we could be having that conversation instead of "is a probability distribution over sequences of words intelligent".

@PixelJones
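
(For reference, "a probability distribution over sequences of words" has a standard textbook form: an autoregressive language model factorizes the probability of a sequence into per-token conditionals.)

```latex
P(w_1, \dots, w_n) = \prod_{i=1}^{n} P\left(w_i \mid w_1, \dots, w_{i-1}\right)
```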