Just got hit with:

"But can you prove ChatGPT is *not* intelligent in the human sense?"

Oh my that old chestnut. 🙄

I am not making a claim, I am merely rejecting somebody else's claim that ChatGPT is intelligent in the human sense.

The burden of proof is on whoever claims that ChatGPT is intelligent in the human sense. That proof would also need to be accompanied by a clear, unambiguous, testable definition of what "intelligent" means, that ideally includes humans but excludes calculators.

Saying "LLMs are intelligent because they can learn" is like saying "computer programs have legs because they can run." :ablobcatcoffee:

"You can't prove human brains are different than LLMs!"

A human brain is a biological organ. An LLM is a probability distribution over sequences of words.

There are few things *more different* from each other than a human brain and an LLM.
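As a toy illustration of what "a probability distribution over sequences of words" means, here is a minimal bigram model. This is not how an actual LLM works internally (those are neural networks over subword tokens, trained on vastly more data), but it is the same mathematical kind of object: given the words so far, it assigns a probability to each possible next word. The corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM is trained on trillions of tokens,
# but what it learns is the same kind of object:
# a conditional probability distribution over next tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram transitions: how often word b follows word a.
transitions = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    transitions[a][b] += 1

def next_word_probs(word):
    """Estimate P(next | word) from the transition counts."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Sampling from such a distribution word after word produces fluent-looking sequences without anything resembling understanding, which is exactly the point of the distinction.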

I mean seriously, this whole "GPT is intelligent/sentient" stuff is just sparks of eristic.

@rysiek Unfortunately, LLMs mask nicely as intelligent due to the limitations of text. Although if you know where to apply pressure, the illusion breaks. These limitations become much more apparent in other applications, like playing Minecraft.

About a year ago OpenAI released its Video PreTraining (VPT) model, which was able to craft a diamond pickaxe. A nice showcase for VPT, but no one was saying that AI had solved Minecraft, a vastly easier task than mastering text or driving a car.

@PiTau

> Unfortunately, LLMs mask nicely as intelligent due to the limitations of text.

Oh snap, this is a great way of putting it! Hadn't thought about this aspect of the whole thing — the limited "domain" in which these models operate, so to speak, that makes it easier for people to not notice their deficiencies.

Thank you for pointing this out. It's one of the "ha, well obviously!" type things once somebody says it out loud.

@rysiek it's not a domain problem; as I've written, if the proper pressure is applied, the illusion breaks. ChatGPT is like a magic show: a meticulously prepared stage and a planned-out act to fool one's perception. However, magicians are honest about their act, whereas ChatGPT is not. How many people come out of a Penn & Teller show thinking these guys really can catch a bullet in their teeth?

But the LLM lie is much worse, because capital seems to believe it and run with it.

@PiTau by "limited domain" I meant "it's text-only, it operates on text", which is a limited form of communication. I didn't mean any specific domain of human knowledge; that's why "domain" is in quotes, and that's why I wrote "so to speak".

I like this framing because it helps explain how/why people fall for the "GPT is intelligent" ruse.

Just like parlor magicians dimming the lights to hide the mechanics of their acts, GPT being limited to text narrows the ways the illusion might break.

@PiTau totally a-greed (see what I did there?) on the capital angle. Wrote about it at length for Polish media.

@rysiek It's good to see proper coverage of the tech giants' push for AI regulation, and of the case for smaller models, in Polish. A shame such articles have smaller reach and impact than needed.