Just got hit with:
"But can you prove ChatGPT is *not* intelligent in the human sense?"
Oh my, that old chestnut.

I am not making a claim; I am merely rejecting somebody else's claim that ChatGPT is intelligent in the human sense.
The burden of proof is on whoever claims that ChatGPT is intelligent in the human sense. That proof would also need to be accompanied by a clear, unambiguous, testable definition of what "intelligent" means, that ideally includes humans but excludes calculators.
Saying "LLMs are intelligent because they can learn" is like saying "computer programs have legs because they can run."
"You can't prove human brains are different than LLMs!"
A human brain is a biological organ. An LLM is a probability distribution over sequences of words.
Very few things could be *more different* from each other than a human brain and an LLM.
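To make the "probability distribution over sequences of words" point concrete, here is a toy sketch. The words and numbers below are invented for illustration; a real LLM learns its distribution from data with billions of parameters, but the kind of object it exposes is the same:

```python
# A language model, at its interface, is a conditional probability
# distribution: P(next word | words so far). The probabilities below
# are made up for illustration only.

TOY_MODEL = {
    ("the",):       {"cat": 0.5, "dog": 0.5},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "dog"): {"ran": 0.9, "sat": 0.1},
}

def continuation_probability(words):
    """Chain rule: P(w2..wn | w1) = product of P(w_i | w_1..w_{i-1})."""
    prob = 1.0
    for i in range(1, len(words)):
        context = tuple(words[:i])
        prob *= TOY_MODEL.get(context, {}).get(words[i], 0.0)
    return prob

print(continuation_probability(["the", "cat", "sat"]))  # 0.5 * 0.7
```

Sampling from such a distribution, one word at a time, is all that "generating text" means here.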
I mean seriously, this whole "GPT is intelligent/sentient" stuff is just sparks of eristic.
@rysiek yes but the probability distribution is /implemented/ as a neural network
a neural network with no memory or crossfeed or backfeed
you win this round, facts
@alilly oh no not facts! My one weakness!
"Neural network" in this case is a metaphor. Just like "running" in the context of a computer "running" programs. It's a neat way of modeling/thinking about what's going on there, but it has almost nothing in common with actual neural networks in biological brains.
I assume you knew this, of course, but spelling it out for anyone who might want to pontificate about "them neurons" in the replies.
@rysiek (for clarity, the last line is "me" yielding to the facts)
@rysiek Unfortunately LLMs mask as intelligent nicely due to the limitations of text. Although if you know where to apply pressure, the illusion breaks. These limitations become much more apparent in other applications, like playing Minecraft.
About a year ago OpenAI released its Video PreTraining (VPT) model, which was able to craft a diamond pickaxe. A nice case for VPT, but no one was saying that AI had solved Minecraft, a vastly easier task than mastering text or driving a car.
> Unfortunately LLMs mask as intelligent nicely due to the limitations of text.
Oh snap, this is a great way of putting it! Hadn't thought about this aspect of the whole thing — the limited "domain" in which these models operate, so to speak, that makes it easier for people to not notice their deficiencies.
Thank you for pointing this out. It's one of the "ha, well obviously!" type things once somebody says it out loud.
@rysiek it's not the domain problem, as I've written if proper pressure applied the illusion breaks. ChatGPT is like a magic show. Meticulously prepared stage and planned out act to fool ones perception. However magicians are honest about their act, whereas ChatGPT is not. How many people come out of Pen & Teller show thinking these guys really can catch a bullet in their teeth?
But the LLM lie is much worse, because capital seems to believe it and run with it.
@PiTau by "limited domain" I meant "it's text-only, it operates on text", which is a limited form of communication. I didn't mean any specific domain of human knowledge; that's why "domain" is in quotes, and that's why I wrote "so to speak".
I like this framing because it helps explain how/why people fall for the "GPT is intelligent" ruse.
Just like parlor magicians dimming the lights to help hide the mechanics of their acts, GPT being limited to text constrains the ways the illusion might break.
@PiTau totally a-greed (see what I did there?) on the capital angle. Wrote about it at length for Polish media.
@rysiek It's good to see proper coverage of the tech giants' push for AI regulation, and the case for smaller models, in Polish. Shame such articles have a smaller reach and impact than needed.
@rysiek and besides sparks also unbelievable amounts of cargo cult
@rysiek The brain is the result of half a billion years of evolution. My fish are vastly more intelligent than LLMs.
We haven't even cracked "basic" animal intelligence.
"LLMs are sentient!"
No, not really, they aren't. Have you ever used Predictive Text Input? While the exact implementation is different, Large Language Models operate in a very similar manner: they simply predict the next word.
"But Predictive Text is so dumb! And LLMs are smart!"
The difference is size. Phones can handle a database of hundreds of kilobytes, while LLMs usually weigh in at tens of gigabytes.
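The predictive-text analogy can be sketched in a few lines. This toy bigram model (the corpus and behavior below are mine, for illustration; no real phone keyboard works exactly like this) just counts which word follows which, and suggests the most frequent follower:

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a tiny corpus,
# then suggest the most frequent follower. An LLM does the same job with
# a learned network over far more context, but the task is identical:
# predict the next word.

corpus = "the cat sat on the mat the cat ran to the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # "cat" follows "the" three times, "mat" twice
```

Scale the table up from a handful of words to terabytes of training text, and swap the counts for a neural network, and you have the family LLMs belong to.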
@rysiek does anyone actually claim this, that you can't prove human brains are different from LLMs?
@peter_ellis yeah, basically you get some form of that whenever you debate anyone proposing that LLMs are intelligent. Sooner or later in the discussion they will reach for some form of that "argument".