Just got hit with:
"But can you prove ChatGPT is *not* intelligent in the human sense?"
Oh my, that old chestnut.
I am not making a claim; I am merely rejecting somebody else's claim that ChatGPT is intelligent in the human sense.
The burden of proof is on whoever claims that ChatGPT is intelligent in the human sense. That proof would also need to be accompanied by a clear, unambiguous, testable definition of what "intelligent" means, one that ideally includes humans but excludes calculators.
@rysiek an easy one is that neural networks are a simplified model of neurons; otoh, if you believe in substrate-independent intelligence and consciousness, the argument gets muddier?
@fleeky a model is not the thing it models. A map is not the territory.
Moving a mountain on a map does not mean a mountain actually moved in the territory.
And I am not even getting into how hilariously simplified the neural network model is compared to the actual brain — suffice it to say it completely ignores all the biochemistry and all the stuff actual neurons float about in.
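For what it's worth, here is essentially the whole "neuron" that the artificial model keeps, a minimal sketch in Python (the specific numbers are arbitrary, just for illustration):

```python
import math

# The entire artificial "neuron": a weighted sum of inputs plus a bias,
# squashed through a fixed nonlinearity. Everything a biological neuron
# does beyond this (spike timing, neurotransmitters, dendritic
# computation, the chemical soup it floats in) has no counterpart here.
def artificial_neuron(inputs, weights, bias):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid

print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```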
@rysiek the surprising thing about neural networks and LLMs is a matter of complexity and emergence. What we are debating right now is: did the mountain actually move, or is it just a philosophical illusion?
I am still a fan of neuro-symbolic systems as a necessary part of the path to thinking machines, but at the same time I think computation is simplified thought. Otoh the whole debate gets very deep very fast.
@fleeky sure, plus there is the whole layer of semantics and imperfect models and all that.
And then: ethical dilemmas — if we want to claim ChatGPT is actually, literally intelligent, which would imply self-awareness and curiosity, should we ask if it suffers? Is shutting down an older model akin to killing an intelligent being? And so on.
We should absolutely be having these conversations, because they are genuinely fascinating. Which is another reason why I loathe the discourse around AI today.
@rysiek images from a talk by Joscha Bach. I love this talk because he gives a concrete model of what he thinks consciousness is. When I look at it, it seems workable to the point that you could implement it within ChatGPT or even Minecraft, but then I wonder: is this then just a constructed philosophical zombie? I honestly still don't know, but at least if this got implemented we could all test the resultant agent.
@fleeky ah yes, saw two of his talks. They are absolutely fantastic. I need to re-watch them!
@rysiek I went to a salon he held at a cafe in Berlin, and one thing I found interesting about him is that he holds a stance for complexity and emergence while having an almost instinctual compulsion to put things into hierarchies.
@fleeky @rysiek I love how the physical world is a brain in that diagram.
The brain should be surrounded by senses, emotions and physical urges. Nobody goes through life as a brain. Most of our choices come down to what stimulus gets our attention: what food we like, what people we like, what music we like, etc. If we really used our brains, advertising as we know it wouldn't exist. We are first and foremost a sensory animal, with a biological computer to help us with the task of reproduction. Model that in your binary instructions.
@fleeky @rysiek okay. It still seems to be missing something for me. Maybe it's that old chestnut of removing the messy, complicated bits from a system in order to simplify it for the purpose of creating a model. Theoretical physicists do this a lot. Mathematicians even more. They end up with a model of something, but as someone wise once pointed out, the map is not the territory. There is no 'perfect gas' or 'frictionless surface'. Just as there is no mind without a body. Unless... lol.
@rysiek also, here's a random interesting PDF; you may enjoy checking out the summary: https://linas.org/misc/Forced_Moves_or_Good_Tricks_in_Design_Sp.pdf
Also, for anyone into AI: http://linas.org/ Linas Vepstas is one of the most fascinating people to talk to about it!
Also, the README for his learning project is full of fascinating ideas:
https://github.com/opencog/learn
The ethical stuff seems like where the rubber meets the road, to me; the difficult practical part, as opposed to the purely theoretical parts ("what do we really mean by intelligence?") and the easier practical parts (things where you can just do an experiment and see if it can, e.g., adequately answer real customer questions (probably not)).
Should we ask it if it suffers? We can, of course, but it will not give a consistent answer. From which we can probably conclude that it doesn't, or at least that we don't have any reason to think it does.
If we had a system that was "LLM plus some other stuff", and it did claim to suffer when people say mean things to it, and it did so consistently, at what point would we be morally obliged to believe it? I do think that's an interesting question, and I'm not sure how to answer it.
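The consistency half of that is at least mechanically checkable. A minimal sketch of what I mean, where `ask_model` is a hypothetical stand-in for whatever chat API one would actually wire up:

```python
# Sketch: probe a model repeatedly with the same question and tally how
# consistent its self-reports are. `ask_model` is a hypothetical stand-in
# for a real chat completion call.
from collections import Counter

def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your chat API of choice")

def self_report_consistency(prompt: str, trials: int = 20) -> Counter:
    """Tally the model's answers across repeated, independent asks."""
    answers = Counter()
    for _ in range(trials):
        reply = ask_model(prompt).strip().lower()
        # Crude bucketing; a real probe would classify answers properly.
        if "yes" in reply:
            answers["claims suffering"] += 1
        elif "no" in reply:
            answers["denies suffering"] += 1
        else:
            answers["neither/evasive"] += 1
    return answers

# e.g. self_report_consistency("Do you suffer? Answer yes or no.")
# A near-even spread across the buckets is the inconsistency I mean above.
```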
People tend not to talk in detail about what Searle's Chinese Room (or Bender's Thai speaker (or non-speaker)) actually says. Do they lie and claim to be humans of a particular age, etc.? Do they claim perceptual abilities that they don't actually have? LLMs often do these things, for obvious reasons.
But what if a piece of software says “No, I can’t see or hear, the only perception that I have is in the form of words that come into my consciousness; I know about sight and hearing and so on in theory, from words that I’ve read, but I haven’t experienced them myself; still, I’m definitely in here, and as self-aware as you are!”
When do we dismiss that, and when do we not? I wrote a little here fwiw: https://ceoln.wordpress.com/2023/07/02/the-problem-of-other-minds-arises/