Michał "rysiek" Woźniak · 🇺🇦

Does ChatGPT gablergh?
rys.io/en/165.html

> “Well you can’t say it doesn’t think” — the argument goes — “since it’s so hard to define and delineate! Even ants can be said to think in some sense!”

> This is preposterous. Instead of accepting the premise, we should fire right back: “you don’t get to use the term ‘think’ unless you first define it yourself”. And recognize it for what it is — a thinly veiled hype-generation attempt using badly defined terms for marketing.

This blogpost was inspired by a short discussion with @brandon:
mstdn.social/@brandon@ioc.dev/

I want to be clear that it's not meant to be a subtoot, and I don't think he makes that preposterous argument himself!

It's just that that particular conversation dislodged something in my brain and helped me understand a thing about the "ChatGPT can think" discourse.

Updated the blogpost. Less confusing now, I hope!

Does ChatGPT gablergh?
rys.io/en/165.html

> Imagine coming across an article that starts with:
>
> "After observing the generative AI space for a while, I feel I have to ask: does ChatGPT actually gablergh?"
>
> [Y]ou would expect the author to define the term “gablergh” and provide some relevant criteria for establishing whether or not something “gablerghs”.
>
> Yet when hype-peddlers claim LLMs “think”, nobody demands that of them.

@rysiek hey I hope I can ask you a question, I haven't played around with GPT at all and I'm just curious, can you ask it to respond with humanlike spelling errors and grammar errors? and what does that look like? is it realistic?

@stemid you will have to check for yourself. I refuse to feed those systems with my prompts.

@rysiek oh ok, I assumed your post was about using gpt.

I'm not really that interested myself either. But I was chatting to a friend earlier today and wondered whether it could be instructed to respond like a real human, because as far as I've seen it responds like someone reading straight from Wikipedia.

@stemid my post is about the hyped up debate about whether or not LLMs can "think"

@rysiek This is one of the best articles I've seen on the topic!

@silverwizard oh my, thank you!

Now I really need to rewrite it to make it less confusing, and link to some serious pieces on the topic!

@rysiek Oh sorry! I was reacting to the video the person posted. I think the article you wrote was one of the best on the topic.

I think Friendica and Mastodon don't always agree on who a mention goes to.

@silverwizard no no, everything was clear to me. Still, my blogpost could be clearer and better rounded, and it'll get there. 😄

Thank you for the positive feedback!

@rysiek I think you're kind of relying on the assumption that “not being able to clearly delineate X” is the same as “having no usable concept of X”. It isn't.

The problem with the concept of “thinking” is that it often hides essentialist thinking about the human mind. “Thinking” is whatever cognitive function we can't replicate in a machine, because it's precisely what only “real minds” can do.

There was a time when calculating moves when playing chess was thought of as thinking. Well, really, it kind of still is! When I'm teaching children to play chess, I encourage them to analyze the situation, look for possible moves, and try to plan a few moves ahead, and I unequivocally conceptualize this process as *thinking* about the next move.

Since computers started doing it, though, we dismiss it as purely computational.

So it's kind of a dialectic: every time computers' cognitive ability improves somewhat, it can be claimed that it now moves into the area of “thinking” — and at the same time the claim can be instantly dismissed. Because it's not that there's a threshold where cognitive ability emerges as “real thinking” — the threshold is a moving goalpost. It's always the thing beyond what computers can do.

@rysiek BTW, have I already shown you how ChatGPT can be moved to change its mind by the use of Socratic method? 🤔

(It's not reliable, though. Sometimes it works, other times it gets stuck in a very weird cognitive dissonance, claiming things like “A is identical to B, but A is different to B because it is different”.)

(Also, sex/gender topics are good for this, because that's an area where the model has a lot of wrong and easily falsifiable convictions. I haven't tried to falsify a *correct* belief in the same way to change its mind to reach *wrong* conclusions — it might be a fun experiment. But then on the other hand — this is also possible with humans.)
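(For anyone who wants to reproduce the experiment, here is a minimal sketch of what "Socratic method" prompting looks like in code. It assumes the OpenAI Python client with an OPENAI_API_KEY in the environment; the model name and the follow-up questions are made up for illustration, not taken from the thread.)

```python
# Minimal sketch of Socratic-style prompting: ask for a position, then
# probe each answer with a follow-up, keeping the full conversation
# history so every reply is conditioned on the model's previous ones.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []


def ask(question: str) -> str:
    """Append a user turn, fetch the model's reply, keep it in history."""
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice; any chat model works
        messages=history,
    )
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer


print(ask("Is a chess engine thinking when it picks a move?"))
# Each follow-up targets the previous reply, nudging the model to
# examine (and sometimes reverse) its own earlier position.
print(ask("What do you mean by 'thinking' in that answer?"))
print(ask("Would your definition also apply to a child planning a move ahead?"))
print(ask("If it would, what still distinguishes the child from the engine?"))
```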

@etam @rysiek Yeah - begging the question is one of the best ways I've seen to prove AI thinks.

@rysiek Yeah, ants like, have needs and respond to them based on their environment.

ChatGPT just performs functions. If you leave it alone, it will not do anything. It just performs a command when given a command. It's as much "thinking" as you get when you apply force to a rock and it causes the rock to move.

@rysiek People keep mistaking "it did something I did not expect it to do" to mean that it like, had some unique idea on its own, instead of it just showing the failure of the person to properly anticipate what it was programmed to do.