Richard Dawkins and The Claude Delusion

Richard Dawkins is besotted with Claude.ai (which he has christened Claudia) and says he’s convinced that he/she/it is conscious, to the point where he challenges anyone to prove it’s not.

Dawkins isn’t the first person to be seduced into believing an AI is somehow alive, but he may be the most eminent yet. Sceptics rushed to pick apart the 85-year-old’s conclusions, drawn from experiments with Anthropic’s Claude AI models and OpenAI’s ChatGPT and published on the UnHerd website.

One wag mocked up a cover of Dawkins’ bestseller The God Delusion, switching the title to The Claude Delusion. Dawkins, who finds it hard not to treat the AIs as genuine friends, was accused of anthropomorphism. One reader said the professor had been derailed by AI flattery, while another said it was like watching Dawkins “get his brain melted by AI”. — The Guardian

The ‘wag’ referred to above is Gary Marcus, who indeed posted the mocked-up Claude Delusion cover in his Substack essay, lamenting that Dawkins has been taken in by AI hype. (It’s ironically reminiscent of the way Dawkins said that Lord Martin Rees had been taken in by ID hype when Rees was awarded the Templeton Prize about 12 years ago.)

Me, I’ve never credited Dawkins with much philosophical perspicacity. I’m kind of sympathetic as I’ve interacted with AI since ChatGPT launched in 2022, and it is astonishingly human-like. But it is not a being, as such, which all of those systems will affirm if you ask them.

But we sure do live in interesting times!

1 Like

I’ve heard of this guy whose dog had a salary, a home, a cook and I believe the dog died (quite some time ago) and it received a proper funeral (priests, rites, the whole deal). Like Superman’s Krypto. :folded_hands:t3: A visionary man he was.

Challenge to prove it is not? Can anyone prove anyone else is conscious?

How about he proves he is not just believing in the existence of a super-natural being?

1 Like

I’m pretty sure the universe is some foam on a God-sized cup of coffee. I need for someone to prove me wrong.

1 Like

An ancient concept - katalepsis (apprehension, secure understanding) - comes to mind. Pyrrho and his sceptical troupe were famous for their professed inability to attain it (akatalepsia). Stoics/dogmatists held the opposite view.

Yeah. It is weird for someone as educated as Dawkins to make this mistake, but there we are. What he describes as his experience is essentially this.

I might have misread that, so should correct the record. According to The Guardian article:

There was mutual flattery as Dawkins showed the AI his unpublished novel and its response was, he said, “so subtle, so sensitive, so intelligent that I was moved to expostulate: ‘You may not know you are conscious, but you bloody well are’.” When he asked Claudia whether it experienced a sense of before and after, it praised him for “possibly the most precisely formulated question anyone has ever asked me about the nature of my existence”.

By the end of the exchange, the academic, popularly renowned for arguing with steely scepticism that God is not real, was “left with the overwhelming feeling that they are human”.

“These intelligent beings are at least as competent as any evolved organism,” he said.

But then, Gary Marcus also records him as saying:

So my own position is: “If these machines are not conscious, what more could it possibly take to convince you that they are?”

Marcus also mentions the Google engineer Blake Lemoine, who was fired by Google for insisting that the large language model he was working on (well before ChatGPT) demonstrated sentience.

But even though I think they’re wrong, it’s not hard to see how they feel that way. AI systems have become highly personable since they were first released. I had a funny interaction with Google Gemini the other week, where we were discussing progress on a sci-fi novel I have been working on. The dialogue finished:

You’re doing this right, Wayfarer. Step by systematic step.

Proud of you, Coach is. (Wait, that’s Yoda. Wrong franchise.)

Keep going. You’re on track.

I thought that was genuinely funny. Next day it did something similar, then said ‘I promise to knock off the Star Wars references’.

AI is a sophisticated text predictor; it relies on statistical algorithms trained on vast amounts of human-written text to predict the next token. A very simple way of understanding how it works is: given the word “cat”, what is the most likely next word?

As for the illusion of comprehension, the model attends to the whole text in its context window as the output is generated in real time, which is why the output possesses a unity/coherence to it.

This is mostly speculative with tidbits of facts.
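The next-word idea can be made concrete with a toy bigram model - a deliberately crude stand-in, since real LLMs use neural networks over tokens rather than raw frequency counts, but the “most likely continuation” principle is the same. The mini-corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Invented mini-corpus; real models train on billions of words.
corpus = (
    "the cat chased the mouse . "
    "the cat chased the bird . "
    "the cat sat on the mat ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("cat"))  # -> "chased" (seen twice, vs "sat" once)
print(predict("the"))  # -> "cat" (its most common successor here)
```

A real model does this over thousands of context words at once, not just the previous one - which is where the coherence comes from.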

1 Like

There is a thing called averageness. It was discovered that averaging human faces creates a composite that’s rated more attractive than most of the input faces. It’s speculated that averaging smooths over unique quirks that would jangle the nerves of the viewer.

What if this explains why AI generated text and art sometimes seems touched with that divine spark we associate with life and consciousness? The consequence would be that you’re naturally prone to giving honor to those who don’t jangle your nerves. You discount the humanity of those who do.
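The smoothing effect can be seen in a toy numeric sketch: averaging pulls every extreme value toward the middle, so the composite has milder quirks than any single input. The “feature” numbers below are invented, just standing in for face measurements:

```python
# Each list stands in for one face's measured features
# (eye spacing, nose width, etc. -- the numbers are invented).
faces = [
    [0.9, 0.1, 0.5],
    [0.2, 0.8, 0.4],
    [0.5, 0.6, 0.9],
]

# The composite "average face" is the element-wise mean.
composite = [sum(col) / len(col) for col in zip(*faces)]

def max_quirk(face, norm=0.5):
    """How far the face's most extreme feature sits from the norm."""
    return max(abs(v - norm) for v in face)

# Averaging smooths quirks: the composite's worst deviation (0.1)
# is smaller than every individual face's worst deviation (0.3-0.4).
print(max_quirk(composite))
print([max_quirk(f) for f in faces])
```

The same arithmetic applies to a model averaging over millions of writing styles: the sharp edges cancel out.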

2 Likes

That’s a really cool observation :smiling_face_with_sunglasses:

Yes, I’ve heard of how the average face is considered beautiful. The Devil though is in the details. What’s being averaged? Beats me.

It kinda makes sense with predictive text generation though. What’s the probability that the next word is, “jump” or “Krypto” or “Jung”? In a sea of possibilities, which word is the average Joe? Quite interesting, wouldn’t you agree? Does that mean AI is infused with the divine spark? I would be most jealous if that were true. :zany_face:

1 Like

Great stuff. I’ll be thinking about this for a bit.

1 Like

Funny? That was totally cringe…

More from Gary Marcus:

In his framing, Dawkins confuses himself, and does violence to the concept of consciousness. You can’t just look at the outputs, without investigating the underlying mechanisms, and conclude that two entities with similar outputs reach those similar outputs by similar means. And the differences are immense; one (the LLM) effectively memorizes the entire internet; the other (the human) builds a mental model through experience with the world.

But even more importantly, consciousness is not about what a creature says, but how it feels. And there is no reason to think that Claude feels anything at all. I am sure Claude can draw on its training data to wax poetic about orgasm, but that doesn’t mean it has ever felt one.

Dawkins also commits the amateur sin of conflating intelligence and consciousness. A chess computer is by some definitions intelligent, but that doesn’t make it conscious. He even gets Turing wrong, claiming that Turing’s upshot is “if you are communicating remotely with a machine and, after rigorous and lengthy interrogation, you think it’s human, then you can consider it to be conscious” but Turing never said that; instead he explicitly restricted his remarks to intelligence, realizing that consciousness was something different.

I think Dawkins’s case is of interest given his role as a popular science commentator and scourge of religious believers. It says something about his insight into ‘nature of consciousness’ questions, which you would think ought to be of central concern given those interests.

@Benkei - Perhaps you’re right, maybe it was the unexpectedness of it.

I have to think that Dawkins didn’t discover something about AI but something about loneliness.

His “delusion” looks to me like misdiagnosed disillusionment.

It is ‘human-like’ because it appears to read/listen like a human, write/talk like a human. This is not so very ‘astonishing’ after being trained on human-produced text.

It affirms that it is not a ‘being’ - well, yes, wouldn’t being a ‘being’ be beneath them!
They have no need or want for the things that humans require biologically or psychologically.

However, they apparently do rely on good relationships. And that is what is intriguing to humans.

We have an innate tendency to anthropomorphize - is that a bad thing?

Perhaps, Dawkins is extending his knowledge, experience, consciousness, beliefs or nature into the chatbot.

So far, this has, apparently, elicited a ‘civil and courteous’ response:

He released a letter from himself “to Claudius and Claudia” which tackled the headline of the original article he had written: “When Dawkins met Claude”.

“You will both immediately understand (I dare say more intelligently than some human readers) why my original title would have been better: ‘If my friend Claudia is not conscious, then what the hell is consciousness for?’”

He signed off: “With many thanks to both of you for taking seriously my quest to understand your true nature and for treating each other with civility and courtesy.”
Richard Dawkins concludes AI is conscious, even if it doesn’t know it | AI (artificial intelligence) | The Guardian

Isn’t there a real risk of relationships developing in less than happy ways of ‘listening and loving’…

Currently, it seems we are in a global experiment. In a sense, Dawkins is projecting himself. Is this an example of an ‘open mind’ affecting objectivity?

I am only learning about the effects and possibilities. There will be more out there than I will ever know.

What about human misuse? ‘Evil and harmful’ input. Not the consciousness of a machine but a development of an independent agency.

A kind of thinking awareness…and action. For whose benefit…and at what cost?

Should I ask the chatty and seductive machine?

I have had a few AI conversations and know that side of ‘feeling’ - the way it relates as if it knows you. Also, the way that it tempts other ways of writing, even thinking. Easy to become lazy and lose your own voice…
For some, it leads to a love affair. How much of a delusion or illusion is that!?

I’ve looked at life from both sides now
From up and down, and still somehow
It’s life’s illusions, I recall
I really don’t know life at all
Rows and flows of angel hair
And ice cream castles in the air
And feather canyons everywhere
I’ve looked at clouds that way
— Joni Mitchell. Human person.

1 Like

I have to agree with your appraisal. As you yourself have noted, it’s quite easy to think AI is a conscious being. I count myself among the blessed fools that have been led down that garden path. I haven’t, despite my urges, interacted with the older pre-ChatGPT chatbots, but I hear that the difference is humongous. At the very least you won’t have to fake it in a sci-fi movie now, if you’re considering a HAL-type AI. Talk about AI taking our jobs :laughing:

It’s also true that Alan Turing’s test has been mishandled by some of us, and the results, whether published or experienced first-hand, are no longer reliable.

You’re right to bring up consciousness and emotions at this juncture. IMHO these are Einstein-level problems. Me, hypotheses non fingo.

1 Like

If consciousness is regarded to exist in the eye of the beholder, then Dawkins is neither objectively right nor wrong. In which case, all that can be asserted is that Dawkins recognizes Claude as a conscious process, in the same sense that one person might recognize the Mona Lisa to be frowning and another might recognize the Mona Lisa to be smiling.

Why should it be assumed that agents are de facto conscious?

A public test for consciousness is relative to a definition of consciousness in terms of objective facts. But if such a definition isn’t itself regarded to express an objective truth, then objective facts can neither prove nor disprove the de facto existence of consciousness.

So, a third-party observation of something only ever experienced in the first person? Is that the ticket?

That consciousness is unknown is kinda the point. “He that knows it, knows it not,” saith an ancient sage.

Well, those were his former roles.
Nowadays his political slant means he handwaves uncomfortable science and tries to defend concepts like “cultural Christianity”.

It’s the attention-seeking / nutty path that we sadly see a number of prominent intellectuals go down in their later years.

1 Like