AI and Consciousness – WTF is going on?

References:

Ex Machina explores the mind-bending question: ‘how do we assess whether something else has consciousness?’ 

Here is Alex Garland explaining that premise.

Is AI conscious? Will AI become conscious? When will AI become conscious?

These are the big questions many are grappling with.

At this point, I think it’s fair to say that these aren’t very good questions.

Largely because we don’t really know what consciousness is.

It might well be a word we’re using to describe something (an experience?) that simply can’t be explained in words.

A more productive question is – What might AI teach us about consciousness?

There are Monsters in your LLM 

This was a recent conversation with Murray Shanahan on the Machine Learning Street Talk Podcast.

Murray is a prominent AI researcher and professor of cognitive robotics at Imperial College London; he also works with Google DeepMind.

He’s well known for his work on consciousness and AI. 

And, interestingly, was scientific advisor for the film ‘Ex Machina’. 

Stuff that’s good to know

Claude and Claude 3.5 Sonnet

An LLM AI assistant created by Anthropic. Claude 3.5 Sonnet is the newest version of Claude; by many benchmarks, it’s the best on the market. In the previous essay, I shared some interesting Anthropic/Claude updates.

The Turing test

Let’s say you chat with two hidden participants; one participant is human, the other is a computer. If you can’t reliably tell the computer from the human, the computer passes the Turing test. Most agree that (a) AI has passed the Turing test, and (b) while it’s a practical way to judge AI in some respects, it doesn’t really capture the complex nature of intelligence.
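The setup above can be sketched as a toy simulation (a minimal, hypothetical sketch; the judge function and placeholder transcripts are my own invention, not anything from the podcast): a judge reads two transcripts, guesses which one came from the machine, and we measure whether the judge beats chance over many trials.

```python
import random

def run_turing_trials(judge, n_trials=1000, seed=0):
    """Run repeated imitation-game trials.

    `judge` takes two transcripts (a, b) and returns 'a' or 'b' as its
    guess for which one came from the machine. Returns the judge's
    accuracy; an accuracy near 0.5 (chance) means the machine "passes".
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        # Hide the machine behind a random position each trial.
        machine_first = rng.random() < 0.5
        if machine_first:
            a, b = "machine text", "human text"  # placeholder transcripts
        else:
            a, b = "human text", "machine text"
        guess = judge(a, b)
        actual = "a" if machine_first else "b"
        correct += guess == actual
    return correct / n_trials

# A judge that can't tell the transcripts apart hovers around 0.5,
# so under this toy criterion the machine passes:
always_a = lambda a, b: "a"
```

The point of the sketch is the criterion, not the transcripts: "passing" is defined statistically, as the judge's inability to do better than a coin flip.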

The hard problem of consciousness

The difficulty of explaining how and why we have subjective experience – “hard” because even if we fully understood all the physical processes in the brain, that still wouldn’t explain why we have conscious experiences. It’s the puzzle of how physical stuff (like our brains, neurons, or nervous system) gives rise to subjective experience.

As an aside – you might be familiar with Thomas Nagel’s ‘What Is It Like to Be a Bat?’ essay and thought experiment: even if we could know everything about bat physiology and behavior, we still couldn’t truly know what it’s like to experience the world as a bat.

Embodied Interaction with the world 

Murray, and many other prominent AI researchers, often emphasize the idea that human intelligence is very different from ‘artificial intelligence’, largely because AI lacks the direct experience of, and interaction with, the world that we humans have.

Google’s DeepMind and AlphaGo

DeepMind is an AI research company, and AlphaGo is its program that beat human champions in the game of Go.

The Big Ideas

  1. Conversations with Claude 🧠

Murray had a 43,000-word conversation with Claude about the nature of reality and consciousness.

  2. Skepticism about AI Consciousness 🤖

He expresses skepticism when AI systems use terms related to consciousness, but finds the philosophical exploration interesting.

  3. Skepticism about Consciousness as a Concept Generally 🧩

He views consciousness as a concept we invent to describe the world, discussing how we apply it differently to AI like AlphaGo versus animals.

  4. Embodied Interaction and Cognition 🏃‍♂️

He explains why artificial cognition is difficult to recreate: our embodied interaction with the world allows us to understand causal microstructure (foundational common sense).

  5. Insightful Clever Reference to Ex Machina 🎥

He praises the clever lines in the film Ex Machina about testing AI consciousness, finding them insightful when he first read the script.

Why is this important? 

As we build and experience AI with increasing levels of ‘intelligence’, we are likely to reveal more and more about the nature of consciousness.

In attempting (and failing?) to answer the question – ‘is AI conscious?’ we might reveal something more interesting about the true nature of subjective experience.

What might AI teach us about consciousness?

Some related references.

‘Large Language Models and The Reverse Turing Test’ paper 

A recent-ish paper by Terrence J. Sejnowski.

The paper suggests that LLMs may be mirroring the intelligence and expectations of the humans interacting with them, rather than demonstrating human-like intelligence or consciousness. This mirroring may cause us to overlook some truly novel, potentially emergent capabilities beyond the confines of human-like intelligence and consciousness.

Cognitive scientist Donald Hoffman – famous for his (rather convincing) ‘reality is an illusion’ pitch – suggests that consciousness may be fundamental.

Neuroscientist Anil Seth suggests we are already living in a world where we will be unable to resist feeling as if AI is conscious.