The AI Emergence Debate (is not) for Dummies

The following story is taken directly from Terrence J. Sejnowski’s paper, Large Language Models and the Reverse Turing Test.

One of my favorite stories is about a chance encounter on the backroads of rural America when a curious driver came upon a sign: “TALKING DOG FOR SALE.” The owner took him to the backyard and left him with an old Border Collie. The dog looked up and said:

“Woof. Woof. Hi, I’m Carl, pleased to meet you.”

The driver was stunned. “Where did you learn how to talk?”

“Language school,” said Carl, “I was in a top secret language program with the CIA. They taught me three languages: How can I help you? как я могу вам помочь? 我怎么帮你?”

“That’s incredible,” said the driver, “What was your job with the CIA?”

“I was a field operative and the CIA flew me around the world. I sat in a corner and eavesdropped on conversations between foreign agents and diplomats, who never suspected I could understand what they were saying, and reported back to the CIA what I overheard.”

“You were a spy for the CIA?” said the driver, increasingly astonished.

“When I retired, I received the Distinguished Intelligence Cross, the highest honor awarded by the CIA, and honorary citizenship for extraordinary services rendered to my country.”

The driver was a little shaken by this encounter and asked the owner how much he wanted for the dog.

“You can have the dog for $10.”

“I can’t believe you are asking so little for such an amazing dog.”

“Did you really believe all that bullshit about the CIA? Carl never left the farm.”

AI Learns to talk (like a human)

In 2020, OpenAI’s GPT-3 took the world by storm.

How?

In terms of data, processing and compute, the model was far larger in scale than its predecessors, but that scale was just the technology that enabled the real innovation, and the real reason for such widespread attention.

Unprecedented human-ness.

It could convincingly simulate and engage in human conversation.

I distinctly remember sitting in a cafe and having a deeply personal conversation with GPT-3 about life, death, and all the existential fears in between.

A conversation that, up until that point, was only possible with another biological human meat sack.

Since the launch of GPT-3, and of the many more powerful and impressive models that followed, an absolute goat rodeo of debate contesting the nature of this new ‘intelligence’ has emerged.

Researchers and founders from the fledgling companies, the AI and machine learning OGs, philosophers, linguists, biologists and ‘experts’ from every other imaginable field have weighed in on the debate.

Here are some of the questions you’ll hear thrown about.

  • Are the abilities of large language models emergent?
  • Can the current models reason?
  • Are the signs of intelligence genuine, or a mirage?
  • Do current models really understand human language?
  • Do they understand meaning and what they are saying?
  • Do current LLMs exhibit genuine intelligence comparable to humans?
  • Can AI systems generalise beyond their training data?
  • Is AI sentient and suffering? 

Here are some of the interesting papers and discussions shaping the debate:

The Debate Over Understanding in AI’s Large Language Models, by Melanie Mitchell and David Krakauer

Are Emergent Abilities of Large Language Models a Mirage?, by Rylan Schaeffer

Predictability and Surprise in Large Generative Models, by Deep Ganguli et al.

And here’s another piece from just yesterday by Dr Fei-Fei Li, featured in Time.

The Emergence Debate

The Emergence Debate can be confusing; at least it was for me.

Some of this research examines specifically whether ‘emergent abilities’ are, or are not, predictable as models scale up in size.

Other research examines whether the models display emergent characteristics at all.

And judging by some of the comments and related Reddit posts, some people have accidentally conflated the two.

Regardless, the reality is that these questions are really, really hard to answer.

Because – (1) words like ‘emergence’ and ‘intelligence’, or ‘understand’ and ‘reason’, are slippery. They are slippery words we apply to complex phenomena that we really just don’t understand. And because (2), these slippery words often can’t (and shouldn’t) be applied to complex things as if they were binary categories.

For this, I think developmental biologist Michael Levin is the best person to articulate it.

In an ant colony, each ant follows simple rules: follow pheromone trails, respond to smells, avoid obstacles. Together, these simple actions lead to complex behaviours, like finding food efficiently and building nests.

In bird flocks, each bird follows basic rules of alignment, separation, and cohesion, which leads to the emergence of intricate and coordinated flight patterns without any central control.

Minor temperature changes in water can lead to a drastic phase change from liquid to solid.

Individual buying and selling of goods and services creates complex economic cycles.

Cellular automata like Conway’s Game of Life show emergence and complexity arising in the simplest of computational systems.
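To make that concrete, here’s a minimal sketch of Conway’s Game of Life (a toy illustration in Python, assuming only numpy is installed). Each cell follows two local rules, yet gliders, oscillators and stable structures emerge from a random soup:

```python
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    # Count the eight neighbours of every cell by summing shifted copies
    # of the grid (edges wrap around for simplicity).
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell with
    # exactly 3 neighbours becomes alive. That is the whole rulebook.
    return (grid & np.isin(neighbours, (2, 3))) | (~grid & (neighbours == 3))

grid = np.random.rand(32, 32) < 0.3   # a random soup of live and dead cells
for _ in range(100):
    grid = step(grid)                 # gliders and oscillators emerge by themselves
```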

And this type of emergent behaviour seems to carry right through to neural networks and machine learning.

Oh, the dreaded laundry

When I ask the most recent version of ChatGPT to ‘write me a humorous poem about a mundane task, like doing the laundry’, this is the result.

The creation of this poem demonstrates ability beyond simple creativity, rhyme and poetic structure.

It mightn’t be immediately apparent, but the fact an ‘artificial intelligence’ can almost instantly create a poem demonstrating a deep understanding of humour, satire, emotion, irony, and the human struggle is fucking unbelievable.

Does the ability to create a coherent and emotionally resonant poem arise naturally as the model synthesises information and forms its own understanding of human qualities?

Or is the apparent human-like creativity just an impressive ‘mirage’, produced through vast exposure to huge amounts of text and relevant references?

If you met a talking dog, capable of telling you a captivating story about being a spy in the CIA, which question are you most likely asking first?

Was this dog really in the CIA? Or –

How the fuck did this dog learn to speak?

(after first asking yourself ‘what have I eaten?’ ** thanks Tim)

Deciphering whether the ability to compose the Laundry Poem was truly emergent or explainably designed is important.

It will help guide how these models are trained and improved moving forward, and (more importantly in the short term), it will inform the AI safety debate. If the behaviours demonstrated by newer models are unpredictable, there are some obvious concerns.

For what it’s worth, my guess is that the answer is probably somewhere in between. 

Somewhere on the diverse, non-linear spectrum between natural emergence and intelligent design.

And rather than declaring a clear winner, we’re likely going to need to revisit our understanding of some of these binary categories, like intelligence and emergence.

All of this is far less interesting than the fact that an “Artificial” intelligence can write a deeply human poem. 

The fact that we are having these debates at all is so bizarre.

So, what is happening?

Here’s another quote from Sejnowski’s Reverse Turing Test paper.

“Something is beginning to happen that was not expected even a few years ago. A threshold was reached, as if a space alien suddenly appeared that could communicate with us in an eerily human way. Only one thing is clear – LLMs are not human. But they are superhuman in their ability to extract information from the world’s database of text. Some aspects of their behavior appear to be intelligent, but if it’s not human intelligence, what is the nature of their intelligence?”

This idea that a neural network can learn the meaning and context behind words is not a new one.

Even the most recent research unpacking current transformer architectures shows that the goal of a transformer is, quite literally, to progressively refine a mapping of context and meaning between words.

We’ll save the nitty gritty details of current LLMs and ‘Transformer Architecture’ for a future essay.

But, in very simple terms, current models like GPT predict the next word in a sentence by assigning a mathematical probability to every possible next word.
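Here’s a toy illustration of that idea (the vocabulary and scores below are invented, nothing like GPT’s real ones): the model produces a score for every token it knows, and a softmax turns those scores into a probability distribution over the next token.

```python
import math

# Made-up scores (logits) for a tiny made-up vocabulary; a real model
# scores tens of thousands of tokens using billions of learned parameters.
logits = {"dog": 3.1, "banana": 1.2, "the": 0.4, "rockstar": 2.5}

# Softmax: exponentiate each score and normalise so they sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {token: math.exp(v) / total for token, v in logits.items()}

print(probs)                      # a probability for every candidate token
print(max(probs, key=probs.get))  # the single most likely next token: "dog"
```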

When words and sentences are fed into the machine, they are broken into tokens (words, numbers, symbols, etc.). Each token is associated with an adjustable vector, which you can think of as coordinates in a very high-dimensional space. We operate in a limited three-dimensional space, so imagine a vast multi-dimensional map of many parallel universes.

Let’s say you ask ChatGPT:

“Tell me a funny story about a talking banana who wants to be a rockstar but takes too many drugs, gets way cooked, and dies while performing at a festival.”

The model breaks the request down into tokens: “tell,” “me,” “a,” “funny,” “story,” “about,” “talking,” “banana,” …. These tokens are mapped into that high-dimensional vector space.
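In code, that mapping step looks something like the sketch below (hugely simplified: real models use subword tokenizers and embeddings with thousands of dimensions, whereas the vocabulary and 4-dimensional vectors here are invented for illustration):

```python
import numpy as np

# A toy vocabulary: every known token gets an integer ID.
vocab = {"tell": 0, "me": 1, "a": 2, "funny": 3, "story": 4, "talking": 5, "banana": 6}

# One adjustable vector (the "coordinates") per token, tuned during training.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), 4))

tokens = ["tell", "me", "a", "funny", "story"]
vectors = embedding_table[[vocab[t] for t in tokens]]  # shape: (5 tokens, 4 dims)
```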

The vectors (coordinates) share information back and forth through billions of neuron-like, adjustable connections established during training. This process allows the model to learn intricate details, capturing both contextual and semantic meaning between words.
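The core operation doing that sharing is attention. Here’s a bare-bones sketch of scaled dot-product attention (the query/key/value projections below are random placeholders rather than trained weights, so the output is only meaningful as an illustration of the mechanism):

```python
import numpy as np

def attention(x, W_q, W_k, W_v):
    # Project each token vector into a query, a key and a value.
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    # Score how relevant every token is to every other token.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax turns the scores into attention weights...
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    # ...and each token's new vector becomes a weighted blend of all the values.
    return weights @ v

d = 4
rng = np.random.default_rng(1)
x = rng.normal(size=(5, d))  # five token vectors, as in the sketch above
out = attention(x, *(rng.normal(size=(d, d)) for _ in range(3)))
```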

The pre-training process, combined with the new coordinates in the vector space, allows the model to understand not only sentence structure, syntax and grammar, but (somehow) also moral lessons, absurdity and goal orientation.

“Talking” and “banana” are grouped together to form a humorous protagonist in a story involving moral lessons, absurdity, goal orientation and the tragic cautionary tale of a rockstar on a bender.

“Once there was a talking banana named Barry who dreamed of being a rockstar. Barry strummed his peel-guitar and sang fruity tunes. Fame got to his head, and he started taking too many drugs. One day, Barry was so high he thought he could fly. He tried to stage-dive at a concert but ended up slipping on his own peel.”

We’ve glossed over 99% of the technical details here, but you get the gist.

There isn’t some pre-programmed instruction manual on how the model should produce an output, and it’s not simply regurgitating information from its training data.

There is a very flexible structure, with billions of adjustable parameters, allowing the model to basically teach itself how to come up with an accurate response.
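A caricature of that self-teaching loop, at toy scale (a single made-up context vector and a 7-token vocabulary; real training nudges billions of parameters across trillions of tokens):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 7))         # toy parameters: 4-dim context -> 7-token vocab
x = rng.normal(size=4)              # one context vector
target = 6                          # the token that actually came next ("banana")

for _ in range(100):
    logits = x @ W
    probs = np.exp(logits) / np.exp(logits).sum()
    grad_logits = probs.copy()
    grad_logits[target] -= 1.0      # gradient of cross-entropy loss w.r.t. the logits
    W -= 0.1 * np.outer(x, grad_logits)  # nudge the parameters downhill

# After the loop, the model assigns "banana" a much higher probability.
```

No rule in that loop says how to respond; the parameters simply drift toward whatever makes the observed data more likely.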

Expanding our understanding of Intelligence & Interpretability

In the last section, I used the word ‘understand’, though rather hesitantly.

We could replace ‘understand’ with ‘compute’, but I don’t think it really matters all that much at this point.

Does the model truly ‘understand’?

Are the behaviours emergent?

Are they intelligent?

Well, probably not in a way that is identical to a human.

But it’s certainly doing some form of understanding, displaying emergent behaviours and demonstrating intelligence, in many cases better than a human could.

And it’s seeming increasingly likely that these models will be able to generalise across multiple fields.

Whether or not these models will achieve a human-level general intelligence without direct contact with the biological world is an interesting question. 

We don’t fully understand how we make sense of the world, how complex behaviours emerge from simple biological systems or how intelligence works.

All of which leaves us with more important and profound questions, like ‘what is intelligence?’, ‘what is emergence?’ and ‘what can we learn about ourselves through studying these systems?’

There’s a tonne of interesting research happening.

On Wednesday, Anthropic dropped their most recent work, ‘Mapping the Mind of a Large Language Model’.

The research reveals how millions of human-like concepts can be mapped as patterns in the neural networks of Large Language Models.

I’ll unpack the report in another essay.

In the meantime, I’ll go on record: I believe that by unravelling the inner workings of Artificial Intelligence, we’ll uncover some incredible and confronting truths about the nature of intelligence and emergence.