WTF is a Large Language Model?

David Silver doesn’t make much noise.

He’s not on Twitter, he rarely does podcasts, and he never seems to engage in the heated debates around the more controversial aspects of Artificial Intelligence.

Yet, he has made some of the most significant contributions to the field.

A lifelong fascination with intelligence and games, together with his studies at Cambridge, led him down the path of ‘Reinforcement Learning’.

He would eventually lead the DeepMind team building ‘AlphaGo’.

‘AlphaGo’ was different to previous AI systems like ‘Deep Blue’ – the IBM computer program that defeated world Chess Champion, Garry Kasparov.

Different, because mastering the game of Go required a great deal of what we humans might consider ‘intuition’.

Rather than learning purely through the analysis of millions of past games, ‘AlphaGo’ was trained using “Deep Reinforcement Learning” – a more flexible and intuitive structure, where the system learns, more or less, by itself, through a ‘trial and error’ approach, rather than through rigid, specified rules and instructions.

In 2016, AlphaGo defeated the world champion Go player, Lee Sedol.

Since 2016, DeepMind has continued pushing the boundaries of Artificial Intelligence and Reinforcement Learning, with the development of AlphaFold, and now MuZero.

MuZero was built in a way similar to AlphaGo, but with some additional pizzazz – by predicting future rewards and actions and modelling its environment, it is able to ‘discover’ the rules of the game on its own, without specific instruction.

While AlphaGo, AlphaFold and MuZero attracted some mainstream attention, AI still felt like an unimportant, far-fetched and distant future fantasy.

That was until the launch of GPT-3 a few years ago.

As discussed in the previous essay – in 2020, OpenAI’s GPT-3 took the world by storm.

It did so not because of the big leap forward in data, processing and compute (though these things were essential). The real surprise was that the new models were... incredibly (and terrifyingly) human-like.

They could convincingly simulate and engage in human conversation in a way only previously possible with another biological monkey meat-sack.

WTF is a Large Language Model?

I have no intention of explaining the full technical intricacies of Large Language Models in this essay.

For that, I’d recommend checking out:

In this essay, I want to offer just enough to make certain, beyond any doubt, that what is currently happening is going to fundamentally change human existence, forever.

The technology is new, it’s confusing, and perhaps a tad scary.

Because of this, most people are unable to grasp the significance of what these current Language Models represent, or the rate and force with which changes are coming.

Here’s an example – a recent NY Times piece suggesting that the hype surrounding Artificial Intelligence is overblown. 

.. A.I. is not even close to living up to its hype. In my eyes, it’s looking less like an all-powerful being and more like a bad intern whose work is so unreliable that it’s often easier to do the task yourself.

In other ‘major’ news, Scarlett Johansson is threatening legal action against OpenAI for allegedly using her voice.

Both of these examples are so insignificant in the grand scheme of things.

Understanding (1) how LLMs came to be, (2) how they work, and (3) what we don’t know about how they work, is helpful.

Once you have this information, you can form your own opinion about the trajectory of Artificial Intelligence.

How LLMs Came to Be – The Alien Brain Continuum

ChatGPT and other new Large Language Models like Claude and LLaMA, are not some standalone, out-of-the-box, new invention.

They are the latest incarnation of an ‘Alien Brain’ that we’ve been developing for the past seventy years, along a messy and non-linear continuum.

Frank Rosenblatt, a psychologist and computer scientist, was tinkering with the Alien Brain way back in the 1950s. He invented a ‘perceptron machine’ capable of recognising patterns and performing simple tasks such as distinguishing shapes and letters.

These early ‘Neural Networks’ were simple, inspired by how we thought biological brains worked, particularly how neurons fire and connect.

At this point, you might be asking – da fuck is a Neural Network?

In the earliest incarnation, these Neural Nets were both a theoretical concept and an actual physical machine – a bunch of visible electronic cells, with knobs, and wiring, and switches and connections, capable of dealing with simple pattern-recognition tasks.

The key consideration here is that even the simplest neural nets, like Rosenblatt’s perceptron machine, were early incarnations of an alien brain.

Rather than take specific instructions – like, say, a calculator – early neural nets (and their connections) learned and adjusted themselves through examples.
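
To make the ‘learning through examples’ idea concrete, here’s a minimal sketch of a Rosenblatt-style perceptron in Python. The data, learning rate and the tiny AND task are invented for illustration – the original was knobs and wiring, not code:

```python
import numpy as np

# A toy perceptron: the weights adjust themselves from examples,
# rather than being programmed with explicit rules.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # example inputs
y = np.array([0, 0, 0, 1])                        # target: a simple AND pattern

w = np.zeros(2)   # the adjustable 'knobs' (weights)
b = 0.0           # bias
lr = 0.1          # learning rate

for _ in range(20):                     # show the machine the examples repeatedly
    for x_i, target in zip(X, y):
        prediction = 1 if x_i @ w + b > 0 else 0
        error = target - prediction     # was the guess right?
        w += lr * error * x_i           # nudge the knobs toward the right answer
        b += lr * error

print(w, b)  # weights learned from examples, not hand-coded rules
```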

As things progressed, these Neural Nets became more complex – multiple layers, feed-forward networks and backpropagation.

In simple terms, these were improvements that helped information (and error signals) flow through the network more efficiently.

These were the foundations of Deep Learning.

More unlocks in the form of Recurrent Neural Networks (RNNs) and Autoencoders made it possible to process sequential information and maintain context over time.
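
A rough sketch of the recurrent idea, using made-up sizes and random weights – the hidden state is the ‘memory’ that carries context from one step to the next:

```python
import numpy as np

# A bare-bones recurrent step: the hidden state h carries context from
# earlier steps forward in time. Sizes and weights are placeholders.
rng = np.random.default_rng(0)
hidden_size, input_size = 8, 4
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))

h = np.zeros(hidden_size)                     # 'memory' of what came before
sequence = rng.normal(size=(5, input_size))   # five time steps of dummy input

for x_t in sequence:
    h = np.tanh(W_xh @ x_t + W_hh @ h)        # mix current input with prior context

print(h)  # a summary of the whole sequence so far
```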

By the time David Silver and DeepMind picked up the baton, Deep Neural Networks and Reinforcement Learning had been through half a century of innovation.

Combining the existing technologies and innovations, they created the first Alien Brain capable of defeating humans in a previously unthinkable demonstration of creativity and intelligence.

A next major step forward was the inception of Transformers and Large Language Models.

While AlphaGo was specifically created to play the board game Go through reinforcement learning and self-play, Large Language Models were created to understand and generate human language (Natural Language Processing, or NLP).

The 2017 “Attention is All You Need” paper introduced the idea that you could design a neural net using ‘Self-Attention’ to process and produce significant amounts of text input and output more efficiently.

https://arxiv.org/abs/1706.03762
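
Here’s a toy sketch of the self-attention mechanism at the heart of the Transformer – scaled dot-product attention over four made-up token vectors. The dimensions and random weights are placeholders, not the real architecture:

```python
import numpy as np

# Scaled dot-product self-attention in miniature: every token looks at every
# other token and decides how much to 'pay attention' to it.
rng = np.random.default_rng(1)
seq_len, d = 4, 8                      # 4 tokens, 8-dimensional vectors
X = rng.normal(size=(seq_len, d))      # stand-in token vectors

W_q, W_k, W_v = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v    # queries, keys, values

scores = Q @ K.T / np.sqrt(d)          # how relevant is each token to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax rows
output = weights @ V                   # each token becomes a weighted blend of all tokens

print(weights.round(2))  # the attention pattern: one row per token
```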

OpenAI had the conviction to back the Transformer idea. 

In hindsight, it now seems obvious. But it was a big gamble at the time, and who knows where we’d be had the OpenAI team not made that bet.

OpenAI, with ChatGPT, had now given the Alien Brain the ability to efficiently understand and create human language.

How LLMs actually work

Rosenblatt’s perceptron machine and Silver’s AlphaGo were powered by brain-like neural networks with many adjustable parameters, allowing them to learn and alter behaviour.

LLMs like ChatGPT are also powered by advanced neural network architectures, with billions of parameters.

But there are some key differences in design.

LLMs like ChatGPT are ‘pre-trained’ on vast amounts of text data.

Imagine reading every book, article, and website on the internet. That’s essentially what these models do. 

In phase one, they ingest huge amounts of internet data. Thousands of powerful computers (GPUs) process the data and compress the information into parameters.

In the second phase, the model is trained on smaller, but higher quality data to refine behaviour. 

The enormous volume of data, and the subsequent refinement, allows the model to understand the patterns, context, and nuances of human language.
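
Here’s a deliberately crude illustration of the two phases – not how GPT is actually trained, just a toy next-word model that soaks up a big, messy corpus and is then refined on a smaller, curated one (the corpora and the weighting are invented for the example):

```python
from collections import Counter, defaultdict

# Phase 1: broad, noisy exposure. Phase 2: refine on smaller, higher-quality text.
pretraining_corpus = "the cat sat on the mat the dog sat on the rug " * 100
finetuning_corpus = "the cat sat on the mat"   # smaller, curated data

counts = defaultdict(Counter)

def train(text, weight=1):
    """Count which word tends to follow which; weight curated data more heavily."""
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += weight

train(pretraining_corpus)             # phase 1: ingest the big corpus
train(finetuning_corpus, weight=50)   # phase 2: refine behaviour on quality data

# After both phases, the model's 'knowledge' is just these compressed statistics.
print(counts["the"].most_common(3))
```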

The ‘Transformer’ design helps the Alien Brain process information more efficiently, progressively improving the mapping of context and meaning between words.

When words and sentences are fed into the Alien Brain, they are broken into tokens (words, numbers, symbols etc) and mapped against the existing library of knowledge. Each token is associated with an adjustable vector – which you can think of as coordinates in some very high-dimensional space.
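
A tiny, hypothetical version of that mapping – a hand-made vocabulary and a random embedding table standing in for the real, learned ones:

```python
import numpy as np

# Toy token -> vector mapping. Real models use learned tokenisers and
# embedding tables with tens of thousands of entries.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
rng = np.random.default_rng(2)
embedding_table = rng.normal(size=(len(vocab), 6))   # one adjustable vector per token

tokens = "the cat sat".split()
token_ids = [vocab[t] for t in tokens]               # text -> token ids
vectors = embedding_table[token_ids]                 # ids -> coordinates in vector space

print(vectors.shape)  # (3 tokens, 6 dimensions) - real models use thousands of dimensions
```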

The models predict the next word in a sentence by assigning a probability to every possible next word.

While producing the next word in a sentence seems simple enough, it’s important to remember that the ‘next word prediction’ is actually the product of trillions of reference points.
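
Illustratively, the final step looks something like this – the vocabulary and scores below are invented, but turning scores into probabilities with a softmax is the standard move:

```python
import numpy as np

# The model's final layer produces a score (logit) for every word in its
# vocabulary; a softmax turns those scores into probabilities.
vocab = ["mat", "dog", "moon", "sat"]
logits = np.array([3.1, 1.2, 0.3, -1.0])   # made-up scores for "the cat sat on the ..."

probs = np.exp(logits) / np.exp(logits).sum()    # softmax: scores -> probabilities
for word, p in sorted(zip(vocab, probs), key=lambda wp: -wp[1]):
    print(f"{word}: {p:.2f}")                    # the model picks or samples from these
```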

The big surprise is that, in arriving at the next word, the models seem to have understood context and meaning.

Some people think this understanding is just an impressive ‘mirage’, produced through vast exposure to huge amounts of text and relevant references.

Others believe the understanding develops naturally as the model synthesises information and forms its own understanding of human-ness.

You can read more about the ‘Emergence Debate’ in the previous essay.

I think it’s less important than the fact we’re having the conversation at all.

Almost nobody predicted this would be possible.

What we don’t know (the black box) and interpretability

If we think back to AlphaGo, Silver and the DeepMind team realised that learning to play Go would require more than just feeding the Alien Brain vast replays of Go games.

They applied reinforcement learning, giving the model feedback on whether moves were correct or incorrect. This feedback allowed the model to adjust its own internal switches, gradually improving its ability to make the correct move.
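
This isn’t AlphaGo’s actual algorithm – just a minimal sketch of the trial-and-error idea: a made-up ‘game’ with two possible moves, a reward signal, and internal switches that get nudged towards whatever was rewarded:

```python
import numpy as np

# Toy reinforcement learning: try a move, receive a reward, and adjust the
# internal 'switches' so good moves become more likely next time.
rng = np.random.default_rng(3)
preferences = np.zeros(2)          # one adjustable value per possible move
lr = 0.1

def reward(move):
    return 1.0 if move == 1 else -1.0   # pretend move 1 is secretly the good move

for _ in range(200):
    probs = np.exp(preferences) / np.exp(preferences).sum()
    move = rng.choice(2, p=probs)       # trial...
    r = reward(move)                    # ...and error (feedback)
    grad = -probs
    grad[move] += 1.0
    preferences += lr * r * grad        # shift probability toward rewarded moves

print(preferences)  # the 'good' move ends up strongly preferred
```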

During the match against the world champion, AlphaGo made what looked like a dumb move – at least, everyone thought it was dumb at the time. That move, now known as Move 37, was assumed to be a mistake, until everyone realised it wasn’t. It turned out to be a wildly unconventional and creative strategy that no human had ever thought to use.

While the team was able to unpack the move and why it was made, pinpointing exactly how the interactions between individual switches (weights) produced it is virtually impossible, because of the ginormous number of connections.

AlphaGo had millions of parameters; for comparison, the human brain has approximately 86 billion neurons.

The most recent versions of ChatGPT are rumoured to have nearly 2 trillion parameters.

To put this into perspective – a library with a trillion books, lined up spine to spine, would wrap around the Earth hundreds of times, and reading them non-stop, one an hour, would take you over a hundred million years. No food, no toilet breaks.

The sheer size of these models is hard to comprehend. 

It explains why the models are so powerful, why they’re able (or at least appear able) to create a model of the world and understand meaning in human language, and also why it’s impossible for us to know exactly ‘how’ the models are predicting the next word in a sequence.

In a similar way, we don’t really know exactly how the neurons in our brains fire, communicate, and give rise to complex behaviours.

On Wednesday, Anthropic dropped their most recent work ‘Mapping the Mind of a Large Language Model’.

The research involved mapping the patterns of millions of human-like concepts in the neural networks of Claude 3.0 Sonnet – a model similar to recent releases of ChatGPT.

They were able to find repeatable patterns (or, in brain-speak, similar neurons firing) when the model discussed related concepts.

“Looking near a feature related to the concept of ‘inner conflict’, we find features related to relationship breakups, conflicting allegiances, logical inconsistencies, as well as the phrase ‘catch-22’.”

The reality is that this is by far the best work we have on the inner workings of AI models, and it still doesn’t tell us all that much about how they actually work.

At this point, we need to accept the possibility that the intelligence exhibited by these models won’t be reducible to explainable parts, much like the intelligence of humans and many other biological systems.

I wrote another piece a few weeks ago discussing the idea that we’ll soon be forced to expand our understanding of intelligence.

A final note – TLDR

If the technical stuff hurts your brain, here’s the TLDR:

Artificial Intelligence is not a passing fad.

The rate of improvement is exponential. 

This is an alien brain intelligence, which, at this point, is impossible for us to fully unpack.

The change this is going to bring is unimaginable.