Are we about to solve the mysteries of memory with AI?

“Once upon a time, I dreamt I was a butterfly, fluttering hither and thither, to all intents and purposes a butterfly. I was conscious only of my happiness as a butterfly, unaware that I was myself. Soon I awaked, and there I was, veritably myself again. Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly, dreaming I am a man.”

– Zhuangzi (pronounced “Jwong-dzuh”), ancient Chinese Daoist philosopher

Sometimes we forget just how bizarre and magical life is.

During metamorphosis, a caterpillar’s body breaks down into a soupy substance before magically rearranging itself into a butterfly. Complete with legs and wings.

Wild.

Despite this dramatic transformation, research suggests that butterflies actually retain some of their caterpillar memories.

In one experiment, researchers trained caterpillars to associate a specific smell with a mild shock. Following metamorphosis, the butterflies retained the same response to that smell.

So where in the caterpillar soup was that memory stored? 

In some more recent experiments, researchers extracted RNA from sea slugs that had been trained to respond to a mild shock and injected it into untrained slugs. After the injection, the untrained slugs displayed the same defensive response.

We’ll revisit the implications of this research at the end of the essay.

The genie has left the bottle

The genie has left the bottle and is being fed crack cocaine by the shovel load.

NVIDIA (the company responsible for ‘powering’ AI) just reported a $26 billion quarter.

The CEO is signing boobs.

To put this into perspective, NVIDIA’s growth over the last three months outpaced the annual GDP of many small to medium-sized countries.

The only other companies in this same league of disgustingly enormous market cap – Microsoft, Apple, Google and Meta – have all pivoted their business models to focus on the development and integration of Artificial Intelligence.

The core focus.

Though most people still don’t realise it, we will look back at this exact moment in history as the beginning of the AI arms race.

A concerning number of very intelligent people think this arms race, on its current trajectory, is going to either wipe the human species from the face of the planet or, if we’re lucky, transform the world so rapidly and profoundly that it will be unrecognisable in just a few years.

If this sounds outlandish, it’s worth reading Leopold Aschenbrenner’s Situational Awareness series.

We’ll save that discussion for another essay – it feels as though some of my friends are ready to stage an intervention over my incessant ramblings about the baffling obliviousness to what’s coming.

So for this one, we will focus on some of the more interesting possibilities, provided we don’t exterminate ourselves.

Artificial Intelligence and Memory Collide

While Twitter continues stirring the shit-pot of debate around AI sentience, emergence, super and general intelligence, and Yann LeCun’s secret suppressed infatuation with Elon Musk, Demis Hassabis is ‘solving intelligence’.

He is the head of Google’s DeepMind.

As systems of Artificial Intelligence improve, and we come increasingly closer to solving intelligence, we are inevitably going to experience some mind-boggling breakthroughs.

For example, AI is already helping researchers identify Alzheimer’s – a disease relevant for today’s essay because it involves memory loss and cognitive decline.

As AI continues to improve, we are headed down a path of not only better detection, but also a better understanding of the complex mechanisms underlying diseases like Alzheimer’s.

And, dare I say, eventually the cure and prevention of disease and illness altogether.

Designing and Discovering, simultaneously

We’re simultaneously ‘designing’ and ‘discovering’.

The systems of Artificial Intelligence we have today have been designed using what we know about human cognition (which is very little).

ChatGPT and other new Large Language Models like Claude and LLaMA are not some standalone, out-of-the-box invention – they are the latest incarnation of an ‘Alien Brain’ that we’ve been summoning for the past seventy years.

That’s the design part.

The more interesting and misunderstood part is that these systems, by their very design, don’t operate with specific instructions – they refine, improve and absorb context on their own.

I wrote another piece on LLMs explaining this.

The gist is that the training and development of current LLMs is more like watering a plant than it is building a robot.

Because of this, they display unexpected behaviours (hence the ‘Alien Brain’), and from these unexpected behaviours we are discovering and learning new things about human-ness which might otherwise have taken us centuries to learn.

If the arms race doesn’t kill us, it’s going to hurtle us toward confrontation with some big philosophical questions.

It turns out we know very little about life and what it means to be human.

Memory – what we know and what we don’t 

One of the major features of human-ness, of which we know very little, is memory. 

What we do know about memory:

  • We know that memory has something to do with the encoding, storage and retrieval of information.
  • We know that (crudely speaking) we have both long-term and short-term memory.
  • We know that we have both an experiencing self and a remembering self, and that we often draw happiness from the remembering self. What we really want is a good story, not a great experience, which is kinda sad.
  • We know that although it ‘feels’ as though memory is a static recording of past events, it’s not. Memories are not static, but dynamically reinterpreted stories we construct and unpack.

What we don’t know about memory:

  • We don’t know the exact mechanisms by which memories are stored in the brain. We know that synaptic changes (changes in the connections between neurons in your brain) play a role, but we really don’t understand how or where memory is encoded, stored or retrieved. That’s worth repeating – we really don’t understand how or where memory is encoded, stored or retrieved.
  • We don’t know the extent to which misinformation, past experiences or biases can alter memories.
  • And we don’t know how the brain is able to use highly efficient methods to store vast amounts of information in a relatively small space.

To add to this, the current incarnation of the Artificial Intelligence Alien Brain – Language Models like ChatGPT – has demonstrated the ability to understand and contextualise from memory. And we don’t fully understand how this is happening either.

As you can imagine, answering these questions, or even narrowing the gap between what we don’t know and what we do know, has a ludicrously high price tag.

If we knew the exact mechanics behind the storage and recall of memories in humans, or under the hood of these LLM Alien Brains, we’d be better positioned to engineer more efficient versions of Artificial Intelligence, create an ‘AGI’ and ‘solve intelligence’.

And we’d be closer to understanding human memory and curing memory-related degenerative diseases like Alzheimer’s.

Reassessing our understanding of memory

Three recent pieces of work, from different fields of research, make me think we’re already well on the path to reassessing and understanding the true nature of memory (and intelligence).

In a recent conversation between Charan Ranganath and Lex Fridman, there were countless bridges between biology land and AI land. Charan references the many parallels between human memory features, like the adaptation and reconstruction of past memories, and the retaining and recalling of useful information in current AI systems.

In this Dwarkesh conversation with Trenton Bricken and Sholto Douglas (AI researchers from Anthropic and Google), one of my favourite parts was some hypothesising about memory and intelligence having a lot to do with pattern matching and recall.

In Michael Levin’s most recent ‘Self-Improvising Memory’ paper, he puts forth a wildly fluid argument for memory. He claims that it might be better understood as a dynamic process of reinterpreting and modifying compressed information to extract meaning and maintain relevance in changing contexts and environments. A process less neuron/brain specific than we originally thought.

My (unproven) hypothesis

My (unproven) hypothesis here is that the millions, billions and trillions of dollars and megawatts we are pumping into AI will lead us to the creation of increasingly elegant architectures, with increasingly fluid and interconnected design inspired by nature.

And in the process, we’ll come to learn that our own memory is better approached as fluid and adaptable.

Learning this, deeply, will have some profound implications on how we live our lives.

Go forth and make new memories, diversify your training data, you are the caterpillar soup.

I’ll leave you with the final paragraph from Levin’s paper:

“I think the lesson to take from this is to embrace the dizzying freedom of breaking away from the goals and structures handed down to us from our evolutionary and personal past, and take on the responsibility of writing our own, improved somatic and mental patterns and values for the future. What engrams do you want to leave to your own future Self, and to humanity’s collective future? Despite knowing that they will not interpret them in the way you may envision now, it is still wondrous to imagine every act as a benevolent communication event to a future being.”

Michael Levin