AI will probably steal your job sooner than you think, and why it might not matter (P2. the path to Super Intelligence)

In the last essay, we introduced Leopold Aschenbrenner – a very interesting 23-year-old Columbia graduate who was recently hired, and then fired, by OpenAI.

Leopold’s ‘Situational Awareness’ series unpacks some potential paths and timelines toward ‘General Intelligence’ and ‘Super Intelligence’.

I wanted to share and explain, in simple terms, a few of Leopold’s big insights and predictions. 

Firstly, to highlight the magnitude of the situation the world is in right now. 

And secondly, to help make sense of how a changing world might impact you, and your life.

I’ll preface this essay by sharing three things.

  1. There is a serious lack of understanding of what Artificial Intelligence is

Many people think that Artificial Intelligence is just a ‘new technology’ – it is fundamentally different. 

ChatGPT and other new Large Language Models, like Claude and LLaMA, are not some standalone, out-of-the-box new technology. They are the latest incarnation of an ‘alien intelligence’ that has been developing for the past seventy years.

We are talking about an alien intelligence that is capable of ‘learning and improving by itself’ – learning models of the world and the human experience, and interacting with us through human language.

Designing these systems is much more like watering a plant than it is building a robot.

I shared an essay a week or so ago explaining this:
https://theplebcheck.com/wtf-is-a-large-language-model/

  2. Binary definitions/concepts are not helpful

Binary definitions and static milestones – e.g. ‘when will we have AGI?’ – are limited. It’s more helpful to think of progress as existing on a spectrum.

Current Artificial Intelligence sits toward the ‘narrow’ end of the spectrum. It’s designed to perform very specific tasks – like recognising faces in photos, translating languages, recommending movies, or predicting the next word in a sentence.

Artificial General Intelligence is the ability to understand, learn and apply knowledge across a wide range of tasks, in ways similar to a human. We are already seeing and experiencing the early stages of General Intelligence. Game-playing models like MuZero display increasingly general intelligence. And the newest and biggest Language Models and Transformer architectures, like ChatGPT and Claude, have clearly developed an ability to map meaning and context, and to make sense of the world through human language.

Super Intelligence would exceed human intelligence and capability in every domain. For this reason, it’s incredibly hard for us to conceptualise how it might look.

Regardless of whether or not you believe that we’ll see a truly human-like general intelligence, it’s becoming increasingly clear that we’re hurtling toward the Super end of the intelligence spectrum.

  3. There are many possible ways this plays out

It’s possible that the progress of AI will be underwhelming – that we’ll realise a human-like general intelligence is not something we can replicate.

It’s possible that we don’t see the required compute, algorithmic improvements, data, capital or energy to create a General or Super Intelligence within the decade.

It’s also possible regulation, fear or conflict will slow innovation.

But on the current trajectory, we’re going to have some form of alien intelligence that will change the world beyond comprehension within the decade (maybe sooner).

At this point, the forces pushing us toward the creation of an Artificial General and Super Intelligence seem far stronger than those opposing it.

Leopold’s argument for rapid progress toward Super Intelligence

Leopold’s argument, in a nutshell, is that most of the world is severely underrating the possibility that we move toward the Super Intelligence end of the spectrum within the decade. 

If his predictions are right, we are going to start experiencing the forces of this progress within a matter of years, not decades.

Not just job displacement. 

National Security Intervention, WMDs (weapons of mass destruction), Geopolitical Tensions, and a complete restructuring of global economies.

We’ll be locked in a race to Super Intelligence. 

The thing I’ve found most compelling about Leopold’s work is that his predictions for how things might play out aren’t just theoretical assumptions; they’re based on (a) a deep understanding of the technology and how it’s progressing from within the labs, and (b) a higher-level understanding of the economic and geopolitical forces that might push progress toward Super Intelligence.

Here are a few of the big insights and predictions.

I’ve simplified them to make them easier to understand. 

Apologies in advance for any misrepresentations of the original work. I’d recommend you read and watch in your own time.

  1. OOMs, Unhobblings and The Trillion Dollar Cluster

For these systems to become ‘more intelligent’, we need to (1) increase the raw ‘compute power’ (more power and processors to train and run the models) and (2) develop and design better methods and techniques for the systems to process data and learn more efficiently.

The scaling laws can get very confusing.

All you need to know is that these two combined improvements (compute increases and algorithmic improvements) have resulted in roughly a full OOM – an Order of Magnitude, or 10x – improvement each year for the past five to ten years.

To help conceptualise this, imagine reading one full textbook on a subject this year.

Next year, you read ten textbooks, and your ability to understand and conceptualise the ideas improves with them – call it a combined 10x jump in effective capability.

Compound that 10x jump every year for a decade and you’re effectively reading and comprehending the equivalent of ten billion textbooks (10^10) in a single year – with an ability to understand and synthesise information beyond anything a human could ever imagine.

This is an overly-simplified example of exponential growth, and AI will have many bottlenecks, but you get the gist.
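
If it helps to see the compounding written out, here’s a tiny back-of-the-envelope sketch (assuming, unrealistically, one clean OOM per year with no bottlenecks):

```python
# Compounding one order of magnitude (10x) per year, starting from a
# baseline of one textbook-equivalent. Purely illustrative numbers.

for year in range(1, 11):
    effective = 10 ** year  # textbook-equivalents read and understood that year
    print(f"Year {year}: ~{effective:,} textbook-equivalents")

# Year 10: ~10,000,000,000 – the ten billion figure above.
```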

The simple fact that the scaling laws have held true so far means we are on track to create a ‘Trillion Dollar Cluster’ (advanced computing infrastructure to support and develop artificial intelligence) within the next five to ten years.

This is a colossal amount of money and energy.

More than the GDP and energy consumption of a mid-sized country.

In addition to the raw compute and algorithmic improvements, there is the potential for further adjustments and discoveries that can ‘unlock’ capabilities in existing AI models.

For example, RLHF (Reinforcement Learning from Human Feedback) was a huge unlock/unhobbling – adding a relatively small amount of human feedback to the training process drastically improved the accuracy and relevance of model outputs.
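
For a feel of the mechanics, here’s a minimal toy sketch of the reward-modelling idea at the heart of RLHF – a ‘reward model’ learning from pairwise human preferences via the standard Bradley-Terry objective. Everything here is invented for illustration: real systems score text with large neural networks, while this toy scores small feature vectors with a linear model.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

def reward(w, x):
    """Scalar 'how good is this response?' score - linear for the toy."""
    return w @ x

# Simulate a human labeller with hidden preferences true_w: for each pair
# of candidate 'responses', they pick the one they prefer.
true_w = rng.normal(size=dim)
pairs = []
for _ in range(500):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    pairs.append((a, b) if true_w @ a > true_w @ b else (b, a))

# Fit reward weights by gradient ascent on the Bradley-Terry log-likelihood:
# P(prefer a over b) = sigmoid(reward(a) - reward(b)).
w = np.zeros(dim)
learning_rate = 0.05
for _ in range(200):
    grad = np.zeros(dim)
    for preferred, rejected in pairs:
        p = 1.0 / (1.0 + np.exp(reward(w, rejected) - reward(w, preferred)))
        grad += (1.0 - p) * (preferred - rejected)  # push preferred scores up
    w += learning_rate * grad / len(pairs)

# The learned reward should now rank responses the way the simulated human does.
agreement = sum(reward(w, a) > reward(w, b) for a, b in pairs) / len(pairs)
print(f"Reward model agrees with 'human' preferences on {agreement:.0%} of pairs")
```

The striking part, and the reason it counts as an ‘unhobbling’, is how little human signal is needed: a few hundred comparisons are enough to steer the toy model, and a comparatively tiny amount of feedback steered the real ones.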

The improvement trajectory is already looking insanely steep without any additional major unlocks.

  2. The Automated AI Researcher, AGI and The Intelligence Explosion

Leopold predicts that we will move further along the intelligence spectrum long before we hit the trillion dollar cluster.

As early as next year, we’ll have ‘drop-in agents’ with increasingly general intelligence.

I.e. you’ll be taking calls and delegating tasks to coworkers who aren’t human – if you’re lucky.

Maybe you’ll be delegated tasks by managers who aren’t human.

As Leopold himself puts it: “Note that I think it’s pretty likely we’ll only need a ~$100B cluster, or less, for AGI. The $1T cluster might be what we’ll train and run superintelligence on, or what we’ll use for AGI if AGI is harder than expected. In any case, in a post-AGI world, having the most compute will probably still really matter.”

And this is where things start getting scary.

Currently, roughly half of the 10x yearly improvement comes from human AI researchers enhancing the algorithms.

If we have agents with increasingly general intelligence, we’ll also have automated AI researchers – millions of AI researchers without the constraints of a biological human meat sack. No lunch breaks, no holidays, no workplace safety meetings.

If algorithmic progress is already on this steep trajectory with hundreds of mere human AI researchers, what will happen when we have millions of increasingly intelligent AI researchers? 

This is the ‘intelligence explosion’.
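
To make the feedback loop concrete, here’s a deliberately crude toy simulation. Every number in it is my own invented assumption for illustration, not a figure from Leopold’s work: ~0.5 OOM per year of algorithmic progress from today’s human workforce, scaling with the square root of the effective researcher pool.

```python
import math

# Crude simulation of recursive self-improvement: smarter systems mean more
# effective automated researchers, which means faster algorithmic progress.
capability_ooms = 0.0  # capability, in orders of magnitude above today

for year in range(1, 5):
    researchers = 10 ** capability_ooms  # researcher pool, as a multiple of today's workforce
    progress = 0.5 * math.sqrt(researchers)
    capability_ooms += progress
    print(f"Year {year}: +{progress:.1f} OOMs (researcher pool ~{researchers:,.0f}x)")
```

Even with the conservative square-root scaling, the loop runs away within a handful of simulated years – and that runaway is the heart of the argument.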

There’s an argument to be made that once you have the automated AI researcher, you solve robotics and then things start moving real fast.

  1. Geopolitical Tensions, National Security and The AI Arms Race

Our current global world order is (precariously) glued together by the reality that the US has the biggest guns. 

If you zoom out far enough, history is one big repeating loop of empires rising to power through military dominance before crashing down.   

With or without the intelligence explosion, leading the race to Super Intelligence will mean global dominance – and the power to set the world order – for the next century.

It feels inevitable that national security establishments will, at some point, make the development of Artificial Intelligence their number one priority.

Leopold shares (again, rather convincingly) how unlikely it is that government and national security agencies (and defence contractors like Lockheed Martin) won’t be intimately involved with the companies creating Super Intelligence.

If growth continues, we’ll likely see a century’s worth of military and WMD (weapons of mass destruction) progress in less than a decade.

He predicts the best case scenario is one in which the US maintains a leading edge on AI development, and the worst case an all-out arms race with China.

The situation is eerily similar to that of The Manhattan Project – the US Government’s research project during WWII to develop the first nuclear weapons.

Closing thoughts

I really hope things don’t play out like this.

I hope progress slows and we have time to adjust accordingly, both as individuals within local communities and as a globally connected human civilisation living with new forms of intelligence.

I hope we get to experience all the magical potentials of advanced intelligence.

But I’m also a realist.

I’ll be monitoring the rate of progress toward increasingly General Intelligence.