AI will probably steal your job sooner than you think, and why it mightn’t matter (Algorithmic Burger Flipping)

Ned Ludd was a skilled weaver from Nottingham, England.

He spent long hours weaving intricate patterns and creating beautiful fabrics, taking great pride in his work.

One fateful day in the early 1800s, a new mechanical loom was installed at the countryside factory where Ned worked.

The factory owner boasted the machine could weave faster and more efficiently than a hundred human workers combined.

Ned knew that the machine threatened not only his livelihood, but the very essence of his craft.

In a fit of rage, he grabbed a sledgehammer and marched toward the mechanical beast. With several mighty swings, he smashed the hammer through the gears of the loom until it was no more than scrap metal.

Nobody knows if Ned actually existed.

Regardless, he was the mythical face of the ‘Luddite movement’.

While Ned is a myth, ‘the Luddites’ were not. 

They were a group of textile workers who protested against the introduction of mechanized looms and knitting frames. They resorted to destroying the new machinery in an attempt to protect their jobs.

The British government eventually suppressed the movement, and mechanization continued to spread, significantly altering the textile industry.

“Luddite” is now a blanket term we use to describe people, usually old people, who hate new technologies.

There’s an argument to be made here that Tucker, the Luddites, and even the Amish are all justified in their concerns about technological progress.

Algorithmic burger flipping

Almost everyone I know spends most of their time and energy, in some capacity, worrying about a ‘job’ or an imaginary bank balance.

Some openly admit to hating it, some pretend to enjoy it, some tolerate it as a means to an end, and some have even converted their true passion into a pay cheque.

Regardless, the reality is that we have a very strange relationship with jobs, with money and with meaning.

So it’s natural that, when people hear about the growing influence of Artificial Intelligence, their initial thoughts drift straight to ‘well, how is this going to impact me, my job, my work?’

If we wind back the clock just five or so years, AI job displacement fear was directed toward mechanical work.

Truck drivers, cashiers, customer service reps, fast food workers.

The proverbial ‘burger-flipper’ served as the metaphor for the simplest of low-skill tasks.

But this story was banking on the idea that the initial development of Artificial Intelligence would come in the form of robotics, or some other integration into the physical world.

Like a mechanical burger flipper.

Then, a few years ago, the story took an interesting turn.

With rapid and unexpected improvements in NLP (Natural Language Processing), models like ChatGPT were suddenly able to complete tasks we’d previously have considered ‘higher cognition’, or even impossible for an AI system.

Content creation, data analysis, even medical diagnosis.

There’s still some debate around which careers will be first to the chopping block, and how we monkeys will play a role, but it’s clear we’re in the early stages of a major shift in work and job displacement.

In the future, a human won’t be flipping your quarter pounder patty at Mickey D’s.

In the future, you’ll receive very little (if any) medical diagnosis from a human doctor.

The point of interest is no longer ‘if’ it’s possible, it’s the ‘when’ and ‘how’.

To be blunt – it’s possible that the ‘when’ comes way sooner than most people are pricing in.

If it comes way sooner, shit is going to get very weird and very uncomfortable – and there will be challenges far bigger than losing your job.

Increasingly efficient (narrow) Artificial Intelligence, i.e. something that can flip a burger or detect a tumour, might take a few jobs, but it’s not going to turn the world upside down.

Flippy, for example, is an AI-powered robotic arm designed for kitchen automation. Using machine learning, computer vision and robotic automation, Flippy now works alongside human coworkers at CaliBurger.

This is an example of narrow AI, which most people accept is well on the way.

But things start getting wildly interesting as we approach increasingly general and superhuman level intelligence.

This is a little harder to conceptualise.

As we approach AGI and ASI, not only do most people lose their jobs, but the entire economy enters a period of forced restructuring.

In this (hypothetical) world, there is no longer fast food.

Every household produces its own 3D-printed synthetic food, catering to unique health requirements and indicators, and to continually monitored genetic blueprints.

Not that long ago, this reality was lifetimes away, then decades, then years.

The timeline is rapidly shrinking.

The problem here is not the death of Colonel Sanders.

It’s that we’re not prepared to make this transition.

And that on the way to healthy home-printed food, we unlock some other not-so-good technological possibilities. Think military-funded mosquito drone swarms, or bioweapons tailored to target specific genetic profiles.

A select number of people working at the leading AI labs, most of whom exist in a small San Francisco bubble, are very aware of the rapid trajectory of Artificial Intelligence.

An even smaller minority of them are starting to publicly share their concerns about the broader implications of this progress.

The AI Arms Race & Situational Awareness

Leopold Aschenbrenner is a very interesting 23-year-old.

He graduated from Columbia University at 19 and published award-winning papers on economic theory and existential risk, catching the attention of prolific economists, writers and thinkers like Tyler Cowen, who encouraged him to pursue a non-traditional academic path.

This led to Leopold recently being hired, and then fired, by OpenAI for reportedly ‘leaking sensitive information’.

From Leopold’s side, the ‘sensitive information’ in question was a Google Doc he’d created to share some high-level concerns about the future and security of AGI.

Without getting too caught up in the politics, there is some suggestion that OpenAI’s shift in focus toward rapid growth and deployment has come at the expense of safety.

Last week, Leopold published ‘Situational Awareness’.

The series argues that we are on a very steep trajectory toward the creation of a ‘Super Intelligence’, and that very few people have recognised, or are prepared for, what might unfold.

I’ve listened to his conversation on the Dwarkesh podcast, and am now working my way through the essay series.

This is from the introduction.

The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.

…

Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them.

In the series, and in the conversation with Dwarkesh, he outlines a very convincing trajectory for how and why things will potentially progress much faster over the coming years than people expect.

In the next essay, I’m going to share and simplify a few of the big insights from the paper and conversation to help people make sense of the magnitude of the situation.

We’ll cover:

  • Scaling, OOMs (Orders of Magnitude), Algorithmic Progress and The Trillion Dollar Cluster
  • Unhobblings, Agents and The Intelligence Explosion
  • Geopolitical Tensions, National Security and Fog of War (the AI Arms Race)

While there are certainly some bottlenecks and plenty of debate, there is a tsunami-powered tail-wind up the arse-end of progress toward AGI and Super Intelligence.

Meaning – there’s a good chance things are going to get weird, way sooner than most are expecting.