Catastrophic AI Risk – Beyond Good and Evil

References: https://youtu.be/1NFuddEAi5s?si=-97sE_BAPWDNzoIw (The Wide Boundary Impacts of AI with Daniel Schmachtenberger | TGS 132)

Daniel Schmachtenberger is widely regarded as one of the greatest ‘meta-thinkers’ of our time.

Meaning – he’s good at considering and connecting the dots, ‘big picture stuff’ – technology, biology, the economy, and making sense of the world.

A few months back, I wrote about Moloch and the Metacrisis.

Who is Moloch and What is Metacrisis?

Moloch was a mythical, terrifying pagan god, worshiped by various ancient civilisations.

According to biblical records, people would offer up sacrifices (i.e. burning children alive) to appease Moloch. Very grim stuff.

Thanks to the sixties counterculture poet Allen Ginsberg, and more recently the writer Scott Alexander, Moloch has since been used as a symbol for the invisible force that pushes society to pursue growth with no consideration of the repercussions.

Undermining our ability to cooperate and coordinate.

A few years back, Schmachtenberger resurfaced the Moloch symbol yet again to illustrate the precarious situation the world is in.

He combined Moloch and the Metacrisis. 

The Metacrisis is the interconnected and complex totality of risks that can no longer be solved in isolation.

Climate change, biodiversity loss, nuclear war, political instability, economic collapse, mental health.

All of these things pose serious risks – but taken in isolation, each has ‘somewhat’ apparent solutions.

Unfortunately, we are now at the point where trying to solve one of these things will likely make other things much worse.

Moloch, the Metacrisis and Artificial Intelligence

At one end of the spectrum of possible outcomes, AI is infinitely good.  

We can cure disease, feed the hungry, remove political biases, the list goes on.

And at the other end, it compounds all the risks of the Metacrisis – spiralling us into irreversible catastrophe.

I wish I could say Daniel is optimistic about the path we’re on.

Here are the big ideas from a very recent conversation between Daniel and Nate Hagens.

AI Nuance

‘AI good’ or ‘AI bad’ is too simplistic a view. Just because AI can do some good things doesn’t mean we should use it everywhere without thinking about the risks. We need to be careful, especially about risks that can’t be undone.

Arms Race Dynamics

Interesting take – the companies or countries leading in AI development could actually stop the race if they wanted to, but they don’t. They pretend they have to keep going because others might get ahead, but really, they just want to win.

The AI Safety Paradox

Daniel points out that even companies that start out trying to make AI safer often end up developing more powerful AI, which can actually increase risks.

AI as a ‘Savior Narrative’ 

In addition to the arms race between the companies and countries building towards AGI, there is a compelling narrative that AI will save us – akin to religious salvation or alien intervention. It’s a way of avoiding the hard work of solving problems ourselves.