Sources used to create this essay:
- Mo Gawdat on London Real
- Sam Altman on Lex Fridman
- Sam Harris on Diary of a CEO
- Andrej Karpathy on Lex Fridman
In the film “Don’t Look Up”, two astronomers discover a comet on a direct collision course with Earth.
They embark on a media tour to warn humanity of their impending doom, but are met with indifference and disbelief from the public and government leaders.
The movie suggests a number of potential reasons for the indifference – political agendas, media sensationalism, distractions and the challenges of daily life.
Replace the comet with advanced artificial intelligence systems, and you have yourself a real-life re-enactment of the film.
It’s now very possible that within the decade (or sooner), we will be sharing our world with an alien life form orders of magnitude more intelligent than the human species.
The potential upsides of this encounter are incalculably good; the potential downsides, incalculably bad.
And the reality is that society at large is completely underestimating what we’re in for.
What is Intelligence?
Intelligence is the ability to solve problems, to create and achieve goals, and to make decisions in response to a changing environment. To reason, to accumulate knowledge, to learn from failure and success and to interact with an ever-changing world.
It’s easy to forget just how magnificent and mysterious biological systems, like humans, really are.
A calculator could be considered a form of ‘artificial’ intelligence, programmed to do math.
The Instagram algorithm could be considered a more complex, more general form of ‘artificial’ intelligence, programmed to learn about your interests, hijack your nervous system, and keep you glued to your screen for prolonged periods of time.
Both of these examples are, relatively speaking, ‘narrow’ forms of ‘artificial’ intelligence.
One is good at doing math.
The other, if programmed to do so, can hijack your dopamine reward system and keep you doomscrolling long enough to hate yourself.
But ask those same systems to take out the trash, or walk the dog? No bueno.
This is where the ‘General’ part comes in.
‘General intelligence’ is the ability to apply intelligence across many different situations.
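To make the narrow-versus-general distinction concrete, here’s a toy Python sketch (purely illustrative, not anyone’s actual system; the `calculator` function is a made-up example): a narrow system is superhuman at exactly one thing and has no concept of anything else.

```python
# Toy illustration of "narrow" intelligence: built for one task,
# completely useless outside of it.

def calculator(expression: str):
    """Narrow AI: great at arithmetic, blind to everything else."""
    # Evaluate bare arithmetic only -- no builtins, no variables.
    return eval(expression, {"__builtins__": {}}, {})

print(calculator("2 * (3 + 4)"))     # 14 -- flawless within its domain
# calculator("please walk the dog")  # -> SyntaxError: the request is meaningless here

# A *general* intelligence would handle both requests (and ones nobody
# anticipated) with the same underlying system, no re-programming required.
```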
It’s now becoming increasingly accepted that Artificial General Intelligence is not only likely, but coming way sooner than originally anticipated.
In addition to running complex math computations, creating algorithms, taking out the trash and walking the dog, AI systems will be able to write poetry, cure cancer and start global Cyber World War III.
And this is where things get interesting.
An Artificial General Intelligence will be capable of making decisions and performing tasks in a wide variety of domains, without human intervention.
It will be able to teach itself.
It will be able to continuously learn without the constraints of a biological human meat suit.
This means we will co-inhabit a world with another intelligence that will keep growing, becoming exponentially more intelligent than the human species.
Once this happens, well. We have no fucking idea.
What we can expect is that the very nature of human existence will change beyond anything we can possibly conceive.
Smart people on both sides of the good-versus-bad debate have put forward serious outcomes ranging from immortality to the immediate and complete annihilation of the human species.
Worth repeating.
Serious outcomes ranging from immortality to the immediate and complete annihilation of the human species.
And without going into too much detail, there is sufficient evidence to suggest that even current large language model frameworks may already be giving rise to a sentient, early-stage general form of intelligence.
The debate at the moment centers on whether or not text alone is sufficient to create an AGI system.
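For a sense of what “text alone” means in practice, here’s a minimal sketch using the open-source Hugging Face transformers library and GPT-2 (chosen only because it’s small and freely available, nowhere near the frontier systems this essay is about). Everything the model does reduces to predicting the next token of text.

```python
# Minimal sketch of a text-only language model: the entire interface is
# string in, string out, learned purely from next-token prediction.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The strangest thing about sharing the world with a smarter species is"
result = generator(prompt, max_new_tokens=30, do_sample=True)

print(result[0]["generated_text"])
```

The open question in the debate above is whether scaling this same text-in, text-out loop is enough to produce general intelligence, or whether something more is required.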
We’re dosing the Genie with crack cocaine
Some experts claim superintelligent AGI will arrive in a matter of years; others predict decades.
Some think the appearance of AGI will be immediate; others think the takeoff will be gradual, then exponential.
Regardless of where you land in that quadrant, the key consideration is that nobody knows for sure how or when it will happen.
And it probably doesn’t matter.
The reality is that it is becoming increasingly unlikely that things are going to stop, or even slow down.
Amongst many experts, the prevailing view is that the ‘genie is already out of the bottle’ and there’s no turning back.
We are dosing the genie with crack cocaine.
Here are a few reasons to believe we ain’t slowing down anytime soon.
- The value incentives are too high
The economic machine humanity has created (Moloch) favors progress without full consideration of the negative consequences. When the potential positive outcomes of solving intelligence reach as high as immortality, the incentives push us beyond turbo mode.
- Companies creating these platforms are in a fierce competition loop
The incentives have us locked in a race. The biggest, richest companies in the world are all now racing to AGI. They’re in fierce competition to get there first, armed with trillions of dollars, tens of thousands of the smartest people in the world, and an endless assortment of greedy shareholders.
- Regulators and government bodies are corrupt and incompetent
Our current global regulators and government bodies have demonstrated through repeated corruption, bureaucracy and gross incompetence, that they lack the ability to safely regulate a chicken coop, let alone the world’s most transformative technology.
- Misaligned incentives
Competition extends to governments, nations and global corporations. Rather than collaborating to safely regulate this technology, every imaginable government body and global corporation is currently salivating over its ability to control, manipulate and extract the benefits of this technology for its own misaligned advantage: everything from government surveillance and corporate espionage to malicious cyber activity, war, and the manipulation of public opinion.
- The prevailing view that we need to do this on the fly
Many experts claim (rather convincingly) that the only way we’ll align AGI is by incrementally releasing versions and models to the public and adjusting on the fly.
- Underlying subconscious drivers
Many of the individuals creating AGI have an almost religious belief in the creation of a ‘god’ or super-intelligent life form.
Why now is a good time to reassess your life
Even *if* the best, safest, slowest and most controlled evolution to a world with an ‘aligned’ Artificial General Intelligence unfolds, things are going to get unbelievably weird over the next few years.
Increasingly sophisticated forms of intelligence, spanning from narrow toward general, are going to be set free in the world.
These systems are going to fundamentally change the way the world works, how we communicate and interact with each other, and how we spend our time.
What should we do?
Unless you are absolutely certain that what you’re currently doing is:
a) imperative for your survival,
b) building toward something that won’t be abstracted away in a world inhabited by increasingly general intelligence, and/or
c) brings you a shit tonne of joy, happiness and meaning
You should stop doing it immediately.
If you write poetry, it’s possible that there is something special about the biological human experience behind it.
Maybe the flaws are what make your musings enjoyable to other humans?
But if an AI system finds a cure for cancer, and you have cancer, you give zero fucks as to whether that cure was found by a human doctor or an AI doctor.
Apply the above to however you’re spending your time.
Alternatively, if this all makes you uncomfortable, you can bury your head in the sand.