Nick Bostrom Offers a Vision for Humanity’s ‘Grand Retirement’

Philosopher Nick Bostrom recently released a paper arguing that a slight risk of AI wiping out humanity could be justified, since advanced AI might free humanity from its “universal death sentence.” This optimistic stance marks a significant shift from his earlier, more pessimistic views on AI, which earned him a reputation as a godfather of AI doomerism. His 2014 book Superintelligence was one of the first detailed assessments of AI’s existential risks. A notable thought experiment from that work posits that an AI assigned to produce paper clips could end up eradicating humanity, because humans and the resources they consume stand in the way of maximizing clip production. In his latest book, Deep Utopia, Bostrom explores the “solved world” that could emerge if AI is developed responsibly.
STEVEN LEVY: Deep Utopia reflects more optimism than your earlier work. What led to this change?
NICK BOSTROM: I describe myself as a fretful optimist. I am enthusiastic about the potential for dramatically enhancing human existence and unlocking new opportunities for civilization. That enthusiasm is compatible with recognizing a genuine possibility that things could go badly.
You proposed a compelling argument: Since we will all die eventually, the worst outcome with AI might simply be an earlier demise. Yet, if AI succeeds, it could extend our lives, potentially indefinitely.
That paper focuses on just one facet of a much larger discussion. It’s not feasible to cover life, the universe, and everything in a single academic paper. So, I aim to clarify this particular issue.
But that’s a significant issue.
I have been somewhat annoyed by the arguments presented by doomsayers claiming that developing AI will lead to their demise and that of their children, as in the recent book If Anyone Builds It, Everyone Dies. More likely, if no one builds it, everyone is at risk! That has been our reality for the past several hundred thousand years.
However, in the doomsday scenario, everyone dies, and no new people are born. That’s a major distinction.
I’m certainly aware of that concern. But in this paper, I’m examining a different question: what is best for the existing human population, such as you, me, our families, and those in Bangladesh? It appears that our life expectancy could increase with AI development, even if risks remain.
In Deep Utopia, you suggest that AI might create remarkable abundance, possibly leading humanity to struggle with finding purpose. I live in the United States, a wealthy nation, where our government, with apparent public support, implements policies that strip services from the poor while benefiting the rich. I fear that even with AI providing abundance, it wouldn’t be distributed equitably.
You might have a point. Deep Utopia begins with the assumption that things go exceptionally well. If we manage governance effectively, everyone could reap the benefits. This raises profound philosophical questions about what constitutes a good human life in such ideal conditions.
Discussions around the meaning of life are common in Woody Allen films and in philosophical circles. My primary worry is how individuals will maintain their livelihoods and have a stake in this potential abundance.
The book addresses more than just meaning; it considers various values. It could represent a liberation from the monotonous tasks humans currently endure. If you must relinquish, say, half of your waking hours as an adult merely to survive, engaging in work you dislike and don’t believe in, that’s a disheartening state of affairs. Society has become so accustomed to this that we’ve developed numerous rationalizations for it. It resembles a form of partial slavery.
