This article was written in response to Floor Rusman’s op-ed in NRC Handelsblad of 9 April 2022.
The possible extinction of humanity is a tricky subject. Yet Floor Rusman wrote an interesting op-ed about it (“Menselijke zelfvernietiging” (“Human self-destruction”), NRC Handelsblad, Saturday 9 April). Unfortunately, the war in Ukraine and the related nuclear danger place this topic high on the agenda again.
In her op-ed, Rusman quotes Toby Ord, a leading researcher on existential risk at Oxford University who recently wrote the book The Precipice on the subject. He defines an existential risk as human extinction, a permanent collapse of our civilisation, or a dystopia from which we can never escape. A nuclear war followed by a nuclear winter could be such an existential risk. But there are more, and Ord has tried to put all the odds into numbers for the next 100 years. What does he end up with?
The chances of natural existential hazards, such as asteroids and supervolcanoes, are very small and relatively easy to estimate. Ord is therefore least concerned about these. One step higher on the existential risk ladder are known and now, to some extent, ‘managed’ dangers such as nuclear war and extreme climate change. That the climate is changing and will continue to change is a fact. Ord estimates the probability of that change leading to the complete extinction of humanity to be reasonably low, at one in a thousand. Of course, this does not mean that we can lower our efforts in this area: preventing suffering for a large part of humanity and many animal species also deserves our full commitment.
At the highest level of existential threat, however, Ord, and with him now many other researchers, places new technologies. With biotechnology, for example, scientists can create new viruses relatively easily. Yes, this is currently reserved for a handful of experts, but what if biotechnology becomes cheaper, simpler and therefore more widely available in the near future? The risk is that, sometime in the next hundred years, someone will cobble together a biological threat that could kill us all.
The existential risk that Ord and his peers are most concerned about is that of artificial intelligence (AI), and especially artificial general intelligence (AGI). This is AI that outperforms humans on all fronts: it easily beats us at a game of Go or chess, but can also conduct AGI research better than human researchers. General intelligence on steroids, in short. And there is a good chance that this AGI will be created sometime in the next century.
From then on, a ‘positive feedback loop’ may occur: ever smarter AGI creates ever smarter AGI, a self-feeding mechanism. The philosopher Ord’s fear is that from that moment on, we humans can no longer control that mechanism. And Ord is by no means alone in that fear; fellow philosopher Nick Bostrom, AI scientists such as Stuart Russell and Peter Norvig, and entrepreneurs such as Elon Musk and Bill Gates also warn about this development. Simply put, they warn that artificial intelligence smarter than the smartest human is in the making.
How do we face all these existential threats? Floor Rusman and Andrew Leigh point to defending democracy as a safeguard. This is important, of course, but will it solve existential risks? Leigh has a point when he says populists do not tend to solve long-term problems. But then, does the political centre have a plan to effectively address existential dangers, perhaps the biggest issues of the 21st century? There is still a world to be won here. Dutch political parties therefore urgently need to develop such a plan.
And the government also has work to do. To start with, existential risks should be added to the Integrated Risk Analysis National Security. This is a modest measure, but an excellent first step, because identifying risks is important for preparation, planning and strategising. Second, the government should appoint a director general for existential risks at the Ministry of the Interior and Kingdom Relations. Third, a Planning Office for Existential Risks should be established. This research institute should monitor the level of risks and make concrete proposals for risk reduction. Lastly, and perhaps most importantly, the government and parliament should start talking about human extinction. What Floor Rusman can do, politicians must also dare to do. The communicative role of both parliament and government will also be crucial in ultimately reducing existential threats.
The war in Ukraine unfortunately makes it painfully clear that human extinction is a real possibility. It is to be hoped that this realisation does not paralyse us, but rather inspires us to take a rational approach to the problem.
Otto Barten is director of the Existential Risk Observatory
Kritika Maheshwari is completing her PhD on the ethics of risk at the University of Groningen