Their estimated chances of occurrence in the next hundred years are presented below.
It can be seen that:
- These existential risks are unacceptably high.
- Man-made extinction risks, such as unaligned AI and man-made pandemics, are far greater than natural ones, and are preventable in principle.
What is an Existential Risk?
There are many definitions of existential risk, ranging from the simple ‘the risk of human extinction’ to the more philosophically technical ‘the risk of a drastic loss of expected value’.
One of the more popular definitions is given by philosopher Toby Ord: an existential risk is a risk that threatens the destruction of humanity’s longterm potential. Under this definition, Ord includes not only risks of human extinction, but also those of unrecoverable civilisational collapse and of becoming trapped in a permanent state of extreme suffering.
Though there is some variability between definitions, much of the research conducted on existential risks concerns scenarios common to most, if not all, of them 1. A closely related notion is that of an existential catastrophe: the actual occurrence of an event that realises an existential risk (however that risk has been defined).
Because existential risk research is a relatively young field of enquiry, and because of its inherently speculative nature, only a small number of quantitative estimates have been made of the total amount of risk that humanity faces. In his book The Precipice: Existential Risk and the Future of Humanity, Toby Ord, a professor at Oxford University, estimates the total risk of an existential catastrophe occurring in the coming 100 years at approximately 1 in 6 2.
There is still sizeable disagreement within the academic community on the amount of risk we face, with estimates ranging from a less than 5% chance that humanity will cease to exist within the next 5,100 years 3, to a 50% chance that we will not survive to the end of the current century 4. However, many see this uncertainty as a compelling reason in itself for why research into existential risk is important: whatever the true level of risk, we would like to know whether we are safe and, if not, how much danger we are in.
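To see where the first of those figures comes from: Gott’s estimate rests on a simple ‘Copernican’ argument. The sketch below is our own illustration of that reasoning rather than something from the cited paper, and it assumes only that we observe humanity at a uniformly random point in its total lifespan.

```latex
% Sketch of Gott's ``delta t'' argument (footnote 3).
% Assumption: we observe humanity at a uniformly random point in its
% total lifespan T, so with 95% probability our observation falls in
% the middle 95% of that lifespan: 0.025 <= t_past / T <= 0.975.
% Writing t_future = T - t_past and rearranging gives
\[
  \frac{t_{\mathrm{past}}}{39}
  \;\le\; t_{\mathrm{future}} \;\le\;
  39\, t_{\mathrm{past}}
  \qquad \text{(with 95\% confidence)}.
\]
% With t_past of roughly 200,000 years for Homo sapiens, the lower
% bound is about 200,000 / 39, i.e. some 5,100 years: the origin of
% the ``less than 5% chance within 5,100 years'' figure above.
```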
What could cause an Existential Catastrophe?
AI
One of the more uncertain sources of existential risk is that from unaligned Artificial Intelligence (AI). Yet despite this uncertainty, many researchers consider it one of the most prominent sources of risk 5.
There are many types of hypothetical AI system considered in relation to existential risk, including: Artificial General Intelligence (AGI), an AI system that is proficient in all aspects of human intelligence; Human-Level Artificial Intelligence (HLAI), an AI that can at least match human capabilities in all aspects of intelligence; and Artificial Superintelligence (ASI), an AI system that is greatly more capable than humans in all areas of intelligence 6. These all stand in contrast to contemporary AI systems (sometimes referred to as ‘narrow AI’), which can typically only perform well on a single cognitive task.
One of the main issues we face with more advanced AI systems is the ‘alignment problem’: how can we ensure that such a system’s values align with our own, so that it does not take actions whose consequences humans would find unfavourable? Another is the ‘control problem’: assuming ASI is possible, how do we guarantee that we can remain in control of it, given that its intelligence would far exceed our own? It is easy to see how failing to solve either of these problems could give rise to an existential risk.
Nuclear
One of the first man-made sources of existential risk to be considered was nuclear war, with leading academics including Albert Einstein expressing their worries as early as the 1950s 7.
One way in which nuclear conflict could pose an existential risk is through a resulting nuclear winter. It is predicted that smoke caused by large-scale nuclear attacks would be lifted into the upper atmosphere, where it would be too high to be ‘rained out’. This smoke would encircle large portions of the Earth’s surface, blocking sunlight and thereby causing temperatures to drop by as much as 30°C in some regions. This could drastically reduce agricultural output and result in widespread famine. Much of the existing research into nuclear winter has been carried out by Alan Robock and his team, who have produced a handful of reports assessing the possibility using computational climate models. Despite a clear lack of historical examples (thankfully!), the scenario seems plausible in light of the volcanic eruption of Mount Tambora in 1815. The volcanic matter ejected by this eruption has been linked to a temperature decrease of up to 1°C the following summer, resulting in major food shortages 8.
While the chance of nuclear conflict currently seems low, it should be noted that almost 2,000 of the US’s and Russia’s nuclear weapons remain on ‘hair-trigger’ alert to this day, prepared for immediate launch 9. This leaves the door open to disastrous accidental launches from either side. There is no shortage of documented historical instances in which we appear to have come unnervingly close to nuclear conflict 10.
Pandemics
Over the past 18 months we have seen that our global civilisation is susceptible to deadly pandemics. Despite evidence currently pointing to SARS-CoV-2 being a naturally occurring pathogen, the extent and speed of its spread throughout the human population has certainly been aided by features of human civilisation such as frequent global travel and the existence of densely populated urban centres.
Possible future pandemics may be linked even more closely to human actions. There is growing concern that the increasing accessibility of modern biotechnology could allow malevolent actors to design, synthesise, and release engineered pandemics far more deadly than COVID-19.
Even in the absence of ill-intentioned actors, there are prominent voices arguing that current biosafety standards at research facilities are insufficient and could contribute to the accidental release of a deadly pathogen being studied for beneficial ends.
Overall, the COVID-19 pandemic has highlighted how ill-prepared global institutions are to respond to an event of this magnitude, pandemic or otherwise. There is still much work to be done in reducing the chances of another, more deadly pandemic, as well as in increasing our resilience to pandemics should one occur.
Climate Change
It has already become clear that, over the coming decades, climate change will be one of the greatest challenges humanity has faced. While the Paris Agreement aims to restrict the rise in average global temperature to less than 1.5°C above pre-industrial levels, even this level of warming would greatly increase the chances of catastrophic weather events. Much of the work aimed at assessing the results of warming beyond this level is highly speculative, but there is concern that a runaway greenhouse effect is possible: the theory that, through a number of mechanisms, once a certain level of warming has occurred, a positive feedback loop will form as the effects of warmer global temperatures trigger further warming, independently of human action. If this cycle is not brought under control, it could leave the Earth uninhabitable for complex lifeforms, including humans.
This possibility adds to the already strong case for limiting our carbon emissions in order to keep global average temperatures at a sustainable level.
Natural Risks
While most of the discussion of existential risks concerns anthropogenic risks (those that derive from, or are significantly increased by, human activities), there are also a number of risks from natural sources.
Two of the most prominent natural risks are those posed by supervolcano eruptions and impacts from asteroids or comets. For both of these catastrophes, as for nuclear conflict, the main existential danger comes from an intense period of global climatic cooling caused by the release of smoke and dust into the upper atmosphere, blocking out much of the sunlight that would usually fall on the Earth’s surface.
Analysing natural risks presents less of a methodological challenge than analysing anthropogenic ones, thanks to the (albeit sparse) historical record of such events. It is well established that the most likely cause of the Cretaceous-Paleogene extinction event 66 million years ago, which wiped out the non-avian dinosaurs, was the impact of a comet or asteroid up to 15 km in diameter on the Yucatán peninsula in modern-day Mexico. There is also evidence that the eruption of Mount Toba 74,000 years ago caused global cooling of several degrees for several years 11.
According to current estimates, the total existential risk posed by natural sources is greatly overshadowed by that from anthropogenic sources.
Other & Unknown Risks
The above list is by no means a comprehensive account of all possible sources of existential risk. Other potential sources include speculative future technologies such as nanotechnology, that is, engineering at the atomic or molecular scale. Developments in nanotechnology may give rise to new synthetic materials with remarkable physical properties, or to self-sufficient nanorobots that operate at the scale of nanometres, with the potential to revolutionise fields such as molecular medicine.
Additionally, we may be subject to risks that we are currently unaware of. Entire scientific domains, carrying their own share of risks, may be yet to emerge. Alternatively, key scientific discoveries that open the door to incomprehensibly powerful new technologies could be just around the corner. Finally, there may be unprecedented natural phenomena that we have not yet considered for lack of evidence.
How best to prepare for, and react to, risks of which we currently have no knowledge is a key challenge that we must face.
Risk Factors
Apart from the primary sources, there are other ways of conceptualising existential risk, one of which is to consider risk factors. Risk factors are situations, states of affairs, or events that, while not necessarily presenting an existential risk of their own, may magnify the probability of another existential risk occurring. For example, compare the relative likelihoods of an existential catastrophe through nuclear conflict during a period of war between two (or more) nuclear states, and during a period of relative world peace. While such a war would probably not constitute an existential risk in its own right, it would increase the chance of large-scale nuclear strikes and an ensuing nuclear winter.
This example suggests two possible ways of reducing the probability of a nuclear existential catastrophe: promoting a move away from the current strategy of nuclear deterrence and towards nuclear disarmament, to reduce the chance of nuclear weapons being launched; or promoting global peace, to reduce the chance of war between nuclear (or non-nuclear) states 12. The former approach addresses the risk itself, whilst the latter addresses a contributing risk factor.
- For an excellent discussion and comparison of many of the more common definitions see Phil Torres’ 2019 article ‘Existential Risks: A Philosophical Analysis’.
- According to his own definition cited above.
- Gott III, J. R. (1993). Implications of the Copernican principle for our future prospects. Nature, 363, 315-319.
- Rees, M. J. (2003). Our final century. Basic Books.
- These include Oxford professor of philosophy Nick Bostrom, and prominent computer scientists Stuart Russell and Roman Yampolskiy. Leading entrepreneurs such as Bill Gates and Elon Musk have also expressed concern regarding the dangers of Artificial Intelligence.
- These definitions are not strict and can vary between sources. For example, some authors use AGI to refer to what we have called HLAI.
- See here for more information.
- It is claimed that the resulting unprecedented weather served as inspiration for Mary Shelley’s Frankenstein, written during the summer of 1816.
- Source
- See the Future of Life Institute’s excellent online resource ‘Accidental Nuclear War: A Timeline of Close Calls’ here.
- The eruption of Mount Tambora mentioned above is a more recent example, though this eruption was less than one hundredth the size of Toba.
- One could also consider a possible third pathway of increasing humanity’s resilience to the effects of nuclear conflict.