Human extinction risk has increased from almost zero to an estimated likelihood of one in six in the next hundred years. We think this likelihood is unacceptably high. We also believe that the first step towards decreasing existential risk is awareness.

Therefore, the Existential Risk Observatory is committed to reducing human existential risk by informing the public debate.

The most important existential risks, as recent research¹ identifies them, are:
¹ Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Hachette Books, 2020.
  • Unaligned AI: future artificial intelligence (AI) with goals that may differ from ours
  • Man-made pandemics: genetically engineered pandemics leading to extinction
  • Climate change: extreme climate change scenarios causing complete extinction
  • Nuclear war: extreme nuclear war causing complete extinction
  • Total natural risk: the sum of all natural extinction risks, such as supervolcanoes and asteroids
  • Other man-made risks: other man-made extinction risks, including technologies yet to be invented


Their estimated chance of occurrence in the next hundred years is presented below.
It can be seen that:
  • These existential risks are unacceptably high.
  • Man-made extinction risks, such as unaligned AI and man-made pandemics, are far more likely than natural ones. In principle, these risks are preventable.

Our mission

Existential risk has risen to an unacceptable likelihood of one in six in the next hundred years. Since the sources of existential risk are mostly man-made, humanity has the power to reduce its own extinction risk. We must do so now.
At the Existential Risk Observatory, we believe that being aware of a problem is the first step towards solving it. Our mission is therefore to:

  • Increase existential risk awareness.
  • Collect existential risk information.
  • Spread existential risk information from academia to think tanks, policy makers, and the media.

“Existential risk reduction is among humanity’s most important and urgent challenges today. Unfortunately, it is also among the most neglected. So, I fully support the Existential Risk Observatory’s important mission!”

Andreas T. Schmidt, Associate Professor of Political Philosophy, University of Groningen

“Artificial general intelligence is an existential risk for humanity.”

Jan A. Bergstra, MAE

“We humans often worry about the wrong things. The Existential Risk Observatory wants to help us have the right priorities and focus on what is really dangerous, potentially even threatening the very existence of human civilization. It is a message that I approve of.”

Simon Friederich, Associate Professor of Philosophy of Science, University of Groningen

Otto Barten


Otto is a sustainable energy engineer, data scientist, and entrepreneur. When he realised that existential risks are even more important than climate change, he founded the Existential Risk Observatory.

Joep Sauren


Joep is an Industry 4.0 specialist and Managing Partner at Syndustry. As treasurer of the Existential Risk Observatory, he keeps the foundation effectively organised and financially accountable.

Marko van der Wal


Marko holds a degree in Classics. He currently works as an editor at a publishing house and a literary magazine, and is active as a translator and occasional writer.