Existential risk has increased from almost zero to an estimated one in six over the next hundred years, according to research from Oxford’s Future of Humanity Institute. We think this likelihood is unacceptably high.

We also believe that the first step towards decreasing existential risk is awareness. Therefore, the Existential Risk Observatory is committed to reducing existential risk by informing the public debate.

The most important existential risks, as identified by recent research¹, are:
¹ Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Hachette Books, 2020.
Unaligned AI

Future artificial intelligence (AI) with goals that may be different from ours

Man-made pandemics

Pandemics caused by genetically engineered pathogens, potentially leading to extinction

Climate change

Extreme climate change scenarios causing complete extinction

Nuclear war

Extreme nuclear war causing complete extinction

Total natural risk

The sum of all natural extinction risks, such as supervolcanoes and asteroids

Other man-made risks

Other man-made extinction risks, including technologies still to be invented


In the media

“Existential risk reduction is among humanity’s most important and urgent challenges today. Unfortunately, it is also among the most neglected. So, I fully support the Existential Risk Observatory’s important mission!”

Andreas T. Schmidt, Associate Professor of Political Philosophy, University of Groningen

“Artificial general intelligence is an existential risk for humanity.”

Jan A. Bergstra, Professor Emeritus of Computer Science

“We humans often worry about the wrong things. The Existential Risk Observatory wants to help us have the right priorities and focus on what is really dangerous, potentially even threatening the very existence of human civilization. It is a message that I approve of.”

Simon Friederich, Associate Professor of Philosophy of Science, University of Groningen

Otto Barten

Director

Otto is a sustainable energy engineer, data scientist, and entrepreneur. When he realized that existential risks are even more important than climate change, he started the Existential Risk Observatory.

Joep Sauren

Treasurer

Joep is an Industry 4.0 specialist and Managing Partner at Syndustry. As treasurer of the Existential Risk Observatory, he keeps the foundation effectively organized and its accounts in order.

Marko van der Wal

Secretary

Marko has a degree in Classics. He is currently working as an editor at a publishing house and at a literary magazine, and is active as a translator and (occasional) writer.

Ruben Dieleman

Campaign Manager

Ruben has a background in political science, journalism, and campaigning. He wants to contribute to a better world. For the Observatory, Ruben focuses on campaigning and organizing events.

Nik Samoylov

Senior Campaigner

Nik Samoylov is the founder of Conjointly and the Campaign for AI Safety, which merged with the Existential Risk Observatory in 2024. Nik has a background in marketing, has experience as a management consultant, and is passionate about AI safety.

Sue Anne Wong

Policy Researcher

Sue Anne has a background in regulatory policy in government and a tertiary qualification in economics. She prepares submissions to government inquiries and policy proposals to increase safety and reduce the existential risks of AI.

Annelene Schulze

Junior Researcher

Annelene has degrees in Mathematics, Modelling and Machine Learning, and Physics. She researches technical possibilities for pausing AI development for an extended period.

Jesper Heshusius

Campaign Assistant

Jesper has a background in analytic philosophy and applied ethics. He is interested in normative questions around existential risks. At the Observatory, he assists with organizing events.

Rebecca Scholefield

Junior Researcher

Rebecca has a degree in History and German from the University of Oxford and is interested in AI governance. At the Observatory, she is researching the Conditional AI Safety Treaty.

Jon Khan

Junior Researcher

Jon has a background in nanotechnology research. They are interested in AI existential risk and are working on understanding the mechanisms behind a successful AI pause.

Alexia Georgiadis

Researcher

Alexia has a background in political and economic sciences with a focus on governance and development. She is responsible for research on the effectiveness of communicating existential risk.

Holly Warner

Researcher

Holly has a research background in social anthropology and is a postdoctoral researcher with a focus on technological futures. At the Observatory she has worked on AI governance policy proposals.

Kali Richards

Media and Fundraising

Kali is a student of political science interested in effective altruism and policy focused on the long-term benefit of humanity. At the Observatory, she is responsible for media and fundraising outreach.

Francesca Fleurbaay

Conference Organizer

Francesca has a background in psychology (University College Groningen) and is responsible for the organization of the Existential Risk Conference. She is excited to spread awareness about existential risk!

Ayushmaan Sharma

Junior Researcher

Ayushmaan has a background in biochemistry and interests in policy, governance, and global health. At the Observatory, he is conducting research on existential risk awareness and communication.