One of the biggest challenges facing existential risk research is methodological. Several key features of existential risks severely restrict our ability to assess the nature or probability of specific risks with any accuracy.

For a start, by their very definition, there are no historical records of existential catastrophes affecting humans (otherwise we would not be in our current position of researching them!). While there are instances of previous mass extinction events in the history of life on Earth, many of the risks we face are due to, or amplified by, human activities, making evidence from such mass extinctions of limited use.

Secondly, it is clearly infeasible to run experiments to directly observe an existential catastrophe, meaning that research is fundamentally limited to more theoretical means.

Thirdly, many of the hypothesised risks are posed by speculative emergent technologies such as artificial intelligence and biotechnology. Research into these sources of risk has to grapple with very limited knowledge of the specific (or even general) nature of how these technologies will function, and the impact that they will have.

*Some current methods*

In a 2020 paper, Simon Beard and his colleagues at the University of Cambridge’s *Centre for the Study of Existential Risk* found that these factors and more have resulted in many predictions of existential risk being heavily dependent on the subjective judgements of an individual or group ^{1}. Beard *et al.* note that, *‘[o]f the 66 sources in our literature review, 45% relied, at least in part, on the subjective opinion of the author or others, without direct reference to specific models, data or analytical arguments. This included all the sources that discussed the potential threat from Artificial Intelligence, which many Existential Risk scholars believe to be the most significant.’*

One of the other methods discussed by Beard *et al.* is the use of pre-existing models to predict the likelihood of extreme scenarios within their respective domains, examples being models of disease spread or of global climate systems. While such studies represent a positive step towards scientific rigour, results produced in this way should be treated with caution: in many cases, the underlying models were not designed with such extreme scenarios in mind and so may not represent them faithfully.

*Probabilistic peculiarities*

A more nuanced difficulty in predicting the likelihood of some existential risks was identified by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg in a paper published in 2010 ^{2}. They point out that any given probability estimate for some event is really the probability that the event occurs *given* that the reasoning used to arrive at that estimate is correct. For rare events this poses an issue if the probability that the reasoning is misguided is higher than the estimated probability of the event itself.

As a simple example (paraphrased from Ord *et al.*), suppose that in a paper, a group of researchers make a claim that a catastrophic event has a one in a billion chance of occurring, a value they arrive at through careful analysis and reasoning. Despite the authors’ careful deliberation over the possibility of this event, there is still a chance that the argument employed contains a fatal flaw that nullifies their estimate of the probability. For this example we shall assume a probability of one in a thousand that the reasoning is faulty ^{3}. Finally suppose that, if the reasoning in question is faulty, we still think it to be unlikely that the event occurs, giving it a one in a thousand chance. Ord *et al.* show that for this example, the true probability we should assign to the event occurring is a little more than one in a million – a thousand times greater than what the paper had claimed. This is due to the fact that the claimed probability is greatly outweighed by the probability of the employed reasoning being faulty.
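The arithmetic behind this example is just the law of total probability, weighting each conditional estimate by the chance that its premise holds. A few lines of Python (variable names are mine; the probabilities are the ones assumed above) make the calculation explicit:

```python
# Probability of the event if the paper's reasoning is sound.
p_event_given_sound = 1e-9   # the paper's claimed one-in-a-billion

# Probability that the reasoning contains a fatal flaw.
p_flawed = 1e-3

# Probability of the event if the reasoning is flawed.
p_event_given_flawed = 1e-3

# Law of total probability: weight each conditional estimate
# by the probability that its premise holds.
p_event = ((1 - p_flawed) * p_event_given_sound
           + p_flawed * p_event_given_flawed)

print(p_event)  # roughly 1.001e-06: a little over one in a million
```

The second term dominates: even a small chance of faulty reasoning swamps a sufficiently tiny headline estimate, which is exactly the phenomenon Ord *et al.* describe.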

Given the already imprecise methods used to arrive at many predictions of existential risk, this probabilistic phenomenon casts further shadow on risk estimates found in the existing literature.

*We don’t want empirical evidence*

All in all, there are currently major obstacles that existential risk researchers must overcome when attempting to make numerical estimates of the risk we face, and there is still much work to be done in building the scientific framework needed to increase the rigour of existential risk research as an academic discipline.

One difficulty, however, will always remain: predictions of the nature of existential catastrophes are fundamentally unverifiable. If all goes well, we will never observe such an event, and if it doesn’t, empirical evidence for or against a theoretical hypothesis will be of little consolation.

1. S. Beard, T. Rowe, and J. Fox. An analysis and evaluation of methods currently used to quantify the likelihood of existential hazards. *Futures*, 115:102469, 2020. doi:10.1016/j.futures.2019.102469.
2. T. Ord, R. Hillerbrand, and A. Sandberg. Probing the improbable: methodological challenges for risks with low probabilities and high stakes. *Journal of Risk Research*, 13(2):191–205, 2010. doi:10.1080/13669870903126267.
3. Ord *et al.* argue that this estimate is likely to be optimistic.