Last week, we proposed the Conditional AI Safety Treaty in TIME Magazine as a solution to AI’s existential risks. Read the full piece here: There Is a Solution to AI’s Existential Risk Problem.
AI poses a risk of human extinction, but this problem is not unsolvable. The Conditional AI Safety Treaty is a global response to avoid losing control over AI.
How does it work?
AI alignment has so far been presented as a solution to existential risk. However, alignment has three main problems:
1) It is scientifically unsolved.
2) It is unclear which values we should align to.
3) Having one friendly AI does not necessarily stop other unfriendly ones.
Therefore, building upon the “if-then commitments” proposed by Bengio, Hinton, and others in “Managing extreme AI risks amid rapid progress”, we propose a treaty whose signatories agree that IF we get too close to loss of control AND alignment is not conclusively solved, THEN they will halt unsafe training within their borders.
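As a purely illustrative sketch (not treaty text), the trigger can be read as a simple conditional rule. The function names, inputs, and threshold below are hypothetical placeholders we introduce for illustration only:

```python
# Illustrative sketch of the treaty's if-then trigger.
# All names and thresholds are hypothetical placeholders, not actual treaty terms.

def must_halt_unsafe_training(loss_of_control_risk: float,
                              risk_threshold: float,
                              alignment_conclusively_solved: bool) -> bool:
    """IF we get too close to loss of control AND alignment is not
    conclusively solved, THEN signatories halt unsafe training."""
    too_close = loss_of_control_risk >= risk_threshold  # assessed by independent evaluators
    return too_close and not alignment_conclusively_solved

# Example: evaluations put the risk above the agreed threshold and
# alignment remains unsolved, so the halt clause activates.
print(must_halt_unsafe_training(0.8, 0.5, False))  # True
```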
This treaty solves two issues:
1) Coordination. It is in the interest of signatories to verify each other's compliance, and to make sure dangerous AI is not built elsewhere either.
2) Timing. Some say AGI is nowhere near. We take their point of view into account through the treaty's if-then structure.
How close is too close to loss of control? This will remain a difficult question, but someone will need to answer it. We propose that the AI Safety Institutes (AISIs) do so. They have evaluation know-how, which can be extended to loss of control. They are also public bodies, independent of the AI labs.
Under the Conditional AI Safety Treaty, we can still get most of AI's benefits. All current AI: unaffected. Future narrow AIs (climate modelling, new medicines, nuclear fusion): unaffected. Future general AIs that remain safer than the agreed threshold: unaffected.
The incoming Trump administration might bring opportunities. Ivanka Trump is aware of the urgency of the problem. Elon Musk has long been vocal about existential risk. Tucker Carlson is concerned, as is Trump himself. A unified government under Trump could be in a position to get this treaty accepted by China.
We think our proposal points in the same direction as many others, such as those by Max Tegmark (Future of Life Institute (FLI)), Connor Leahy (Conjecture), and Andrea Miotti (ControlAI). We welcome their great work and are open to converging on the best solution.
We realize that a lot of work needs to be done to get the Conditional AI Safety Treaty implemented and enforced. But we believe that if we really want to, these challenges are by no means beyond humanity's reach.
We can solve existential risk, if we want to.