The Existential Risk Observatory proposes the following policy measures, to be implemented by as many countries as possible, but especially by the US and UK, after the UK AI Safety Summit on 1-2 November 2023.
The 15 proposals are divided into three categories: Safety, Democracy & Openness, and Governance. These measures aim first and foremost to reduce human extinction risk, and second to promote democratic development of AGI and superintelligence, should AI safety be assured in the future.
We would like to acknowledge the earlier proposals by FLI, StopAI, PauseAI, Jaan Tallinn, and David Dalrymple that have partially inspired ours.
- Implement an AI pause
- Create a licensing regime
- Mandate model evaluations
- Mandate third-party auditing
- Track frontier AI hardware
- Prohibit frontier capabilities research
- Publicly fund AI Safety research (but do not purchase hardware)
- Recognize AI extinction risk and communicate this to the public
- Make the AI Safety Summit the start of a democratic and inclusive process
- Organise AGI and superintelligence referendums
- Make AI labs’ control and alignment plans public
- Demand a reversibility guarantee for increases in frontier AI capabilities
- Establish an International AI Agency
- Establish liability for AI-caused harm
- Do not allow training LLMs on copyrighted content
For a more detailed explanation of the proposals, see the linked document.