The Existential Risk Observatory proposes the following policy measures to be implemented by as many countries as possible, but especially by the US and UK, following the UK AI Safety Summit on 1-2 November 2023.

The 15 proposals are divided into three categories: Safety, Democracy & Openness, and Governance. These measures aim first and foremost to reduce human extinction risk, and second to promote the democratic development of AGI and superintelligence, should AI safety be assured in the future.

We would like to acknowledge the earlier proposals by FLI, StopAI, PauseAI, Jaan Tallinn, and David Dalrymple that have partially inspired ours.

  1. Implement an AI pause
  2. Create a licensing regime
  3. Mandate model evaluations
  4. Mandate third-party auditing
  5. Track frontier AI hardware
  6. Prohibit frontier capabilities research
  7. Publicly fund AI Safety research (but do not purchase hardware)
  8. Recognize AI extinction risk and communicate this to the public
  9. Make the AI Safety Summit the start of a democratic and inclusive process
  10. Organise AGI and superintelligence referendums
  11. Make AI labs’ control and alignment plans public
  12. Demand a reversibility guarantee for increases in frontier AI capabilities
  13. Establish an International AI Agency
  14. Establish liability for AI-caused harm
  15. Do not allow training LLMs on copyrighted content

For a more detailed explanation of the proposals, click on this link.