Hardening against AI takeover is difficult, but we should try
AI, Artificial Intelligence, Existential risk · Otto Barten · November 5, 2025
Over a decade ago, Eliezer Yudkowsky famously ran the AI box experiment, in which a…

AI Offense Defense Balance in a Multipolar World
AI, Artificial Intelligence, Existential risk · Otto Barten · July 16, 2025
By Otto Barten and Sammy Martin. Executive summary: We examine whether intent-aligned defensive AI can…

Yes RAND, AI Could Really Cause Human Extinction
AI, Artificial Intelligence, Existential risk · Otto Barten · June 20, 2025
Last month, think tank RAND published a report titled On the Extinction Risk from Artificial…

AI has passed the Turing test
AI, Artificial Intelligence, Existential risk · Otto Barten · April 3, 2025
In 1950, Alan Turing asked himself the question: "can machines think?" Turing started, as mathematicians…

Our proposal in TIME: a Conditional AI Safety Treaty
AI, Artificial Intelligence, Existential risk, Global Media · Otto Barten · November 20, 2024
Last week, we proposed the Conditional AI Safety Treaty in TIME Magazine as a solution…

What does the breakdown of scaling laws imply?
AI, Existential risk · Otto Barten · November 12, 2024
It is now public knowledge that multiple LLMs significantly larger than GPT-4 have been trained,…

Luck (AI Safety Summit Talks closing)
AI, Existential risk · Otto Barten · May 22, 2024
This is a transcript of (most of) the closing talk by Otto Barten of the…

AI Safety Meetup: PauseAI's Joep Meindertsma
AI, Events, Existential risk · Ruben Dieleman · March 5, 2024
What is it like, campaigning for AI Safety? And how can you prevent AI risks?…

New funds for Existential Risk Observatory!
Existential risk, Uncategorized · Ruben Dieleman · January 31, 2024
Existential Risk Observatory has been honoured to receive funding from the Longterm Future Fund! Next…

AI Safety Meetup ft. Koen Holtman
AI, Events, Existential risk · Ruben Dieleman · January 31, 2024
Pakhuis de Zwijger, February 20th, 19.30-21.30. Sign up here: Eventbrite - Pakhuis de Zwijger. What can we…