This article was published in de Telegraaf on 13 November.
Artificial intelligence is one of the biggest threats of the 21st century.
Technology that is smarter than humans. It sounds like a science fiction movie, but it is not, because artificial intelligence (AI) is getting smarter at lightning speed. The most advanced neural networks already contain 175 billion parameters, and that number is growing roughly tenfold every year. At that rate, AI will overtake our own brains in about three years, at least in sheer complexity. Will that immediately make AI smarter than us? Probably not yet, but the chances of that happening increase every year.
Eventually, AI will probably be able to perform every imaginable task better than we can. It could then make scientific discoveries without human intervention. AI would, for example, develop the next generation of computers – even faster hardware, even better software – and the next generation of AI would be smarter as a result. In this way, technology creates what we call a ‘positive feedback loop’: ever smarter AI builds ever smarter AI. Nobody knows exactly where this development will end. But against this background, the era of humans as the smartest, and therefore most powerful, inhabitants of the earth is no longer a foregone conclusion. Control over a superintelligent AI is therefore vital. AI scientists have been working on this control problem for about two decades.
Unfortunately, they have not yet succeeded. Indeed, the more research is done, the further away a solution seems to be. Some scientists now believe that controlling superintelligent AI is fundamentally impossible. And that is precisely why it is high time that we as a society, and certainly our politicians, start talking about this.
Can we regulate superintelligent AI, then? And how do you regulate technology that does not yet exist? As a species, we have never had to ask that question before. Will superpowers like the US, China or Russia go along with this? Will terrorist networks like ISIS or al-Qaida? Because yes, they could start using AI too. A growing group of scientists sees AI as one of the biggest threats of the coming century. Toby Ord of Oxford University, a specialist in existential risk, for example, puts the chance that we will not survive AI at roughly one in ten. Elon Musk, an engineer with an excellent understanding of AI, has been sounding the alarm for some time. And Bill Gates is also very worried.
There are also scientists who think AI will never develop goals that differ from our own. They think AI will always do what we, humans, demand of it. And there are those who think we can simply pull the plug in an emergency. But how sure are these sceptics that they are right, when AI safety researchers say otherwise?
Currently, the world invests only around $10 million a year in research on existential risk and AI safety. Governments, then, do not exactly prioritise the issue, and for the time being neither does the Netherlands. It is high time that this changes. Our politicians, and we ourselves, have to get used to the fact that this is not science fiction but a real issue. We urgently need to get our heads out of the sand.
Otto Barten is director of the Existential Risk Observatory.