“‘The Game is Over’: Google’s DeepMind says it is on verge of achieving human-level AI.” That was The Independent’s headline yesterday. For those who believe that human-level AI can lead to uncontrollable superintelligence (and most AI Safety researchers do), this would mean alert stage ten. Is it really that dramatic? Yes and no.
DeepMind, which belongs to Google, is indeed very serious about building AGI (human-level AI). Until now, AI has only been capable of performing specific tasks as well as or better than a human: playing chess, playing Go, or driving a car. But the same AI could not both play Go and drive a car, as a human can. Before AI truly becomes as smart as we are, and thus capable of improving itself, it must be able to handle a broad spectrum of tasks, just as we do: Artificial General Intelligence (AGI). According to The Independent, DeepMind has now almost reached that point, which would pave the way for uncontrollable superintelligence.
But what did DeepMind researcher Dr Nando de Freitas mean by his tweet “the game is over”? He jokingly put the question to the AI programme Flamingo himself on Twitter. Flamingo answers: “I think what he means by ‘the game is over’ is that everyone should focus on scaling up.” Nando de Freitas confirms that the AI’s interpretation is correct. So he did not mean, or so he says afterwards at least, that DeepMind has already almost developed AGI.
This is me chatting with Flamingo about a photo of a recent tweet. It seems to understand what I mean by game over. Can someone please explain it? pic.twitter.com/6RCINtMWs0 — Nando de Freitas (@NandoDF) May 17, 2022
Does this mean that the press has got it wrong yet again, and that this is an interesting scientific development but nothing to worry about? No, not that either.
Gato, the DeepMind model that The Independent is writing about, performs well on more than 600 tasks, and very different ones at that. It is no longer just about playing games, but also about recognising images, chatting with people and even controlling robotic arms. And all of this with a single model, trained only once. According to some, estimates of how long it will take before we have AGI have shortened by ten years because of Gato.
And is DeepMind taking the necessary safety measures, or is any government forcing the company to do so, to make sure that no uncontrollable superintelligence emerges? No. On that point, the paper only says: “The ethics and safety considerations of knowledge transfer may require substantial new research”. DeepMind has no plan to complete that “substantial new research” before there is any chance of AGI. The company does have an AI Safety department, but only about 30 of its 1,300 employees work there. Accordingly, the company itself says: “the recent progress in generalist models suggests that safety researchers, ethicists, and most importantly, the general public, should consider their risks and benefits”. That is remarkably clear-cut language.
What could a country like the Netherlands do about this situation to ensure its own safety? At the very least, not wait until AGI has arrived, because by then it will probably be too late to change course. What we can do now are the following three measures, which we also proposed earlier in pieces in Het Parool and Trouw:
- Add existential risks due to artificial intelligence and other new technologies to the Integrated National Security Risk Assessment.
- Establish a Planning Office for Existential Risks, which will monitor how big the risks are and make proposals on how to reduce them.
- Appoint a director-general for existential risks at the Ministry of the Interior and Kingdom Relations.
It is not yet alert stage ten. Let’s make sure it stays that way.