This article was published in Het Parool on 8 November 2021.
Artificial intelligence is playing an ever larger part in our lives, and that is not without risk, argue Kritika Maheshwari and Otto Barten. They call for mapping these risks.
At Facebook, the scandals are piling up. The tech giant, already under fire, was recently exposed further by whistleblower Frances Haugen, a former employee. Haugen directly attributes the storming of the Capitol, an American trauma, to the settings, or rather the revenue model, of Mark Zuckerberg's company: algorithms that thrive on polarisation. Outside America, too, experts and stakeholders point to Facebook as a platform that has contributed to escalation. Its polarising algorithms are said to be complicit in the conflict in Ethiopia, the genocide of the Rohingya in Myanmar and anti-Muslim hatred in India.
Closer to home, we read about racist and discriminatory algorithms in the context of the childcare benefits scandal. Amnesty International investigated the algorithms and concluded that the system used by the Dutch Tax Administration was guilty of ethnic profiling and discrimination on the basis of social class. When you consider that this case did not even involve advanced artificial intelligence, and yet it led to such a fundamental violation of civil rights, you hold your breath at the thought of technological developments more intelligent than this algorithm.
It is therefore imperative that we seize on these examples to start a new conversation about how far we should allow technology and artificial intelligence into our lives. The warning shots are all serious, and it is time to engage in that conversation along both political and technological lines.
As a company or a society, we ask something of an algorithm: to offer us the most targeted products, say, or to identify fraudsters. Practice then shows how enormous the risks of that technology can be: a rise in polarisation, hatred and discrimination, or thousands of innocent citizens branded as fraudsters. The smarter the algorithm, the greater the unintended side effects and the harder the correction. And in that observation lies the difficulty: the technology can be smarter than we are, and correcting it can be far more complex than we would like.
And algorithms are only getting smarter. The largest neural networks currently have some 175 billion parameters, and that number grows roughly tenfold every year. At that rate, within three years we are likely to have neural networks whose scale is comparable to the number of connections in our own brains. This does not necessarily mean that artificial intelligence will be able to think at our level by then (although some in the industry argue that it will), but it does mean that the hardware prerequisites seem set to be definitively in place. Whether and when it subsequently becomes more intelligent than we are remains speculation, but the likelihood of this happening increases every year, and most scientists and technologists consider it by no means an unthinkable scenario. To put it simply: artificial intelligence is already taking a run-up at us. Imagine what could happen once such a neural network becomes smarter than our own brains.
It is high time for digital maturity. Our government, and we as a society, must not only name the serious risks posed by future artificial intelligence, the existential risks, but also invest in reducing them. We can do that by establishing a Planning Office for Existential Risks, which would monitor how large these risks are and make proposals for reducing them. Appoint a director-general for existential risks at the Ministry of the Interior and Kingdom Relations. And add existential risks from artificial intelligence and other new technologies to the Integrated National Security Risk Assessment.
In this case, precaution is necessary. Just look at how much power a market player like Facebook, with its potentially dangerous algorithms, already has. A system that is more intelligent than we are will not let itself be curbed after the fact.
Otto Barten is director of the Existential Risk Observatory.
Kritika Maheshwari obtained her PhD at the University of Groningen on the ethics of risk.