AI PhD candidate Ruben Wiersma writes in Trouw, in response to our earlier piece, that AI does not yet have the skills of humans. This blog post is a response to the conclusions he draws from this fact, which in our view are ill-considered.

True, at the moment AI is far from being able to do everything humans can do. But what is not yet the case may still come to pass. In the future, AGI (artificial general intelligence) could be invented: a form of AI that can perform all cognitive tasks at least as skillfully as we do. It is not certain that this will happen. That is precisely why we speak of a risk.

Wiersma does not dispute that the development of AGI could lead to extremely bad outcomes, up to and including human extinction. In this he is in line with many of his colleagues: according to surveys, at least half of them see this risk too. But talking about it so publicly? That, Wiersma thinks, “does not contribute to a healthy discussion”. It is bad for the field’s “image”. Is this a scientist speaking, or a PR person?

If Wiersma is so concerned about the portrayal of his field, one would expect him to point to scientific publications clearly showing that the probability of human extinction due to AI is zero per cent. That would require either knowing for certain that AGI will never be built within a given time frame, or knowing for certain that AGI will not cause human extinction. Unfortunately, we know neither of these things for sure. And that is exactly the problem.

Instead of devoting their energy to downplaying the problem, AI researchers like Wiersma could contribute to a solution. AI Safety, the emerging academic field that tries to make AGI safe, urgently needs AI talent like his. In the Netherlands, unfortunately, hardly any research is currently being done in this area. Yet the Dutch government, together with universities and research institutes, could easily double the global research effort in the field.

Let’s work together on solutions that minimise the future risks of AGI. That seems far more constructive to us than trivialising the problem until AGI arrives, only to discover that we are too late.