“ChatGPT is really very stupid and completely unreliable,” writes Ilyaz Nasrullah in his op-ed (Trouw, Wednesday 4 January) about OpenAI’s recently launched AI model. In it, he argues that large language models like ChatGPT have no “understanding” of what they do.

When I commissioned ChatGPT to write an opinion piece for NRC Handelsblad (“Remaining vigilant in blurring the line between real and AI”, NRC Handelsblad 15 December), the editor emailed me that “it could have been an NRC editorial commentary”. Computer pioneer Alan Turing argued in 1950 that human-level AI would be achieved once a human could no longer tell the difference between an AI and a real person during a conversation (the Turing test). In my opinion, ChatGPT already meets this standard in many cases, as this editor’s email shows. Apparently that can be done with or without what Nasrullah calls ‘understanding’.

But a more important point is that AI is not about the present, but about the future. Ten years ago, AI could do almost nothing. Now it not only writes pieces for newspapers but also creates images, beats humans at Go, Stratego, StarCraft and Diplomacy, and has solved the protein folding problem, a fundamental breakthrough in biology. Where will we be in 10 years, or 50?

I think AI is definitely going to surpass human cognitive abilities in the next decade. ChatGPT is not perfect yet, but it gives us a glimpse of the future. And in that future, AI will get vastly better, while human cognition will remain roughly the same. AI will therefore begin to produce more and more scientific breakthroughs. Among them will be improvements to algorithms and hardware, which will make AI better still. Ever smarter AI will thus create ever smarter AI, in a positive feedback loop that could well take it far beyond human cognition in all relevant domains.

This could create tremendous opportunities, but it also carries enormous risks. Almost no job will remain unaffected. Economic inequality could increase enormously, especially if we lack adequate redistribution mechanisms. And with economic shifts, fundamental power shifts may follow. Those who have so far derived their income, self-esteem, and social position from cognitive labour have good reason to start thinking hard about what comes next.

And this is the positive scenario. There is also a group of AI scientists, such as Stuart Russell (University of California, Berkeley), and philosophers, such as Nick Bostrom and Toby Ord (University of Oxford), who believe that an AI far superior to us cognitively may become completely uncontrollable. Just as our craving for ever more resources is driving many animal species to extinction, the pursuit of an ill-conceived goal by a superior AI could lead to our evolutionary end, according to these scientists. As long as no one has shown that uncontrollable AI is impossible, these too are risks we should take seriously and actively try to mitigate, for instance by conducting more research on them.

It is easy to give a few examples of what AI cannot yet do, and then conclude that we should not be so concerned. The discussion should instead be about what the future of AI will be, and how we can control the biggest risks posed by human-level AI in particular.