AI versus AGI

Most of the artificial intelligence (AI) systems that exist or are being developed today can perform only a single task, such as playing Go or driving a car. This is called narrow AI. An AI trained to play Go may outperform even an expert at that game, but it cannot drive a car. Such narrow AI is not an existential risk.

Nearly all AI academics expect that a different type of AI will become available in the future: artificial general intelligence (AGI). This type of AI, which does not yet exist, could perform all tasks at least as well as humans can. Unlike narrow AI, AGI could become an existential risk. Experts surveyed¹ estimate a 50% chance that AGI will become available by 2040-2050, and a 90% chance by 2075.

AGI existential risk

If AGI does become available, it could improve its own performance, since improving AI is itself one of the tasks it could do better than humans can. A positive feedback loop could then start, and intelligence may increase rapidly after AGI is invented.
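
To make the feedback-loop reasoning concrete, the toy Python sketch below compounds a fixed, assumed improvement of 10% per self-improvement cycle; the rate and the number of cycles are arbitrary illustrative choices, not estimates of real AI progress.

    # Toy illustration of a positive feedback loop in self-improvement.
    # The 10% gain per cycle and 30 cycles are arbitrary assumptions.
    capability = 1.0          # capability of the first AGI, normalised to 1
    improvement_rate = 0.10   # assumed fractional gain per self-improvement cycle

    for cycle in range(1, 31):
        capability *= 1 + improvement_rate   # each gain builds on all previous gains
        print(f"cycle {cycle:2d}: capability = {capability:.2f}")

    # After 30 cycles capability is roughly 17 times the starting level,
    # showing how gains that feed back into further gains compound rapidly.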

Since winning conflicts, whether by peaceful or non-peaceful means, is linked to intelligence, it may be hard to control an AGI whose intelligence exceeds our own. This is the source of the existential risk posed by unaligned AI.

¹ Müller, Vincent C., and Nick Bostrom. "Future Progress in Artificial Intelligence: A Survey of Expert Opinion." Fundamental Issues of Artificial Intelligence. Springer, Cham, 2016. 555–572.