It is now public knowledge that multiple LLMs significantly larger than GPT-4 have been trained, yet they have not performed much better. In other words, the scaling laws that powered the last decade of progress have broken down. What does this mean for existential risk?

Leading labs such as OpenAI are no longer betting on ever-larger training runs, and are instead trying to increase their models’ capabilities in other ways. As Ilya Sutskever puts it: “The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing. Scaling the right thing matters more now than ever.” Some might say we are back where we started.

However, hardware progress has continued. As the graph below shows, compute is rapidly leaving human brains in the dust. It does not appear that we have quite figured out the AGI algorithm yet, despite what Sam Altman might say. But as compute becomes cheaper and more abundant, more and more startups, then academics, and eventually everyone will be in a position to try out their ideas. This is by no means a safer situation than one in which only a few leading labs need to be watched.

It is still quite likely that AGI will be invented on a relevant timescale, for example within the next five to ten years. We therefore need to keep informing the public about its existential risks, and we need to keep proposing helpful regulation to policymakers. Our work is just getting started.