
Large models

The pace of recent advances in AI, and the stream of company announcements, has been difficult to keep up with this year. As AI models have improved, the technology has also required ever more training and computational capacity to power these systems. Before 2010, according to a 2022 paper by the University of Aberdeen’s Jaime Sevilla et al., AI training compute requirements grew in a pattern similar to Moore’s law, doubling roughly every 20 months. Since 2015 and the development of large-scale models, the computational requirements of AI systems have increased 10- to 100-fold. For instance, in 2016, Google’s AlphaGo model, known for beating the world champion at the game of Go, required 1.8 million petaFLOPs to train (one petaFLOP is one quadrillion floating-point operations). Released in 2020, OpenAI’s GPT-3 required 314 million petaFLOPs to be trained.
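To put those figures in perspective, the following minimal sketch (assuming only the numbers quoted above and a simple 2016-to-2020 comparison, which is not how the Sevilla et al. paper fits its trends) estimates the growth factor and implied compute doubling time:

```python
# Illustrative arithmetic only, not from the source article or paper.
import math

alphago_pflop = 1.8e6   # AlphaGo training compute, petaFLOP (2016), as quoted above
gpt3_pflop = 314e6      # GPT-3 training compute, petaFLOP (2020), as quoted above
years_between = 4       # 2016 -> 2020

growth = gpt3_pflop / alphago_pflop                      # ~174x more compute
doublings = math.log2(growth)                            # ~7.4 doublings
doubling_time_months = years_between * 12 / doublings    # ~6.4 months

print(f"Growth factor: {growth:.0f}x")
print(f"Implied doubling time: {doubling_time_months:.1f} months "
      f"(vs. ~20 months in the pre-2010, Moore's-law-like era)")
```

Even this rough comparison shows training compute for headline models doubling several times faster than the pre-2010 trend.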
