Generative AI solutions like ChatGPT have captivated global attention with their human-like responsiveness and ability to rapidly generate coherent text. But ChatGPT is just one example of the breakthroughs in generative artificial intelligence (AI). Generative AI refers to AI systems that can create new content such as text, images, audio, video, and code.

The foundations of generative AI, however, were laid many years ago. The rapid progress of recent decades stems from key breakthroughs in neural networks, compute power, and data. Early research from the 1950s through the 1980s established the groundwork for neural networks and machine learning. But it wasn't until the 2000s and 2010s that deep learning made it practical to train models with many neural network layers, unlocking their full potential. Generative adversarial networks (GANs), introduced by Ian Goodfellow in 2014, allowed models to generate strikingly realistic synthetic images and data. Recent years have seen an explosion in the capabilities of large language models like GPT-3 and image generators like Stable Diffusion, driven by increases in data and compute.