The Looming Regulatory Gap: How AI Hardware Advancements Could Outpace Governance
This project was the runner-up for the "Best Technical Governance Project" prize on our AI Governance (April 2024) course. The text below is an excerpt from the final project.
AI systems are growing increasingly powerful, and policymakers are turning to compute governance as a key strategy for regulating them. They often use a compute threshold, measured in floating-point operations (FLOPs), to identify which models will be subject to their oversight. FLOPs count the arithmetic operations a computer performs and are the standard measure of how computationally intensive an AI training run is. The metric matters because it correlates with the scale and potential capabilities of the resulting systems. By monitoring the compute used in AI training through FLOPs, policymakers aim to keep the development of frontier AI systems in check.
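To make the metric concrete, here is a minimal sketch (in Python) of how training compute is commonly estimated and compared against a threshold. It relies on the widely cited approximation that training a dense model takes roughly 6 × N × D FLOPs, where N is the parameter count and D is the number of training tokens. The model size and token count below are illustrative assumptions; the threshold values correspond to the 10^25 FLOP threshold in the EU AI Act and the 10^26 FLOP reporting threshold in the 2023 US Executive Order on AI.

```python
# Back-of-the-envelope estimate of training compute, using the common
# approximation FLOPs ~= 6 * N * D (N = parameters, D = training tokens).
# The example model below is hypothetical; thresholds are illustrative
# of values appearing in current policy.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense model."""
    return 6 * n_params * n_tokens

THRESHOLDS = {
    "EU AI Act (systemic-risk GPAI)": 1e25,
    "US Executive Order 14110 (reporting)": 1e26,
}

if __name__ == "__main__":
    # Hypothetical model: 70B parameters trained on 15T tokens.
    flops = training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    for name, limit in THRESHOLDS.items():
        status = "exceeds" if flops >= limit else "falls below"
        print(f"  {status} the {name} threshold of {limit:.0e} FLOPs")
```

For this hypothetical 70-billion-parameter model trained on 15 trillion tokens, the estimate comes out around 6 × 10^24 FLOPs, below both thresholds; a modestly larger run would cross the lower one.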
At the same time, AI hardware is advancing at a breakneck pace. We're seeing increasingly efficient GPUs alongside the emergence of neuromorphic chips and quantum computing systems. These technologies could make it possible to train even larger and more complex models while holding energy consumption and costs steady, or even reducing them.
These advancements will necessitate significant changes to AI regulation: as hardware becomes more efficient, regulatory frameworks based solely on compute or cost metrics risk quickly becoming obsolete or insufficient.
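To illustrate why, here is a minimal back-of-the-envelope sketch built entirely on assumed numbers (a fixed training budget, a starting price-performance figure, and an annual improvement rate). It shows how the compute purchasable for the same budget can cross a fixed regulatory threshold within a few years as hardware efficiency improves.

```python
# Sketch of how hardware efficiency gains erode a fixed compute threshold.
# Every number here is an assumption chosen for illustration, not a measurement.

BUDGET_USD = 20e6              # fixed training budget ($20M)
FLOPS_PER_DOLLAR_2024 = 2e17   # assumed effective price-performance in 2024
ANNUAL_IMPROVEMENT = 1.4       # assumed ~40% yearly gain in FLOPs per dollar
THRESHOLD_FLOPS = 1e25         # illustrative regulatory threshold

for year in range(2024, 2031):
    flops_per_dollar = FLOPS_PER_DOLLAR_2024 * ANNUAL_IMPROVEMENT ** (year - 2024)
    affordable_compute = BUDGET_USD * flops_per_dollar
    status = "above" if affordable_compute >= THRESHOLD_FLOPS else "below"
    print(f"{year}: ~{affordable_compute:.1e} FLOPs for the same budget ({status} threshold)")
```

Under these assumptions, a training run that the fixed budget could not reach in 2024 crosses the 10^25 FLOP line within about three years, even though the threshold itself has not moved.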
So, what does this mean for regulators? How can AI policy keep up with hardware breakthroughs?
In this post, we'll explore these questions by:
- Summarizing compute thresholds in current AI policy
- Scanning the AI hardware horizon and its implications for regulation
- Discussing challenges facing policymakers and opportunities for effective governance
To read the full project submission, click here.