AI Governance (August 2024)

Are you secretly training AI? Methods for uncovering covert AI training: A framework for feasibility and future research

By Naci Cankaya (Published on December 17, 2024)

This project won the "Best Technical Governance Project" prize for our AI Governance (August 2024) course. The text below is an excerpt from the final project.

This work aims to establish the basic feasibility of AI training detection methods and to identify critical areas for future research and international cooperation.

As artificial intelligence systems grow more powerful, the ability to detect covert AI training operations becomes crucial for effective governance and international security. This initial exploration presents possible approaches for identifying large-scale AI training facilities through their unavoidable physical signatures, including power consumption patterns, cooling infrastructure requirements, and computational hardware characteristics. Drawing on recent examples like the xAI Colossus facility, the analysis suggests that while individual signatures might be masked, the combination of multiple indicators makes detection feasible at scale. The discussion examines potential detection methods across thermal, power, and network domains, considers evasion strategies and their limitations, and outlines basic requirements for governance frameworks for international verification. The core argument proposes that focusing detection efforts on large-scale training operations, rather than attempting to track all AI development, could create a tractable approach to monitoring the most capable and potentially dangerous systems.
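The intuition that several individually maskable signatures can jointly yield strong evidence can be illustrated with a toy calculation. The sketch below is not from the project itself; it assumes, purely for illustration, that each indicator (thermal, power, network) can be summarized as an independent likelihood ratio and fused naively in a Bayesian way:

```python
# Illustrative sketch only: naive-Bayes fusion of independent detection signals.
# The indicator names and likelihood-ratio values are hypothetical.

def combined_posterior(prior: float, likelihood_ratios: list[float]) -> float:
    """Fuse independent indicators, each expressed as a likelihood ratio
    P(observation | covert training) / P(observation | no training),
    into a posterior probability via odds multiplication."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Three weak signals (each only 2-4x more likely under covert training)
# raise a 1% prior to roughly a 23% posterior when combined.
posterior = combined_posterior(0.01, [3.0, 2.5, 4.0])  # thermal, power, network
```

Even under these conservative toy numbers, no single indicator is decisive, yet their combination shifts the assessment substantially, which is the sense in which multi-domain monitoring could remain feasible against partial masking.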

This paper is a theoretical exploration, not a presentation of empirical data. It is not a manual for implementation, but meant to encourage further investigation and practical research.

To view the full project submission, click here.
