AI Governance (April 2024)

Safety Haven: Justifying and exploring an antitrust safe haven for AI safety research collaboration

By Ella Duus (Published on August 28, 2024)

This project won the "Best Policy Governance Project" prize in our AI Governance (April 2024) course. The text below is an excerpt from the final project.

This paper explores the antitrust implications of collaborative artificial intelligence (AI) safety research in private industry. Rigorous antitrust enforcement generally enhances competition and consumer welfare. However, increased competition can foster a dynamic where achieving technological superiority is prioritized over safety and security improvements. Many good-faith collaborations on AI ethics, safety, responsibility, and governance would likely run afoul of antitrust regulators. Establishing a safe haven for these collaborations could eliminate the “race to the bottom” dynamic and efficiently use resources to maximize AI’s benefit to society. Further research is needed on the competitive effects of AI safety collaboration and the optimal structure for a safe haven.

Various safe havens for responsible artificial intelligence (AI) research have been proposed, including an antitrust safe haven by Luke Muehlhauser. This paper does not purport to offer the definitive legal or economic analysis needed to form antitrust policy. It simply aims to make an exploratory contribution to an under-researched policy solution.

This paper also recognizes that AI researchers hold varying perspectives on the importance and scientific merit of certain types of research relative to others. This paper assumes the stance that research from various communities, including AI ethics, AI safety, and AI governance, is all valuable. For brevity and consistency, the term "AI safety research" will be used going forward to mean any research that contributes to the safe and beneficial development of AI, encompassing all of the modes of research listed above. This paper focuses on AI safety research conducted in private companies, which are the primary focus of antitrust law.

The following research questions will be examined:

  • RQ1: Should AI developers be concerned about running afoul of antitrust regulators?
  • RQ2: How would AI safety collaborations be impacted by antitrust law?
  • RQ3: How would a safe haven change the incentives for AI safety collaboration, and how should it be designed?

To read the full project submission, click here.
