Overviews of Some Basic Models of Governments and International Cooperation

By AI Safety Fundamentals Team (Published on November 27, 2022)

Governments and international cooperation may be crucial for helping reduce the risks and realize the benefits of AI. To help inform readers who are relatively new to some related topics, this document compiles brief video overviews of some basic/introductory models related to governments, coordination problems, and international security. We expect many readers will already be familiar with some of these topics, so we suggest you look over the whole list of links below and just watch the linked introductions to whichever topics you’re not already familiar with.

For the purposes of week 6 of our AI Governance Course, please make sure you understand:

  1. The bureaucratic politics model [1]
  2. Assurance games (aka stag hunts)
  3. Relative gains

However, you are likely to find many of the other resources below useful as well. Check out as many as you have time for now, and we recommend making a plan to work through the rest later.

Models

We are using “model” in a broad sense (i.e., not necessarily formal or quantitative).

1. How governments work

2. Coordination problems and ways to solve them

3. International security concepts

Some potential applications to AI governance

(This list is tentative and very much non-exhaustive.)

On politics

  • Under various theories of change for AI involving the US government, broad (especially bipartisan) support from US policymakers is critical.
    • For the US to pass any legislation regulating AI, significant bipartisan support in the US Senate (60 of 100 senators, the threshold for overcoming a filibuster) would be needed. If the filibuster were eliminated, solid support from a single party could suffice, although such legislation could then be scrapped after a shift in power.
    • For the US to ratify any formal treaty on AI, strong bipartisan support in the US Senate (67 of 100 senators, the two-thirds majority required for treaty ratification) would be needed.
  • Interested government agencies and non-governmental organizations (especially wealthy ones) will have substantial influence on the prospects for, and the final shape of, any major AI-related government action.
    • As a result, political pragmatism may demand compromising between policy goals and these organizations' interests, such as by designing regulation to narrowly target especially high-risk AI activities (rather than regulating many low-risk activities just to be safe).
  • The US has formal security alliances with the large majority of wealthy democracies, as well as strong trade relations with many countries. These relationships may enable the US to be especially influential on international AI policy. China is also a major trade partner of many nations, but its formal alliances are much more limited.

On incentives

  • If the development of unsafe AI involves the players and incentives of a prisoner’s dilemma, then changing the strategic situation (e.g. by deterring defection) will be needed to prevent bad outcomes.
  • If the development of unsafe AI instead involves the players and incentives of an assurance game, then relevant actors will have incentives to credibly signal their cooperation, and sufficiently successful assurance will cause mutual cooperation. (The two games are contrasted in the sketch after this list.)
  • AI developers or countries may struggle to credibly commit to using AI for mutual benefit if they are the first to develop certain advances in AI.
    • This seems likely to make it harder to mitigate “winner-takes-all” competitive dynamics, except perhaps (to speculate ambitiously) through jointly controlled AI projects (since power-sharing institutions are classic, unusually robust commitment devices).
  • States’ concern over relative gains (and relative losses) will be an influential incentive in AI-related negotiations. (A stylized utility function capturing this concern appears after this list.)
    • For example, this concern might make states more reluctant to agree to refrain from certain AI development projects, or to share access to AI.
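
To make the contrast between the prisoner’s dilemma and the assurance game concrete, here is a minimal Python sketch. The payoff numbers are illustrative assumptions (any numbers with the same ordering would behave identically); the code simply checks which pairs of moves are pure-strategy Nash equilibria, i.e. outcomes from which neither player benefits by unilaterally switching moves.

```python
from itertools import product

# Payoffs to (row player, column player) for each pair of moves.
# "C" = cooperate (e.g. develop AI cautiously), "D" = defect (e.g. race ahead).
# These numbers are illustrative assumptions; only their ordering matters.
PRISONERS_DILEMMA = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

# Assurance game (stag hunt): mutual cooperation is the best outcome for both,
# but cooperating alone is worse than mutual defection.
STAG_HUNT = {
    ("C", "C"): (5, 5), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}

def pure_nash_equilibria(game):
    """Return all pure-strategy Nash equilibria of a 2x2 game."""
    moves = ("C", "D")
    equilibria = []
    for row_move, col_move in product(moves, repeat=2):
        row_payoff, col_payoff = game[(row_move, col_move)]
        # An equilibrium is a profile from which neither player gains by
        # unilaterally switching to the other move.
        row_stays = all(game[(alt, col_move)][0] <= row_payoff for alt in moves)
        col_stays = all(game[(row_move, alt)][1] <= col_payoff for alt in moves)
        if row_stays and col_stays:
            equilibria.append((row_move, col_move))
    return equilibria

print(pure_nash_equilibria(PRISONERS_DILEMMA))  # [('D', 'D')]
print(pure_nash_equilibria(STAG_HUNT))          # [('C', 'C'), ('D', 'D')]
```

In the prisoner’s dilemma, defecting is each player’s best response regardless of what the other does, so mutual defection is the only equilibrium, and the payoffs themselves must be changed (e.g. via deterrence) to get cooperation. In the stag hunt, mutual cooperation is itself an equilibrium, so credible assurance about the other side’s intentions can be enough.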
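
Relative-gains concern can also be given a stylized formalization, loosely following Joseph Grieco’s classic model in international relations (the exact form below is a simplification for illustration). Suppose state A values an agreement not only by its own payoff U_A but also penalizes gaps that favor the other side:

V_A = U_A − k(U_B − U_A), with k ≥ 0,

where k measures A’s sensitivity to relative gains. With k > 0, A can rationally reject a deal that raises its absolute payoff U_A, provided the deal raises the rival’s payoff U_B by sufficiently more; for instance, an agreement to share access to an AI system might benefit both sides in absolute terms yet still be rejected because it benefits the rival disproportionately.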

Other

  • If private companies or governments unilaterally engage in globally significant AI activities, they may struggle to keep these activities secret from major governments, since those governments have extensive intelligence communities.
    • That might make unpopular unilateralist action harder, while making it easier to verify compliance with cooperative agreements.
  • Unilateralism and multilateralism (as well as in-between approaches like bilateralism and plurilateralism) are high-level potential approaches to governing AI.
  • The effects of particular AI applications (e.g. content generation, drone maneuvering) will depend partly on their offense-defense balance: how much they facilitate offensive uses relative to defensive uses. (A rough cost-ratio formalization appears after this list.)
    • There’s no guarantee that AI advances will always favor offensive applications.
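
For a rough formalization of the offense-defense balance, one can adapt the cost-ratio definition from the international-relations literature (e.g. Glaser and Kaufmann); applying it to individual AI capabilities, as sketched here, is our own simplification:

balance ≈ (cost for attackers to succeed using the capability) / (cost for defenders to thwart them)

A higher ratio means the capability favors defense; a lower ratio means it favors offense. For the software-vulnerability example in footnote 2, one would ask how much a given disclosure lowers attackers’ costs relative to defenders’ costs.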

Footnotes

  1. Anecdotally, this model appears to be widely held by experienced US policy professionals, suggesting it is roughly right.

  2. This video discusses the offense-defense balance of fighting wars overall. We can also apply the concept of offense-defense to particular actions or technologies (e.g. how much does the publication of information about some software vulnerability help offensive actors relative to defensive actors?).
