Some Talent Needs in AI Governance

By Sam Clarke (Published on September 11, 2023)

This was cross-posted to our site.

I carried out a short project to better understand talent needs in AI governance. This post reports on my findings.

How this post could be helpful:

  • If you’re trying to upskill in AI governance, this post could help you to understand the kinds of work and skills that are in demand.
  • If you’re a field-builder trying to find or upskill people to work in AI governance, this post could help you to understand what talent search/development efforts are especially valuable.

Key takeaways

I talked with a small number of people hiring in AI governance—in research organisations, policy think tanks and AI labs—about the kinds of people they’re looking for. Those hiring needs can be summarised as follows:

  • All the organisations/teams I talked to are interested in hiring people to do policy development work—i.e. developing concrete proposals about what key actors (e.g. governments, AI labs) should do to make AI go well.
    • There’s currently high demand for this kind of work, because windows of opportunity to implement useful policies have begun arising more frequently.
    • There’s also a limited supply of people who can do it, partly because it requires the ability to do both (a) high-level strategising about the net value of different policies and (b) tactical implementation analysis about what, concretely, should be done by people at the government/AI lab/etc. to implement the policy. [1] This is an unusual combination of skills, but one which is highly valuable to develop.
  • AI governance research organisations (specifically, GovAI and Rethink Priorities) are also interested in hiring people to do other kinds of AI governance research—e.g. carrying out research projects in compute governance or corporate governance, or writing touchstone pieces explaining important ideas.
  • AI governance teams at policy think tanks and AI labs are interested in hiring people whose work would substantially involve engaging with people to do stakeholder management, consensus building and other activities to help with the implementation of policy actions.
  • Also, there is a lot of work requiring technical expertise (e.g. hardware engineering, information security, machine learning) that would be valuable for AI governance. Especially undersupplied are technical researchers who can answer questions that are not yet well-scoped (i.e. where the questions require additional clarification before they are crisp and well-specified). Doing this well requires an aptitude for high-level strategic thinking, along with technical expertise.


Method

  • I conducted semi-structured interviews with a small number of people hiring in AI governance—in research organisations, policy think tanks and AI labs—about the kinds of people they’re looking for.
  • I also talked with two people about talent needs in technical work for AI governance.


Talent needs

I report on the kinds of work that people I interviewed are looking to hire for, and outline some useful skills for doing this work.

Note: when I say things like “organisation X is interested in hiring people to do such-and-such,” this doesn’t imply that they are definitely soon going to be hiring for exactly these roles. It should instead be read as a claim about the broad kind of talent they are likely to be looking for when they next open a hiring round.

AI governance research organisations

GovAI is particularly interested in hiring researchers who can operate with a high degree of autonomy to develop and execute on a valuable research agenda.

  • Currently, GovAI is especially interested in research agendas that contribute to policy development work—i.e. developing concrete proposals about what key actors (e.g. governments, AI labs) should do to make AI go well. There’s high demand for this kind of work and very few people who can do it.
  • Researchers who can write touchstone pieces explaining, clarifying, and justifying important ideas are also highly valued.

Rethink Priorities (AI Governance & Strategy team) is also interested in hiring researchers to develop and execute on some valuable research agenda, including but not limited to policy development work.

  • They’re hoping to hire for each of their four current focus areas, which are:
    • Compute governance
    • Lab governance
    • China (i.e. China-West relations of relevance to AI, and/or AI-relevant developments in China)
    • US regulation/legislation that affects leading AI labs/models.
  • For a bit more info on these areas, see this Two-pager on Rethink Priorities’ AI Governance & Strategy team.
  • They’ve also recently hired a research manager, which was previously a bottleneck to their growth.

Some useful skills for research

Skills that these organisations are looking for in their researcher hiring include: [2]

  • Domain knowledge/subject expertise. Although being familiar with a range of areas can be helpful, it is often very valuable to know a lot about one or two particular topics – ones that are especially important and where few other experts exist.
    • Some example relevant subjects include: AI hardware, information security, the Chinese AI industry, …
  • Comfort with quantitative analysis. Even if you don’t often use quantitative research methods yourself, it will probably be useful to read and understand quantitative analyses a non-trivial amount of the time. So, although it is definitely not necessary to have a STEM background, it is useful to be comfortable dealing with topics like probability, statistics, and expected value.
  • Ability to get up to speed in an area quickly.
  • Good epistemics, in particular:
    • Scout mindset. The motivation to see things as they are, not as you wish they were; to clearly and self-critically evaluate the strongest arguments on both sides. See this book for more.
    • Reasoning transparency. Communicating in a way that prioritises the sharing of information about underlying general thinking processes. See this post for more.
    • Appropriately weighing evidence. Having an accurate sense of how much information different types of evidence—e.g., regression analyses, expert opinions, game theory models, historical trends, and common sense—provide is crucial for reaching an overall opinion on a question. In general, researchers should be wary of over-valuing a particular form of evidence, e.g., deferring too much to experts or making strong claims based on a single game theory model or empirical study.
  • Using abstraction well. Abstraction—ignoring details to simplify the topic you’re thinking about—is an essential tool for reasoning, especially about macro issues. It saves you cognitive effort, allows you to reason about a larger set of similar cases at the same time, and prompts you to think more crisply. However, details will often matter a lot in practice, and people can underestimate how much predictive power they lose by abstracting.
  • Rigour and attention to detail.
  • Writing. See this post for some thoughts on why and how to improve at writing.
  • Impact focus. The motivation to have an impact through your research, and ability to reason about what it takes to produce this impact. Being scope sensitive in the way you think about impact.

Some useful skills for policy development research

Along with the skills in the preceding subsection, the following skills are useful for policy development research, specifically.

  • Familiarity with relevant institutions (e.g. governments, AI labs)
    • E.g. how policymaking works in the institution; knowing the difference between the on-paper and in-practice versions of that; knowing how to ask questions which elucidate that difference; understanding the current political climate in the institution.
    • Actually having experience in/adjacent to the institution is very helpful, though not strictly necessary.
  • High-level strategising about the net value of different policy actions. More concretely, the skill of generating, structuring, and weighing considerations that matter for the usefulness and feasibility of some policy action. See the first bullet point here for more explanation of this skill.
  • Using abstraction well can be especially important for policy development work.
    • For instance, sometimes it might be appropriate to evaluate the usefulness of some high level category of policy actions (e.g. AI non-proliferation agreements, generally).
    • Whereas other times, it might be better to consider the usefulness of more concrete actions (e.g. should such-and-such frontier AI labs adopt such-and-such model evaluation procedures?)
    • It’s important to know when you can ignore concrete details in thinking about policies, and when they matter.
  • Knowledge about AI (e.g. roughly how modern AI systems work) and AI threat models.

Policy think tanks

Some relevant policy think tanks are interested in hiring policy development researchers to figure out what policy actions key governments should take to make AI go well; to translate that into a concrete [3] plan for implementing those policy actions; and to kick off the implementation of that plan.

Some useful skills for government-facing AI policy development work

Along with the skills for policy development research mentioned above, the following skills are useful for doing more government-facing AI policy development work.

  • Having the social skills to work with others and manage different stakeholders.
  • Being comfortable learning about STEM topics. This work will often involve engaging with the details of relevant technologies (e.g. semiconductors, semiconductor fabrication plants, alternative (e.g. optical) hardware for AI chips)—so having a sufficiently strong STEM background to be able to learn quickly about these topics tends to be useful.
  • Being comfortable with sprinting, e.g. being able to quickly spin up a decision memo in response to a temporary policy window of opportunity.
    • The comparative advantage of policy development researchers operating close to government decision-making (compared to academic/independent researchers) is in quickly developing concrete policy actions and actually getting them implemented (rather than thinking about more foundational questions). This point also applies to policy development researchers within AI labs.
  • A certain kind of agility is useful. Important strategic, political, bureaucratic and technical facts will change; it’s important to quickly incorporate these changes into your plans and priorities.
  • Being comfortable working autonomously, and having enough belief in your abilities to overcome hurdles.
  • Being comfortable with decision-making under uncertainty. In particular, being able to learn from incorrect decisions without beating yourself up about them, and to treat mistakes as part of your accumulated wisdom.

AI labs

Some governance teams at relevant AI labs are interested in hiring two kinds of profiles:

1) Policy development researchers to figure out what policy actions the lab should take, and translate that into a concrete plan for implementing those policy actions.

(Useful skills for this kind of work are covered above.)

2) People to do stakeholder management, consensus building and internal education within the lab, to help with the implementation of policy actions.

Some useful skills for stakeholder management work

  • A good understanding of how decision-making works within the lab
  • Strong social skills, emotional intelligence and verbal communication
  • Professionalism

Technical work for AI governance

I also talked with two people with relevant expertise about technical work in AI governance. Some potentially useful information from those conversations:

  • There are several areas of technical work that could be valuable for AI governance:
    • Developing model evaluations for extreme risks (more)
    • Improving information security at organisations working on AGI development and their suppliers (more)
    • Forecasting on questions related to the development of advanced AI (more)
    • Investigating questions related to AI hardware, e.g. the technical feasibility of tamper-proof monitoring/verification of AI training runs
    • Other miscellaneous compute governance work
  • Some of this work can be contracted to technical researchers who aren’t necessarily plugged into the AI governance community. However, some important questions are difficult to neatly scope, which makes them hard to farm out: additional clarifying or changing of the question is part of the work. An example of a question like this is: “how good will decentralised AI training [4] get?” It would be useful to have more technical researchers who can answer these kinds of poorly scoped questions. Doing this well requires technical expertise, plus an aptitude for macrostrategy work.

Some areas of improvement for junior researchers

Some people hiring in AI governance mentioned areas where junior researchers tend to be less skilled. I summarise these findings. They should be treated as anecdotal evidence, and will only apply to some people.

  • Knowing a lot of facts can be underrated. Especially for policy development (and other work that requires high-level strategising), it’s useful to know a lot of relevant facts about the world. People who are comfortable moving out of Abstraction Land, and learning about/engaging with detailed concrete facts about the world seem to be undersupplied. Some particularly relevant domains where knowing a lot of facts can be helpful:
    • How policymaking works in relevant jurisdictions (what are the powers of institutions and how do decisions in fact get made)
    • Some level of understanding of technical AI knowledge, e.g. how cutting edge AI systems are trained
    • Having a repertoire of relevant case studies on hand (e.g. how cybersecurity was regulated in the US)
    • Relevant areas of law (e.g. competition law, IP, privacy, product safety, …)
  • Having context can be overrated. Junior researchers can focus too much on acquiring context on AI governance rather than on developing other skills.
    • By “context on AI governance”, I mean understanding who’s doing what in AI governance, and why (e.g. “organisation X has people working on Y at the moment”). You might call this the “inside baseball” of AI governance.
    • Whilst this is useful, it’s easy to learn and is therefore given less importance in many hiring processes (compared to most other skills mentioned in this post).
  • Writing can be underrated. Some people seem partly bottlenecked by their writing ability, and writing is a skill that tends to be relatively easy to become good at (compared to reasoning, for example). So it can be pretty valuable for those people to skill up on writing.
  • The ability to break down complex questions in a useful way (see generating, structuring and weighing considerations) is a key area for improvement for some junior researchers.
  • For some roles that exist within corporate or political structures, the ability to signal maturity and professionalism is useful.

Thanks to the people I interviewed as part of this project; to Kuhan Jeyapragasan for feedback; to Ben Garfinkel for feedback and research guidance; and to Stephanie Hall for support.


  1. This kind of tactical implementation analysis requires detailed understanding of how policymaking works within the relevant institution.

  2. NB this list of skills, and the ones which follow in subsequent sections, aren’t necessarily endorsed by people hiring at the organisations in question. (Though the lists were informed by the interviews I conducted.)

  3. To give a sense for the level of concreteness that’s desired here, it would be something like: “[this office] should use [this authority] to put in place [this regulation] which will have [these technical details]. [These things] could go wrong, and [these strategies should adequately mitigate those downsides].”

  4. “Decentralised AI training” refers to AI training runs that are distributed over many smaller compute clusters, rather than a single large compute cluster.
