How to avoid the 4 mistakes behind 92% of rejected AI governance applications

By Adam Jones (Published on April 5, 2024)

As we open up applications for our Mid 2024 AI Safety Fundamentals AI governance course, I wanted to help prospective applicants understand how to maximise their chances of success. On reviewing the data from our previous application round,[1] I was surprised to find that just 4 mistakes account for 92% of rejected AI governance applications:

  • 37%: Overly vague path to impact
  • 32%: Misunderstanding the course’s focus
  • 17%: Overly complex path to impact
  • 6%: Stating contradictory goals

This article digs into what these mistakes are and how you might fix them, and offers some general advice to further improve your chances of success.

37%: Overly vague path to impact

The most common pitfall we see is applicants not clearly articulating how they plan to use the course to pursue a high-impact career. Ultimately, enabling high-impact careers is why we exist, to the point that it’s baked into our legal documents.

This is a made-up example of the kind of application we think is too vague for us to accurately assess:

Application too vague: Over the next 5 years, AI is likely to radically transform our society. I want to be on the forefront of influencing how AI is used safely, and think the knowledge and skills I could gain on the course will help me achieve this. I want to use this course to learn more about AI governance, so that I can use this knowledge in my career to make better decisions about AI. I’m particularly excited to cover a wide range of topics to get me a broad understanding of AI governance and help me in my career. Lastly, it’s been a long-term goal of mine to become a leader in my field, and I think this course will be a useful stepping stone towards that.

The strongest applications specify the concrete next steps the applicant will take after completing the course, how the course will help them achieve those steps, and how these steps could contribute to AI safety. Note that it’s perfectly fine not to have your future fully planned out, or to be uncertain about wanting to do AI governance! However, we do expect you to explain the paths you’re considering and why, and how this course will contribute to that process.

Additionally, if your plan involves a significant change, it can be helpful to provide some evidence that you have thought it through carefully. This might mean explaining what got you thinking about the change in the first place and, if it’s a major career pivot, why you think you might be suited to this kind of work - 80,000 Hours’ career guide is a great starting point for evaluating your fit.

We think there are many plausible and strong paths to impact. Made-up examples of strong applicants:

Working in a relevant role already: I work in a tech policy think tank and I’ve recently switched to a role focusing on AI policy. I come from a background in healthtech policy, and know how valuable it can be to have a strong understanding of the subject matter. Taking this course will allow me to get that understanding of AI policy, so that I can be more productive in my role, and be able to write higher-quality policy papers that are better informed by the available evidence. I also would be interested in using the connections I gain from the course to help me develop my network, so I have people I can ask questions to when I don’t know the answers!

Planning a relevant career change: I’m working as an actuary at a large insurance company, but want to change career to pursue AI governance research. I spoke to a few friends in AI governance research, and it sounds like I could be a good fit - I could use my professional background as an actuary to work on things like AI risk estimation and management. To test this further, I spent an afternoon last weekend putting together a small Google Sheet to estimate the increased cybersecurity risk from AI over the next 10 years, and found it a fun exercise. Taking the course will give me further information about where my skills could be most useful, and help me develop my network so that I am exposed to more opportunities in the field. After the course, I plan to apply to roles in AI governance - I’ve seen a few that look interesting at GovAI or Epoch. I’d hope that the work here could help us better quantify AI risks, and therefore help prioritise how we might tackle them.

Finishing higher education: I’ll soon be finishing my master’s degree in international relations. I’m quite concerned about international conflict being accelerated by the use of AI tools. I’m not certain whether I want to do this or go into climate change advocacy work, so I’d like to take this course to understand more about the kind of AI policy work that is going on today, as well as what the opportunities in the field look like. After the course, if I do decide to pursue AI policy instead of climate advocacy I plan to apply to the TechCongress and Horizon fellowships, think tanks, or maybe try to become a congressional staffer.

Conducting further study: I’m part way through my master’s degree in computer science. I’m thinking about applying for PhD programmes in AI. I’m passionate about AI safety and am involved in running a local AI safety group, but I’m not certain whether I want to do technical alignment research or do more on the governance side of things. I’ve heard that AI governance needs technical people, and have spent a bit of time looking at some AI governance papers. I’d like to take this course to get a better idea of my fit for governance work, so that I can decide between a PhD in alignment or governance - plus be able to better evaluate how useful different topics within an AI governance PhD might be. I’d hope that my PhD work would improve an area of AI policy so that we can make AI safer.

32%: Misunderstanding the course’s focus

Another major disconnect we observe is applicants misunderstanding what our AI Governance course aims to deliver. The core purpose is to equip people to pursue careers shaping AI policy and regulations, whether as government advisors, think tank researchers, or other governance roles.

It does NOT prepare professionals working in general corporate governance, data protection or legal roles. While those domains intersect with AI governance, the focus here is much broader: on overarching policies and regulatory frameworks to mitigate societal risks from advanced AI systems.

Similarly, this program is not intended to get people into technical engineering roles like machine learning engineering, nor does it comprehensively cover the rapidly evolving field of AI alignment research. Those looking for the latter are much better served by our separate AI Alignment course!

Not a good fit, general lawyer: I work as a lawyer at a bank in charge of regulatory compliance. Over the next few years I’ll be responsible for our AI governance, in particular our compliance with the EU AI Act. I’d like to take this course to understand the regulations and how we can implement robust governance procedures within our organisation. This would help improve AI safety by ensuring we’re treating our customers fairly.

Good fit, relevant lawyer: I work as a lawyer in Ireland’s Data Protection Commission, the government authority responsible for regulating companies including Google, Meta and OpenAI. Over the next few years I’ll be responsible for our AI policy, in particular how our regulatory strategy will interact with the EU AI Act. I’d like to take this course to understand how different AI regulations are being implemented in other countries, as well as learn about other AI policies being worked on.

17%: Overly complex path to impact

Another common issue we see is applicants outlining career trajectories that are excessively convoluted or contain too many degrees of separation from concrete AI governance work. For example:

Convoluted path to working in AI governance: I’m in the last year of my bachelor’s degree in computer science. I then plan to take a master’s degree and maybe a PhD in environmental policy, because I’m also interested in climate. I will then work in investment banking for a few years to get professional experience working in high-performance organisations and insights into the financial incentives of tech companies. Then, I’ll found a startup that will help build AI-powered developer tools. If that goes well, I’ll use the skills and money I’ve gained from this to start a body that does AI auditing on frontier models to help with AI governance.

We absolutely welcome non-traditional pathways into the field, and accept a wide variety of people for whom we can make a case: this usually requires either a small chance of wild success or a high chance of moderate success. However, some applicants outline trajectories so convoluted that the likelihood of the course learnings translating into real-world impact seems exceptionally remote, without extraordinary potential upside to offset the long odds.

In general, we recommend you try going into AI governance earlier rather than later. If you later find that you need certain skills, you can always get them then.

If your current plan is to do unrelated work for multiple years, we recommend you consider applying to our course when you’re closer to working in AI governance. (However, there’s no harm in applying in this round and reapplying later.)

6%: Stating contradictory goals

A small portion of rejected applications state contradictory goals or plans. This makes it hard for us to evaluate the applicant’s actual path to impact and decreases our confidence in their application.

Contradictory plans: I’ll be starting a graduate role in the EU Commission in the next few months working on AI policy, which I’m looking forward to! I’ve just got 3 more months of my master’s degree in data science to go. I think immediately after my degree I want to apply to YC to found a tech company building digital forensics tools so that I can help different police forces track down cybercrimes. I think learning about AI governance on this course will help me make sure any AI systems used in our product are safe.

It’s perfectly acceptable to have multiple potential paths you’re considering and some uncertainty about the ideal option. What raises flags is definitively stating multiple objectives that seem incompatible based on the information provided.

If you aren’t entirely firm on your future direction yet, it’s better to acknowledge that uncertainty upfront than to force a narrative that doesn’t hold together across your responses. We understand career interests can be iterative! Just make sure you aren’t presenting firmly stated goals that conflict with each other.

General application tips

Beyond addressing those core issues, here are some additional tips for submitting a strong application:

Highlight impressive or relevant experience, even if it’s not a ‘formal’ qualification. Things that people often miss in their applications:

  • Research projects, personal blogs or internships in policy or AI safety
  • Running professional or university groups related to AI safety
  • Completing other independent upskilling, especially other online courses
  • Projects or voluntary work outside your default education or job path, including those that aren’t specifically related to policy or AI governance

Make your application easy to understand. Avoid over-reliance on unexplained acronyms, numerical scores, or inside references that require contextual understanding. As an example: ‘I scored at HB level on the YM371 module I took at university, and published a paper in IJTLA. I also did well in a high school debate competition.’ This is hard to evaluate because:

  • We probably don’t know the specific module codes or grading schemes for your university. It’s also hard to judge how meaningful publishing a paper is in an unknown journal without more information about the paper. We do try to look up these kinds of things, but we often can’t find what people are referring to, at least in the time we have to review applications.
  • We don’t have enough context to judge how impressive the debate competition claim is. For example, is this at their local school’s debating club one afternoon? Or was this at a national championship with thousands of the top students around the country? And what is ‘well’ - winning or placing highly, or just scoring a couple of points?

Strike a balance with length. Extremely terse one-line responses often miss important nuance or don’t provide enough context for us to evaluate effectively. Overly long entries that bury the key points in fluff and unnecessary detail make it harder for us to identify the relevant parts of your application. The application form provides guidance on how long we expect answers to be.

Read the questions carefully, and make sure you’re answering what they’re asking for. The descriptions are there to help you!

Put yourself in our shoes. Does your application make a good case that your participation in our course would result in improved governance of AI systems to meaningfully reduce AI risks?

Applying to our course

The last common mistake is not applying at all, or forgetting to do so by the deadline! Now you know how to put your best foot forward, apply to our AI governance course today.

Footnotes

  1. We evaluate applications to our courses based on a number of factors and try to make positive cases for all applications. The mistakes listed in this article were only used for analysis after all decisions were finalised, and not used as criteria for accepting or rejecting people.

    We got the data by classifying rejected applications with a large language model into 7 buckets (including a ‘none of the above’ bucket), based on our experience reviewing the applications, then spot-checking random samples of these to ensure the numbers were accurate.
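    For the curious, here is a minimal sketch of what such a classification pass might look like. This is an illustrative assumption rather than our actual pipeline: the OpenAI chat completions client, the model name, and the bucket labels shown are all stand-ins.

```python
# Minimal sketch of LLM-based bucketing of rejected applications.
# Illustrative assumptions (not from this footnote): the OpenAI chat
# completions API, the "gpt-4o" model, and these example bucket labels.
from openai import OpenAI

BUCKETS = [
    "overly vague path to impact",
    "misunderstanding the course's focus",
    "overly complex path to impact",
    "stating contradictory goals",
    "none of the above",  # the footnote describes 7 buckets in total
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_application(application_text: str) -> str:
    """Ask the model to assign exactly one bucket label to an application."""
    prompt = (
        "Classify this rejected course application into exactly one of the "
        "following buckets. Reply with the bucket label only.\n\nBuckets:\n"
        + "\n".join(f"- {b}" for b in BUCKETS)
        + "\n\nApplication:\n"
        + application_text
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # stable labels make spot-checking samples easier
    )
    return response.choices[0].message.content.strip().lower()
```

    As described above, labels produced this way should then be spot-checked against random samples by a human reviewer before reporting aggregate percentages.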

    The legal basis under the UK GDPR for processing this personal data was our legitimate interests. This processing was for statistical purposes, to improve the user experience of applying to our courses and to promote our courses.

    All the example applications are made up to protect applicants’ privacy, but aim to be representative of the class of applications we have seen.
