Beyond the Intro to Transformative AI course: Taking action in AI Safety

By Li-Lian Ang (Published on November 28, 2024)

During the Intro to Transformative AI course, you’ve witnessed the staggering progress of AI. In 2021, experts believed that AI was nowhere near able to perform human-level tasks like:

  • Understand a story and answer questions about it
  • Write interesting stories
  • Perform human-level automated translation

By 2024, AI systems had leapfrogged these expectations, outperforming most human contestants in Math Olympiads, automating parts of scientific discovery and generating realistic podcasts that are hard to distinguish from human ones.

AI systems have been matching or surpassing human performance on benchmark after benchmark at an incredibly fast pace.

You’ve explored the full spectrum of risks these systems pose, from present-day harms to existential threats, and witnessed how AI's benefits will scale with its risks.

  • Many leading researchers rank advanced AI's risks alongside nuclear war and pandemics.
  • In October 2024, Anthropic sounded the alarm for governments to “urgently take action on AI policy in the next eighteen months.”
  • OpenAI's CEO Sam Altman estimates we will have artificial general intelligence (AGI) by 2025.

The trajectory is both exciting and frightening, yet there's still significant uncertainty about how to protect humanity from advanced AI systems. We need more capable, motivated people like you to help find a solution.

Now that you understand the risks of AI, you can begin to explore potential solutions and find where you can make an impact. We offer two complementary tracks to help you explore AI safety solutions more deeply.

While these aren't the only paths forward, they represent two crucial approaches to the challenge.[1] In fact, some of the most valuable work happens at the intersection of these tracks—people with both technical and policy expertise are highly sought after in the field.

Our course graduates have gone on to work in impactful roles, such as:

  • AI Safety teams at frontier AI labs like OpenAI, Anthropic and Google DeepMind
  • Model evaluation organisations like METR
  • Policymakers at leading government institutions like the UK AI Safety Institute, the US AI Safety Institute and the European Commission
  • Policy researchers at leading think tanks like the Center for the Governance of AI and the Center for the Study of Existential Risk

AI Alignment

This track focuses on the technical challenge of building AI systems that reliably do what humanity intends. You'll explore proposals to ensure frontier AI models are developed responsibly with proper safeguards against potential catastrophic risks.

You might be particularly suited for this track if:

  • You have a strong ML engineering background (professional or equivalent skills)
  • You're currently working on frontier AI systems
  • You're managing or supporting technical AI safety researchers
  • You have a technical background and want to pivot into AI safety

People on this track often go on to:

  • Join safety teams at frontier AI labs (e.g. Anthropic, OpenAI, Google DeepMind)
  • Work at model evaluation organisations (e.g. METR)
  • Take on technical policy roles
  • Build technical governance solutions
  • Found organisations for model evaluation, red-teaming, policy research, etc.

AI Governance

This track approaches AI safety from a policy perspective, examining a range of policy levers for steering AI development, including regulation, corporate governance and international governance. The Montreal Protocol offers a powerful example: when companies had no profit incentive to stop using CFCs that were destroying Earth's ozone layer (which protects all life from deadly radiation), governments worldwide came together to ban them. Similarly, we need coordinated policy action to ensure AI development remains beneficial for all, even when it conflicts with market pressures.

You might be particularly well-suited for this track if:

  • You're currently working in AI policy
  • You're a policy researcher interested in pivoting to AI governance
  • You have technical expertise you'd like to apply to policy work

People on this track often go on to work in:

  • AI policy-focused roles (e.g. AI safety institutes)
  • AI policy think tanks producing research for policymakers (e.g. IAPS)
  • Advocacy and social interest groups for safe AI deployment
  • Technical policy implementation teams

Ready to take action?

Here are some immediate next steps you can take:

Join our advanced AI safety courses

Start learning today

There’s no reason you have to wait for our courses to start learning!

  • Self-study our curriculum
  • Form a study group on the AI Safety Fundamentals Slack. Message Josh Landes on Slack for access to the community!
  • Run an independent version of our courses. We wrote this guide to help you.

We strongly encourage you to leverage the Slack community for feedback on your project and advice on contributing to the field.

The field of AI safety needs people like you who understand the urgency and complexity of the challenge. Whether you approach it from a technical angle, a policy angle or your own unique perspective, your contribution could help ensure that one of the most powerful technologies humanity has ever created remains beneficial to all.
