AI Alignment Fast-Track

A 5-day online course designed to accelerate your understanding of technical AI safety research.

AI systems are rapidly becoming more capable and more general. Despite AI’s potential to radically improve human society, there are still open questions about how we build AI systems that are controllable, interpretable and aligned with our intentions.

You can help develop the field of AI safety by working on answers to these questions.

The AI Alignment Fast-Track is an intensive 5-day course designed to accelerate your understanding of key concepts in AI safety and alignment. This course will give you space to engage with, evaluate and debate these ideas. You’ll meet others who are excited to help mitigate risks from future AI systems, and explore opportunities for your next steps in the field.

The course primarily focuses on catastrophic risks from advanced AI systems. We think reducing catastrophic AI risk is one of the most important and neglected problems in the world. This course aims to give you the context and connections so you can work on tackling this problem!

We’re accepting applications on a rolling basis and have 20 spots available for our 9-13 December 2024 course. Apply before 5 December 2024 to secure your place!

Why do the course

You’ll learn the foundational arguments. It is difficult to know where to start when learning about AI safety for the first time. The programme will give you the structure and accountability you need to explore a wide variety of alignment research agendas, along with a rich conceptual map of the field.

Your learning is facilitated by experts. Your facilitator will help you navigate the course content, develop your own views on each topic, and foster constructive debate between you and fellow participants.

You’ll be learning alongside others. Your cohort will be made up of people who are similarly new to AI safety, but who bring a wealth of different expertise and perspectives to your discussions. Many participants form long-lasting and meaningful connections that support them in taking their first steps in the field.

You’ll be supported to take your next steps. This could involve applying for programmes and jobs, or doing further independent study. We maintain relationships with a large network of organisations and will share opportunities with you. Additionally, with your permission, we can help make connections between you and recruiters at top organisations.

You’ll join 2000+ course alumni. We’ve helped thousands of people learn via our AI Safety Fundamentals courses. You’ll gain access to this network, and to many others who join in future rounds of the course.

Who this course is for

We think this course will be particularly helpful for you if:

  • You have a foundational understanding of machine learning concepts, such as neural networks, gradient descent, and the drivers of AI development (algorithms, data, compute). If not, you might want to apply to our Intro to Transformative AI course instead.
  • You have machine learning experience and are interested in pivoting to technical AI safety research.
  • You are managing or supporting technical AI safety researchers and understanding the alignment landscape would make you more effective in your role.
  • You are a student seriously considering a career in technical AI safety to reduce risk from advanced AI.

If none of these sound like you but you’re still interested in technical AI safety research, we encourage you to apply. The research field needs people from a range of backgrounds and disciplines, and we can’t capture all of them in this list.

What this course is not

This course might not be right for you if you are looking for:

  • A course to teach you general programming, machine learning or AI skills. Our resources page lists a number of courses and textbooks that can help with this. Note that these skills are not hard prerequisites for taking our AI alignment course.
  • A course that teaches general ML engineers common techniques for making systems safer. Instead, this course is for people involved or interested in technical AI alignment research, e.g. investigating novel methods for making AI systems safe.
  • A course that covers all possible AI risks and ethical concerns. Instead, our course primarily focuses on catastrophic risks from future AI systems. That said, many of the methods that target catastrophic risks can also be applied to other areas of AI safety.
  • A course for government policymakers and related stakeholders to learn about AI governance proposals. Our AI Governance course is likely a much better match.

Course Structure

This course runs for 5 consecutive days with curated readings and facilitated discussions. The time commitment for the week is around 15 hours, so you can engage with the course alongside full-time work or study.

Monday: Icebreaker

Meet with your cohort for 1 hour to set expectations for the upcoming days. No preparation is required for this session.

Tuesday-Friday: Discussion sessions

This is where you’ll work through the course curriculum, which is an adapted version of our AI Safety Fundamentals: Alignment course curriculum – developed with AI safety experts from the University of Cambridge and OpenAI.

Each session involves 2 hours of readings and independent exercises, plus a 1.5-hour live session (via video call, which we’ll arrange at a time that suits you).

The live sessions are where you work through activities with your cohort of around 5 other participants. These sessions are facilitated by an expert in AI safety, who can help you navigate the field and answer questions.

If accepted onto the course, we’ll ask for your availability so we can find a time slot that suits you (including evening or weekend sessions). There’s flexibility to change sessions in case your availability changes.

Compared to studying the curriculum independently, participants tell us they particularly value the live cohort sessions: the facilitator helps create an engaging discussion space, the activities are designed to enable effective learning, and you develop deep relationships with highly motivated peers.

Dates

December 2024 course

  • 5 Dec: Application deadline
  • 6 Dec: Final application decisions
  • 9-13 Dec: AI Alignment Fast-Track course

Application process

Apply through our online application form, and you should receive an email confirmation a few minutes after you send your application.

We’re accepting applications on a rolling basis and have 20 spots available.

We will make application decisions by the Friday before the course starts. Do keep an eye on your emails during this time: if accepted, we’ll need you to confirm your place. All legitimate emails regarding the course will come from @bluedot.org.

If you have any questions about applying for the course, do contact us.


Application tips

We take a holistic approach to evaluating applications, considering a number of factors we think are important to getting the most from the course.

That said, there are some general tips that can help ensure you’re putting your best foot forward:

  • When talking about projects you’ve done, focus on your specific contributions, rather than explaining details about the project itself.
  • If you include any links in your application, make sure the link is correct and can be accessed publicly. Opening the link in an Incognito or Private Browsing window is a good way to test this. It’s also often helpful to give us a summary of the highlights or takeaways we should get from what you’ve linked us to.
  • Before you hit ‘Submit’, review your answers and double-check that they answer the question asked. Additionally, make sure your contact information (especially your email address) is correct.

You can find more guidance here.

Requirements

  • Completed the Intro to Transformative AI course or equivalent
  • Around 15 hours of availability throughout the week
  • A reliable internet connection and webcam (built-in is fine) to join video calls
  • English language skills sufficient to constructively engage in live sessions on technical AI topics

Earning a certificate

If you attend all five sessions and complete the required readings and exercises, you’ll earn a certificate of completion.

Optional payment

There is no mandatory payment for this course. At the end of the course, you will have the option to pay an amount that you are comfortable with and that you feel reflects the value the course has brought you.

BlueDot Impact Ltd, the organisation that runs this course, is a non-profit based in the UK and is entirely philanthropically funded. This course costs us roughly £600 per participant to run, and any payment you make would be used to subsidise places for future participants on our courses.

Running independent versions of the course

The official course is run by BlueDot Impact – a non-profit that supports people to develop the knowledge, skills and connections they need to pursue a high-impact career.

Friends, workplace groups, student societies and other local organisations are welcome to run versions of our courses. Provided you follow our guidance, you can use the public curriculum and these session plans.

Any other questions?

If you’re not sure whether to apply, we recommend that you put in an application. If you have any other questions do contact us!


Endorsements & testimonials

Sarah Cogan
Software Engineer at Google DeepMind
I participated in the AISF Alignment Course last year and consider it to be the single most useful step I've taken in my career so far. I cannot recommend the program strongly enough.
Jun Shern Chan
Research Contractor at OpenAI
The AISF Alignment Course was my first real contact with the alignment problem, and I got a lot out of it: I really enjoyed the discussions+content, but more than that I was able to get connected with many people whom I later started working with, enabling me to leave my previous robotics job and transition to full-time alignment research.
Kendrea Beers
Horizon Junior Fellow at the Center for Security and Emerging Technology
This was the most positively impactful course I’ve ever taken (unless you count the high school class in which I got to know my husband!), as it gave me the background to engage with the AI safety and governance communities. I don’t know how I would have gotten up to speed otherwise, and it opened the door to pretty much everything I’ve done professionally for the past couple years.
Marlene Staib
Research Engineer at Google DeepMind
The best thing about the course for me was the community - on Slack and in our discussion groups. It makes it easier to feel part of something and commit to the ideas we were exploring.
