AI Alignment Course

A 12-week online course covering a range of technical AI alignment research agendas.

AI systems are rapidly becoming more capable and more general. Despite AI’s potential to radically improve human society, there are still open questions about how we build AI systems that are controllable, aligned with our intentions, and interpretable.

You can help develop the field of AI safety by working on answers to these questions.

The AI Alignment course is designed to introduce the key concepts in AI safety and alignment, and will give you space to engage with, evaluate and debate these ideas. You’ll meet others who are excited to help mitigate risks from future AI systems, and explore opportunities for your next steps in the field.

The course is run by BlueDot Impact – a non-profit that supports people to develop the knowledge, skills and connections they need to pursue a high-impact career.

Why do the course

You’ll learn about the foundational arguments. It is difficult to know where to start when trying to learn about AI safety for the first time. The programme will give you the structure and accountability you need to explore a wide variety of alignment research agendas, along with a rich conceptual map of the field.

Your learning is facilitated by experts. Your facilitator will help you navigate the course content, develop your own views on each topic, and foster constructive debate between you and fellow participants.

You’ll be learning alongside others. Your cohort will be made up of people who are similarly new to AI safety, but who will bring a wealth of different expertise and perspectives to your discussions. Many participants form long-lasting and meaningful connections that support them in taking their first steps in the field.

You’ll be supported to take your next steps. This could involve continuing your end-of-course project, applying for programmes and jobs, or pursuing further independent study. We maintain relationships with a large network of organisations and will share opportunities with you. Additionally, with your permission, we can connect you with recruiters at top organisations.

You’ll join 2000+ course alumni. We’ve helped thousands of people learn via our AI Safety Fundamentals courses. You’ll gain access to this network, and to many others who join in future rounds of the course.

Who this course is for

We think this course will be particularly helpful for you if:

  • You have machine learning experience and are interested in pivoting to technical AI safety research.
  • You have a technical background, such as studying a STEM subject, and are keen to work on technical AI safety research.
  • You are managing or supporting technical AI safety researchers, and understanding the alignment landscape would make you more effective in your role.
  • You are a student seriously considering a career in technical AI safety to reduce risk from advanced AI.

If none of these sound like you but you’re still interested in technical AI safety research, we encourage you to apply anyway. The research field needs people from a range of backgrounds and disciplines, and we can’t capture all of them in this list.

What this course is not

This course might not be right for you if you are looking for:

  • A course to teach you general programming, machine learning or AI skills. Our resources page lists a number of courses and textbooks that can help with this. Note that these skills are not hard prerequisites for taking our AI alignment course.
  • A course that teaches general ML engineers common techniques for how to make systems safer. Instead, this course is for people involved or interested in technical AI alignment research, e.g. investigating novel methods for making AI systems safe.
  • A course that covers all possible AI risks and ethical concerns. Instead, our course primarily focuses on catastrophic risks from future AI systems. That said, many of the methods that target catastrophic risks can also be applied to other areas of AI safety.
  • A course for government policymakers and related stakeholders to learn about AI governance proposals. Our AI Governance course is likely a much better match.

Course Structure

The course comprises 8 weeks of reading and small-group discussions, followed by a 4-week capstone project. The time commitment is around 5 hours per week, so you can engage with the course alongside full-time work or study.

Weeks 1-8 – Learning Phase

This is where you’ll work through the main course curriculum, designed with AI safety experts at OpenAI and the University of Cambridge.

Each week involves 2-3 hours of readings and independent exercises, plus a 2-hour live session (via video call, which we’ll arrange at a time that suits you).

The live sessions are where you work through activities with your cohort of around 5 other participants. These sessions are facilitated by an expert in AI safety, who can help you navigate the field and answer questions.

If accepted onto the course, we’ll ask for your availability so we can find a time slot that suits you (including evening or weekend sessions). There’s flexibility to change sessions in case your availability changes.

Compared to studying the curriculum independently, participants tell us they particularly value the live cohort sessions: the facilitator creates an engaging discussion space, the activities are designed to enable effective learning, and you develop deep relationships with highly motivated peers.

Weeks 9-12 – Project Sprint

The project stage is an opportunity for you to put the knowledge and skills you’ve gained during the first 8 weeks into practice. In previous rounds, strong projects have helped participants get jobs at leading AI companies or directly influence government policy.

It’s up to you what idea you’d like to work on. If you’d like suggestions, we’ll have a list of projects that we think would be valuable, based on our experience and connections with field experts.

Once you’ve got your idea, we’ll introduce you to a cohort of people working on similar projects. You’ll work on your project independently, and can put as much time as you’d like into it. Each week you’ll review your progress and receive feedback in 1-2 hour facilitated online discussions with your cohort.

At the end of the project stage, you’ll share your project with other people on the course. You’ll also be able to talk about your experience completing this project when applying for future opportunities.

After submitting a project, you’ll receive a certificate of completion.

Dates

  • March 2024 course
  • June 2024 course

These dates are our current best estimates as to when we will be running the courses, and are subject to change.

We expect to run a version of the course roughly every 3-4 months.

Application process

Apply through our online application form, and you should receive an email confirmation a few minutes after you send your application.

We make most application decisions within 1-2 weeks of the application deadline. Do keep an eye on your emails during this time, as if you’re accepted we’ll need you to confirm your place. All legitimate emails regarding the course will come from @bluedot.org.

If you have any questions about applying for the course, do contact us.

 

Application tips

We take a holistic approach to evaluating applications, considering a number of factors we think are important to getting the most from the course.

That said, there are some general tips that can help ensure you’re putting your best foot forward:

  • When talking about projects you’ve done, focus on your specific contributions, rather than explaining the details of the project itself.
  • If you include any links in your application, make sure the link is correct and can be accessed publicly. Opening the link in an Incognito or Private Browsing window is a good way to test this. It’s also often helpful to give us a summary of the highlights or takeaways we should get from what you’ve linked us to.
  • Before you hit ‘Submit’, review your answers and double check they are answering the question asked. Additionally, make sure your contact information (especially your email address) is correct.

Requirements

  • Availability of 5+ hours per week to study AI alignment
  • A reliable internet connection and webcam (built-in is fine) to join video calls
  • English language skills sufficient to constructively engage in live sessions on technical AI topics

Optional payment

There is no mandatory payment for this course. At the end of the course, you will have the option to pay an amount that you are comfortable with and that you feel reflects the value the course has brought you.

BlueDot Impact Ltd, the organisation that runs this course, is a non-profit based in the UK and is entirely philanthropically funded. This course costs us roughly £600 per participant to run, and any payment you make would be used to subsidise places for future participants on our courses.

Running independent versions of the course

Friends, workplace groups, student societies and other local organisations are welcome to run versions of our courses. Provided you follow our guidance, you can use the public curriculum and these session plans.

Any other questions?

If you’re not sure whether to apply, we recommend that you put in an application. If you have any other questions, do contact us!


Endorsements & testimonials

Sarah Cogan
Software Engineer at Google DeepMind
I participated in the AISF Alignment Course last year and consider it to be the single most useful step I've taken in my career so far. I cannot recommend the program strongly enough.
Jun Shern Chan
Research Contractor at OpenAI
The AISF Alignment Course was my first real contact with the alignment problem, and I got a lot out of it: I really enjoyed the discussions+content, but more than that I was able to get connected with many people whom I later started working with, enabling me to leave my previous robotics job and transition to full-time alignment research.
Marlene Staib
Research Engineer at Google DeepMind
The best thing about the course for me was the community - on Slack and in our discussion groups. It makes it easier to feel part of something and commit to the ideas we were exploring.
