AI Safety Fundamentals

Courses designed with AI Safety experts. Apply to our AI Governance course before 5 Jan.

What is AI Safety?

AI safety focuses on developing technologies and governance interventions to prevent harms caused by AI systems.

Artificial intelligence could be one of the most impactful technologies developed this century. However, ensuring these systems are safe is an open problem, which encompasses a wide range of AI alignment, governance and ethics challenges.

Tackling these challenges will require a concerted effort from researchers, policymakers and many others in the decades to come.

We run courses that give individuals the knowledge, skills and connections to contribute to this effort. Our current courses primarily focus on helping researchers and policymakers prevent catastrophic risks arising from future AI systems.

Learn more

Our courses on AI Safety

Join a 2,000+ strong community of people who’ve done our courses on AI Safety.

Our online courses are 12 weeks long and can be done alongside work or study in about 5 hours per week. You'll follow an up-to-date curriculum designed by world-leading experts, supported by a trained facilitator in live small-group classes.

Our graduates have gone on to lead safety teams at top AI labs, become senior government policymakers working on AI, and found startups working on AI safety.

Alignment Course

For people with a technical background interested in AI alignment research. Explores research agendas for aligning AI systems with intended goals.

Learn more
Governance Course

For policymakers and similar stakeholders interested in AI governance mechanisms. Explores policy levers for steering the future of AI development.

Learn more

Testimonials

Jun Shern Chan
Research Contractor at OpenAI
The AISF Alignment Course was my first real contact with the alignment problem, and I got a lot out of it: I really enjoyed the discussions+content, but more than that I was able to get connected with many people whom I later started working with, enabling me to leave my previous robotics job and transition to full-time alignment research.
Sarah Cogan
Software Engineer at Google DeepMind
I participated in the AISF Alignment Course last year and consider it to be the single most useful step I've taken in my career so far. I cannot recommend the program strongly enough.
Kendrea Beers
Horizon Junior Fellow at the Center for Security and Emerging Technology
This was the most positively impactful course I’ve ever taken (unless you count the high school class in which I got to know my husband!), as it gave me the background to engage with the AI safety and governance communities. I don’t know how I would have gotten up to speed otherwise, and it opened the door to pretty much everything I’ve done professionally for the past couple years.
Matthew Bradbury
Senior AI Risk Analyst in the UK Government
This course was the first step in my career in AI safety. The BlueDot course allowed me to bridge the gap between my previous career as an economist and my current work in the UK Government AI Directorate. The course provided me with a great introduction to the field, and allowed me to meet some great people with whom I am still friends. I'd recommend this course to anyone!
Alexandra Souly
Technical Staff at the UK AI Safety Institute
Thanks to this course, I became seriously interested in pursuing AI Safety, which prompted a career change. With the knowledge I gained from the course, I secured funding that allowed me to quit my job and return to university to upskill in AI. Through the community, I discovered other opportunities to learn about and contribute to AI safety, such as participating in MLAB and securing an internship at CHAI.
Michael Aird
Advisor at the Institute for AI Policy and Strategy
This is probably the best public reading list on AI Governance. It is the public list I most often recommend for learning about AI governance, including to new staff on my research team.
Marlene Staib
Research Engineer at Google DeepMind
The best thing about the course for me was the community - on Slack and in our discussion groups. It makes it easier to feel part of something and commit to the ideas we were exploring.

Projects

Here are the top projects from students on our AI safety courses.

Our students work on these projects part-time for 4 weeks, applying what they've learned from the course to take their next steps in AI safety. See all the projects here.
