AI Safety Fundamentals – BlueDot Impact

AI Safety Fundamentals

Start your AI safety journey with our Future of AI course, designed for people with no technical background — just bring your curiosity about how AI will reshape our world.

What is AI Safety?

AI safety focuses on developing technologies and governance interventions to prevent harms caused by AI systems.

Artificial intelligence could be one of the most impactful technologies developed this century. However, ensuring these systems are safe is an open problem, which encompasses a wide range of AI alignment, governance and ethics challenges.

Tackling these challenges will require a concerted effort from researchers, policymakers and many others in the decades to come.

We run courses that give individuals the knowledge, skills and connections to contribute to this effort.

Our current courses primarily focus on preventing catastrophic risks arising from future AI systems. We focus on catastrophic risks because we think they are neglected relative to their scale.

Learn more

Build your foundations

Join a 2,000+ strong community of people who’ve done our courses on AI Safety.

We recommend most people start with our free, self-paced online course designed for people with no technical background. No jargon, no coding, no prerequisites — just bring your curiosity about how AI will reshape our world.

Future of AI Course

A self-paced, 2-hour course designed for people with no technical background to learn how AI will reshape our world. No application required!

Start learning

Our graduates work at

Testimonials

Jun Shern Chan
Research Contractor at OpenAI
The AISF Alignment Course was my first real contact with the alignment problem, and I got a lot out of it: I really enjoyed the discussions+content, but more than that I was able to get connected with many people whom I later started working with, enabling me to leave my previous robotics job and transition to full-time alignment research.
Sarah Cogan
Software Engineer at Google DeepMind
I participated in the AISF Alignment Course last year and consider it to be the single most useful step I've taken in my career so far. I cannot recommend the program strongly enough.
Kendrea Beers
Horizon Junior Fellow at the Center for Security and Emerging Technology
This was the most positively impactful course I’ve ever taken (unless you count the high school class in which I got to know my husband!), as it gave me the background to engage with the AI safety and governance communities. I don’t know how I would have gotten up to speed otherwise, and it opened the door to pretty much everything I’ve done professionally for the past couple years.
Matthew Bradbury
Senior AI Risk Analyst in the UK Government
This course was the first step in my career in AI safety. The BlueDot course allowed me to bridge the gap between my previous career as an economist to now working in the UK Government AI Directorate. The course provided me with a great introduction to the field, and allowed me to meet some great people with whom I am still friends. I'd recommend this course to anyone!
Alexandra Souly
Technical Staff at the UK AI Safety Institute
Thanks to this course, I became seriously interested in pursuing AI Safety, which prompted a career change. With the knowledge I gained from the course, I secured funding that allowed me to quit my job and return to university to upskill in AI. Through the community, I discovered other opportunities to learn about and contribute to AI safety, such as participating in MLAB and securing an internship at CHAI.
Michael Aird
Advisor at the Institute for AI Policy and Strategy
This is probably the best public reading list on AI Governance. It is the public list I most often recommend for learning about AI governance, including to new staff on my research team.
Marlene Staib
Research Engineer at Google DeepMind
The best thing about the course for me was the community - on Slack and in our discussion groups. It makes it easier to feel part of something and commit to the ideas we were exploring.

Projects

Here are the top projects from students on our AI safety courses.


Students work on these projects part-time over 4 weeks, applying what they learned on the course to take their next steps in AI safety. See all the projects here.
