AI Safety Fundamentals

Explore proposals to positively shape advanced AI through our course, designed with AI safety experts at OpenAI and the University of Cambridge.

What is AI Safety?

New technologies bring new opportunities for progress, but often come with novel risks.

We believe advanced artificial intelligence (AI) could be one of the most impactful technologies developed this century. However, present-day machine learning (ML) systems have many open problems that could be exacerbated as their capabilities become more advanced, and we do not yet have the governance infrastructure in place to ensure these systems are developed safely and are not used maliciously.

Developing and implementing solutions to these problems ahead of time will require a concerted effort from researchers, policymakers and others in the decades to come. We run courses that help people learn about these potential harms, the interventions currently being proposed, and how they can use their skills and careers to mitigate these risks.

Our courses on AI Safety

Please register your interest to be notified when future dates are finalised.

By taking part in our courses, you’ll be joining a community of others learning about AI Safety and discovering future opportunities. Courses are 12 weeks long, part-time, virtual and completely free. See the curricula for more details. You can also work through the course resources in your own time before participating in the next round of the course.

Alignment Course

Applications for early 2024 are now open.

Apply now

Governance Course

We will notify you when the next round of applications opens.

Register interest

Courses and Community

Over 2,000 people have taken part in the AI Safety Fundamentals courses and joined the community.

Endorsements & testimonials

Marlene Staib
Research Engineer, Google DeepMind
“The best thing about it for me was the community - on Slack and in our discussion groups. It makes it easier to feel part of something and commit to the ideas in the course.”
Michael Aird
Acting Co-Director, Institute for AI Policy and Strategy
“This is probably the best public reading list on AI Governance. It is the public list I most often recommend for learning about AI governance, including to new staff on my research team.”
Jenny Xiao
Affiliate, Centre for the Governance of AI
“The Governance Course is probably the easiest way to kick-start a career in AI Governance. You gain an overview of the field and a community of like-minded peers.”
Buck Shlegeris
CTO, Redwood Research
“When I speak to Alignment Course graduates, I find I can safely assume knowledge and have a more productive conversation with them.”
Sarah Cogan
Software Engineer
“The Alignment Course was incredibly helpful to talk through my views and objections with peers who were friendly and thoughtful; I had a much better understanding of AI safety concepts afterwards.”

Apply here

If you are already well informed about AI Safety, you can join our network of facilitators and help us to run the programme! Find out more about facilitating.

Register interest

Contribute to mitigating risks from AI

We keep you up to date with the latest jobs, fellowships and training programmes in AI safety. You can also sign up to our newsletter.

See opportunities

Organisations where graduates work

© 2023. BlueDot Impact is funded by Open Philanthropy, and is a project of the Effective Ventures group, the umbrella term for Effective Ventures Foundation (England and Wales registered charity number 1149828, registered company number 07962181, and also a Netherlands registered tax-deductible entity ANBI 825776867) and Effective Ventures Foundation USA, Inc. (a section 501(c)(3) public charity in the USA, EIN 47-1988398).


Designed by And—Now

