What is AI Safety?
AI safety focuses on developing technologies and governance interventions to prevent harms caused by AI systems.
Artificial intelligence could be one of the most impactful technologies developed this century. However, ensuring these systems are safe remains an open problem, encompassing a wide range of AI alignment, governance and ethics challenges.
Tackling these challenges will require a concerted effort from researchers, policymakers and many others in the decades to come.
We run courses that give individuals the knowledge, skills and connections to contribute to this effort. Our current courses focus primarily on helping researchers and policymakers prevent catastrophic risks from future AI systems.