Governance Course
AI Safety Fundamentals
Applications to join this course and the AI Safety Fundamentals Community are currently open.
See the course details and apply by 25th June 2023.
Key Points
This curriculum is designed to be an efficient way for you to gain foundational knowledge for doing research or policy work on the governance of transformative AI (TAI)—AI with impacts at least as profound as those of the Industrial Revolution [1]. It covers research (up to 2022) on why TAI governance may be important to work on now, what large-scale risks TAI poses, which actors will play key roles in steering TAI’s trajectory, and what strategic considerations and policy tools may influence how those actors will or should steer. Existing work on these subjects is far from comprehensive, settled, or watertight, but it is hopefully a useful starting point.
This curriculum can be read independently or as part of a discussion-based virtual course.
Course Logistics
Participants are divided into groups of ~4–6 people, matched based on their prior knowledge of large-scale AI risks and governance. The course consists of 8 weeks of readings, plus a final project. Each week (apart from Week 0), each group and their discussion facilitator will meet for 1.5 hours to discuss the readings and exercises. After Week 7, participants will have several weeks to work on projects of their choice, which they will present at the final session.
Some high-level approaches that informed the syllabus design
Focus on transformative AI: Emphasizes the especially large-scale potential impacts of future AI systems (see Week 0 for context)
Beginner-friendly: Does not assume significant prior knowledge
Foundational: Aims to give a broad, high-quality overview of the basics
Busy-schedule-compatible: Prioritizes relatively short readings
Problem-first: Emphasizes understanding the relevant problems and risk scenarios, including technical basics, to better generate and prioritize among paths to impact
Pluralistic: Aims to include a range of the (very different) views that are prominent among governance and policy specialists tackling large-scale AI risks
In terms of format, each week has “core readings” and “additional readings.”
The core readings are the heart of this curriculum; they are heavily filtered for importance, quality, conciseness, and accessibility.
The additional readings are optional, for readers looking for more detail on a particular subject. Their relevance, quality, conciseness, and accessibility vary more.
Among the additional readings, readings in bold are honorable mentions—ones that almost made it into the core readings.
Topics for each week
This syllabus is structured into two parts:
Part I: Anticipating AI’s Impacts—This part dives into broader (not governance-specific) and somewhat technical aspects of AI risks, with the motivating idea that solidly understanding problems is very helpful both for identifying potential solutions and for prioritizing among them.
Week 0 (Recommended Background): AI, Machine Learning, and their Potentially Extreme Stakes
Week 1: Introduction and AI Forecasting
Week 2: Technical Challenges of AI Alignment
Week 3: Potential Extreme Risks from AI
Part II: Steering AI’s Impacts—This part dives into how governance decisions can exacerbate or help address AI risks.
Week 4: Strategy and Policy Ideas
Week 5: Key Non-governmental Actors
Week 6: International Competition and Cooperation
Week 7: Career Advice and Opportunities
[1] This definition of TAI is from this article, though the term has varied uses.
Syllabus
Audio versions of most core readings are available on Apple Podcasts, Google Podcasts, Spotify, and this RSS feed.