Resources

This is a compilation of resources we are aware of that are likely to be useful to people in the AI safety space.

Introductions to AI safety

For a popular audience

We think these are great, low-cost resources to share with others who might be interested in learning what AI safety is about. They are also worth looking at before applying to our programme, to get a feel for what the content will cover.

 

More academic introductions

These resources are useful for people who want a more comprehensive introduction to AI safety than the previous section offers, and they take a variety of approaches. Some are specifically targeted at machine learning practitioners, which we feel is useful for connecting elements of AI safety with the forefront of machine learning research.

  • Bibliography of research areas that need further attention according to the Center for Human-Compatible AI (CHAI), UC Berkeley. On that site, you can set a priority threshold for which materials to show.

  • Unsolved Problems in ML Safety – Dan Hendrycks. This paper frames many problems in AI safety in the context of modern machine learning systems. We think it’s a good introduction for machine learning academics hoping to learn more about the safety problems they could help work on.

  • ML Safety Scholars course – Dan Hendrycks. This course takes a machine-learning-first approach to AI safety, which forms the basis of a closely related field Hendrycks terms ‘ML safety’. It may be a good way to learn safety-relevant ML techniques and ML safety concepts at the same time.

  • Alignment Forum curated sequences

Introductions to ML engineering

Learning to code:

If you’re interested in working on technical AI safety via machine learning, it’s overwhelmingly likely that you’ll use Python. These are our recommended resources for picking up the Python programming language.

 

Basic ML introductions:

If you’re interested in working on technical AI safety, the current AI paradigm makes machine learning knowledge essential. The resources below are not specific to safety, but they are a good place to start learning more about machine learning.

 

More advanced:

 

ML textbooks:

Podcasts and newsletters

Both technical and policy

  • 80,000 Hours’ AI podcasts – Discussions with prominent researchers on risks from frontier AI. Most episodes are very accessible to people new to the field.

  • AXRP: the AI X-risk Research Podcast – Daniel Filan (UC Berkeley) interviews leading researchers in the field of AI safety. Most episodes are quite technical, but there are a few interviews with AI governance researchers.

  • The Inside View podcast – This podcast strikes a middle ground between the 80,000 Hours podcasts and AXRP in terms of how technical the content is. It has several interviews with policy-oriented guests.

  • ML Safety Newsletter – by Dan Hendrycks. The latest news and research from the ML community on making ML safer.

 

Technical safety focus

 

Strategy & policy focus

  • Import AI – Jack Clark rounds up the latest progress towards advanced AI. This newsletter mostly takes a policy angle, though it also covers technical advances.

  • EU AI Act Newsletter – Researchers at the Future of Life Institute summarise the latest key developments in the EU AI Act, with a focus on general, frontier AI systems.

  • policy.ai – A biweekly newsletter on AI policy by the Center for Security and Emerging Technology (CSET).

  • Digital Bridge – by Politico. A “weekly transatlantic tech newsletter uncovers the digital relationship between critical power-centers through exclusive insights and breaking news for global technology elites and political influencers.”

  • Jeffrey Ding’s ChinAI – by Jeffrey Ding. “ChinAI bets on the proposition that the people with the most knowledge and insight [on AI development in China] are Chinese people themselves who are sharing their insights in Chinese.”

Funding for AI safety work
  • Long-Term Future Fund (LTFF) – EA Funds

    • Applying to the EA Funds is an easy and flexible process, so we recommend you err on the side of applying if you’re not sure.

    • They have historically funded: up-skilling in a field to prepare for future work; movement-building programs; scholarships, academic teaching buy-outs, and additional funding for academics to free up their time; funding to make existing researchers more effective; direct work in AI safety; seed money for new organizations; and more.

    • If you’re not sure where to apply, we recommend you default to this.
  • Career development and transition funding – Open Philanthropy

    • This program aims to provide support – primarily in the form of funding for graduate study, but also for other types of one-off career capital-building activities – for early-career individuals who want to pursue careers that help improve the long-term future.

    • As with the EA Funds, applying is an easy and flexible process.

  • Open Philanthropy Undergraduate Scholarship

    • This program aims to provide support for highly promising and altruistically-minded students who are hoping to start an undergraduate degree at one of the top universities in the USA or UK, and who do not qualify as domestic students at these institutions for the purposes of admission and financial aid.

  • Future of Life Institute – Grants

    • Many different grant opportunities: project proposals; PhD fellowships; postdoctoral fellowships; and support for professors to join their AI Existential Safety community.

  • Your university or government might fund you to do research with them, especially for research internships or PhDs.

  • The 80,000 Hours jobs board lists other open funding opportunities in AI safety.

Other resource lists
