
Working in technical AI Safety

This guide provides our advice for pursuing a career in technical AI safety and alignment.

Note that there are many pathways to having an impact on AI Safety. We encourage you to think about other opportunities that aren’t on this list – many of which will not be directly focused on AI Safety – that could help you contribute to AI Safety in the long run. (We are also still in the process of adding more opportunities to this list.)

More information, feedback & suggestions

We think many of these opportunities will benefit from knowledge of the field equivalent to our AI safety fundamentals programme curriculum, but encourage you to apply regardless of whether you’ve had the chance to do the course or not.

We also recommend and take inspiration from 80,000 Hours’ jobs board. Check that out for a wider range of opportunities.

If you’d like to suggest any opportunities you think we’ve missed, let us know. We’d be happy to consider them, though note that we have selective criteria for what we choose to display.

The opportunities board is a new feature. Please give us your feedback here if you have thoughts on how it could be improved or serve you better.

Sign up to our newsletter to be notified when new opportunities are added.

Types of work you can do in technical AI safety

There are many ways to contribute to the technical development of safe AI:

  • Research; making progress on research agendas that aim to devise algorithms and training techniques that produce AI and/or ML systems that are aligned with human intentions, are more controllable, or are more transparent.
  • Engineering; implementing alignment strategies in real systems involves efforts from ML engineers, software engineers, infrastructure engineers, and the other technical roles found in a modern tech company.
  • Support work; e.g. recruitment and company operations at fast-growing organisations working on building aligned artificial intelligence. We don’t cover advice for this in our guide yet.
  • Other roles; as the field grows, roles in alignment will likely mirror roles in the wider tech-company ecosystem; for example, product development, design, and technical management. There are also likely other opportunities that already exist today which we haven’t included in this list.

Read our in-depth guide to working on technical alignment

Our in-depth guide thoroughly covers the range of technical work you could do, such as research and engineering, and how to get started with each.

We think reading each of these pieces carefully and in order will help you make a serious plan to contribute to solving the alignment problem.

We don’t currently cover support work in much depth in our writing. This work ranges from recruitment and operations roles, which often require good knowledge of alignment, to office management and other operational work that is less alignment-specific.

Read our in-depth guide

Advice for pursuing technical AI safety

Developing your own views on alignment

If we’re going to make progress on the research agendas that matter most for reducing risk from misaligned AI, it’s important that the people doing the work prioritise the projects most likely to be on that path.

If you want to work in alignment research, it’s important that you develop your own views on the mechanics of alignment to help you make that prioritisation call when the time comes.

This piece covers some exercises you can do to try to develop your own views. The discussion in that post isn’t necessarily aimed at people who want to become research leads; for that, you’ll likely need to dive deeper.

Rohin Shah (DeepMind) argues that research leads need deeper views in his comment on that piece, as well as on his website (under the ‘Research’ heading).

Personalised 1-1 career advice

We recommend checking out 80,000 Hours, an organisation that researches and advises on the most impactful careers, including working on AI safety. They have delivered advice to past iterations of the AI safety fundamentals programme.

You can sign up for 1-1 careers advice here, and we strongly recommend looking at their career planning resource in your own time.

Productivity

Improving productivity is a great way to have more impact. Start with this series of posts called Peak Behind the Curtain, which explores some of the daily habits of people who work on AI safety full time (and some other people, too!). If that sounds interesting, consider finding a productivity coach for yourself.

Other valuable advice

View Richard Ngo’s careers guide. We see this as a briefer, shallower introduction than our in-depth guide above, but it may be useful to read another perspective.

