What can I do to stop the nanobots from killing us? An analysis of career pathways in AI governance.
Background and Motivations
I care about reducing existential risk and safeguarding the long-term future.
These motivations have influenced every career decision I have made. All my work experience has been in environmental policy, as I had until recently considered climate change the greatest threat to future flourishing. After graduating university in 2022, I enrolled in the Green Corps program, an intense year-long fellowship which saw me crisscrossing the country to lead environmental campaigns.
I have more recently come to view misaligned AI as an even greater threat and, given how few people work in AI governance, believe I might make an outsized impact in this space.
Long-Term Goals:
Often, career exercises ask what kind of impact one would like to have in 20 years. This question is tricky to answer, since I have no idea what the world will look like. We could all have doomed ourselves by then, or we could have steered away from doom, rendering further contributions unnecessary.
I’ll attempt to answer a more specific question: “What impact would I want to have in an in-between scenario, where we are neither doomed nor worry-free? How can I best increase the probability of a flourishing future?”
One consideration involves my skepticism about the positive potential of superintelligent AI. Many in the AI safety community acknowledge risks from misaligned AI but seem to believe that, if we could only align superintelligence with human values, this could prove a boon to mankind. I disagree. While misaligned superintelligence killing everyone is obviously catastrophic, even aligned superintelligence could prove disastrous. Such an entity could seize control of global institutions, disempower human decision-makers, and shepherd the future development of humanity “for our own good.” We could become a domesticated species, alive and happy but with our trajectory determined by a utility-maximizing agent beyond our control. Since I value human self-determination, I consider this outcome unacceptable.
This perspective influences my governance priorities. Different approaches to governance seem to fall along a spectrum. One extreme argues that we should embrace AGI as inevitable and do everything possible to safeguard its development; another, that the genie is not yet out of the bottle and that we should endeavor to prevent AGI from ever being built. I find myself near the second camp and am attracted to such proposals as a global moratorium on frontier AI.
I would therefore express my desired impact as: hoping to do whatever possible to prevent the emergence of AGI.
Career Capital, Interests, and Pathways:
How might I hope to achieve this impact?
My campaign work thus far has strengthened my skills in management, operations, organizing, communications, coalition-building, and advocacy. Given this career capital, along with my interests, I am considering three broad pathways in which I might achieve success:
Pathway 1: Policy research and writing. This could include working to characterize policy proposals, resolve existing uncertainties, and draft concrete legislation.
Personal Fit: The idea of working in a collaborative academic environment appeals to me. I am adept at synthesizing information and conveying complex ideas to non-technical audiences. I might see myself as a research generalist (working on an array of policy proposals) rather than a specialist (gaining expertise in a super-narrow field) but would be open to multiple roles.
Disadvantages: I have little experience in think-tank-adjacent roles (I authored one report on renewable energy in 2021, but that’s about it). I also have a weak computer-science background and would be poorly suited to highly technical research, though I could more easily contribute to policy-heavy research.
Organizations to which I have applied:
- RAND Corporation Technology and Security Policy Fellowship
- CAIDP AI Policy Clinic
- GovAI 2024 Summer Research Fellowship
- Astra Fellowship
- Horizon Institute for Public Service
Pathway 2: Policy advocacy. This could mean meeting with lawmakers to advance policies, building grassroots support to pressure decision-makers, or educating the populace about AI risk.
Personal Fit: I have achieved success in campaigns convincing two state legislatures (Massachusetts and California) to pass renewable-energy and pesticide-restriction bills, respectively. I excel at conveying complex policy concepts to lawmakers, organizing coalitions of support for policy changes, and crafting detailed plans toward achieving campaign goals.
Disadvantages: I would fear inadvertently advocating for policies that do more harm than good. Take the hotly debated example of a pause on frontier AI, which might give humanity more time but could also backfire, e.g. by incentivizing development in less safety-conscious nations. If I’m not careful, I might promote policies with negative externalities and make humanity worse off.
Also, while my skillset is well-suited to this work, I don’t enjoy certain aspects of it. I am introverted by nature and am disinclined toward excessive social interaction. I’m therefore less attracted to grassroots advocacy but would be more open to grasstops and writing-focused advocacy.
Organizations to which I have applied:
- Center for AI Policy
- Tarbell Fellowship
Pathway 3: Operations and field-building. This work would help organizations run smoothly and expand their impact as much as possible, and would “build the field” of AI safety, e.g. by recruiting promising university students to pursue safety careers or by expanding infrastructure available for startup organizations.
Personal Fit: I have operations experience, most notably when I directed Environment California’s summer canvass, managed 30 full-time staff, and spearheaded the logistics that come with a team that size. Much of my job also involved staff recruitment and training, leading me to believe I would be effective at recruiting people to work in AI safety.
Disadvantages: My main concern is that this work could prove boring. Operations would be less intellectually stimulating than research, and I might lose interest and feel disconnected from the organization’s overall mission. A few factors could mitigate these worries: if, in an operations role, I remained surrounded by a community of researchers rather than feeling isolated; or if, in a field-building role, I engaged with promising individuals and organizations, which would keep me motivated.
Organizations to which I have applied:
- Open Philanthropy’s Global Catastrophic Risks Team (various operations roles)
- BERI MATS (various operations roles)
- Centre for Effective Altruism (University Groups Coordinator)
Career Reflections from EA Global:
Recently, at the EAGx Virtual Conference, I met with several people working in governance research, advocacy, and operations. I have several takeaways from these conversations:
- I am disinclined to pursue operations-type work after speaking with someone who does operations at a large EA-affiliated organization. A common mistake, according to him, is for EAs who don’t enjoy operations to pursue it anyway in order to “open the door” to other opportunities. The problem is that operations doesn’t actually open many doors to non-operations roles: it’s uncommon, for instance, to go from operations generalist to researcher at a given organization. Moreover, operations staff don’t communicate that much with research or advocacy staff within (larger) organizations, so it can be easy to lose sight of the organization’s mission and become demotivated.
- Much more needs to be done in policy advocacy. I spoke with two people active in advocacy, one from a brand-new startup and one from a group that’s been around for a few years. In both cases, they were actively fleshing out policy proposals and developing strategies to convey their messages to Congress. I got the impression that there remain several unexplored avenues in the AI policy space and that there is a need for more cognitive labor to explore these avenues.
- Research, writing, and advocacy are not as distinct as I had assumed. One person I spoke to works for an organization in the process of drafting legislative proposals and conveying these to lawmakers. This organization only has a few employees, and I got the impression of lots of cross-disciplinary collaboration between them. (This might be less true in larger organizations.) I might still specialize in a specific pathway, but I wouldn’t be working in a vacuum.
- Independent research has considerable potential. I had thought of research as occurring primarily within established think-tanks, but talking to governance researchers convinced me that independent research can be a key stepping-stone. I might, for instance, explore an under-characterized governance mechanism, gain feedback from others in the AI safety community, and, if successful, showcase my contributions to the research landscape to secure a longer-term position.
My Pathways, Ranked:
Based on these considerations, I have developed a method for ranking possible pathways. I assess three variables, each on a scale of 1-10:
- Well-suitedness (how well-suited am I for this work?)
- Enjoyability (how much would I enjoy it?)
- Positive Impact (how much good can I expect this work to achieve?)
These variables are then multiplied together to produce an overall score.
I prefer multiplication to addition since it favors pathways that score mid-to-high across the board over paths that score high in two variables but very low in the third: (7 * 7 * 7 = 343) beats (10 * 10 * 2 = 200). This means, for instance, that I’d be disinclined to pick a path where I’d expect to be well-suited and have a high positive impact but which I wouldn’t enjoy.
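For concreteness, here is a minimal Python sketch of this scoring scheme, using my estimates from the rankings that follow (the totals below round these products to one decimal place):

```python
# A minimal sketch of the multiplicative ranking described above.
# Each tuple holds my 1-10 estimates for (well-suitedness, enjoyability,
# positive impact); the numbers come from the rankings below.
from math import prod

pathways = {
    "Policy research and writing": (6.5, 7.5, 6.5),
    "Policy advocacy": (8, 5.5, 7),
    "Operations and field-building": (7, 3.5, 6.5),
}

# Multiplying (rather than adding) means a single low score drags the
# whole product down, which is exactly the property I want.
for name, scores in sorted(pathways.items(), key=lambda kv: -prod(kv[1])):
    print(f"{name}: {prod(scores)}")

# Policy research and writing: 316.875
# Policy advocacy: 308.0
# Operations and field-building: 159.25
```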
Top Choice: Policy Research and Writing.
- Well-suitedness: 6.5. I am uncertain about this estimate, given my lack of think-tank experience; however, my skills in synthesizing information and distilling complex policy concepts could prove useful.
- Enjoyability: 7.5. I am a nerd.
- Positive Impact: 6.5. Research could better characterize and advance neglected proposals, but I worry about politicians ignoring researchers, especially due to AI-company lobbying.
- Total Score: 316.9
Second Choice: Policy Advocacy
- Well-suitedness: 8. Most of my experience has been in advocacy.
- Enjoyability: 5.5. I enjoy some aspects of advocacy (writing to policymakers) more than others (grassroots communications).
- Positive Impact: 7. Advocacy is needed to pass ambitious policies. On the other hand, there’s some risk of inadvertently implementing harmful proposals.
- Total Score: 308
Third Choice: Operations and Field-Building.
- Well-suitedness: 7. I have experience in office management, recruitment, and planning.
- Enjoyability: 3.5. Risks of boredom and disconnect from organizational mission.
- Positive Impact: 6.5. Possibly higher if helping establish fledgling organizations; possibly lower if working with already-robust organizations.
- Total Score: 159.3
My top two choices are clustered close together; my third choice is farther behind. I therefore intend to prioritize opportunities in research and advocacy, while treating operations as a “backup” if these paths don’t work out.
Surprise! An unconventional career option.
In addition to everything described above, I want to become a science fiction novelist.
I am currently 30,000 words into my first novel. In it, humanity has become an interplanetary civilization encountering alien intelligence for the first time, determining how to approach this discovery, and grappling with the path(s) it might take as a species. My participation in AISF has influenced my writing: the interplanetary government that plays a crucial role in the story emerged from a global governance structure in the late 21st century, which itself emerged from a coordination mechanism to prevent an AI catastrophe.
More broadly, this novel allows me to express my beliefs about the human future. I contend that even if we manage to avoid AI-induced suicide, our work will still be ahead of us: we’ll need to decide what kind of species we become. My book is an attempt to explore these challenges. If I’m lucky, I could join the ranks of science-fiction authors who have shaped our perception of humanity’s future.
I might characterize my science-fiction career as follows:
- Well-suitedness: 9.5. I think I have the potential to become a capable writer.
- Enjoyability: 10. I could write all day.
- Positive Impact: 2.5. My work is unlikely to become famous, but if it did, it could influence others to think seriously about the future, including the potential of global governance to reduce AI risk.
- Total Score: 237.5.
This path need not exist to the exclusion of others; I can certainly write while working full-time. I am, however, disinclined to pursue extremely demanding jobs (50-60+ hours/week) that would sap my cognitive energy.
Where to go from here:
Based on my most promising career pathways, I see a few obvious next steps:
- Upskilling, particularly for governance research. I could upskill effectively by focusing on shorter-term research and fellowship opportunities that set me up for success in a longer-term role. If I fail to secure such opportunities, I can work on semi-independent projects, gain feedback and mentorship, and thus increase my likelihood of securing a research position over time.
- Continuing to apply for jobs and fellowships, prioritizing those in research and advocacy. I have already applied to a dozen opportunities; if none of these bear fruit, I intend to keep applying. I will also likely apply to operations-type jobs but will treat these as more of a backup option.
- Continuing to write. Regardless of other plans, I expect to finish my novel early next summer.
My “Plan B”:
Everything I’ve described so far (other than my novel-writing) is part of my Plan A, defined as “directly pursuing AI governance-related work in some capacity.” But this space is extremely competitive, and I could spend months applying and not get any opportunities. What then?
My Plan B is to continue working in environmental policy. I would treat this as an upskilling opportunity, spending 1-2 years in environmental policy before transitioning into AI policy. I would seek out research-heavy positions, e.g. at a policy think tank, to compensate for my current lack of research experience; the skills developed in such a role would hopefully make it easier to move into AI governance research thereafter. I would also continue pursuing independent AI governance research projects, even while working full-time.
One way or another, I expect to work in AI policy within 2 years, and certainly within 5. If this hasn’t happened, then either I have become much more optimistic about humanity’s chances, or something has gone wrong.
Concluding Remarks: Why Not Just Bike Across Mongolia?
I sometimes feel intimidated by this smorgasbord of considerations. Working in AI governance will likely be stressful, and proximity to doomerism may take its toll on my well-being. It would be much easier to spend my days doing things completely unrelated to existential risk, like riding my bike around the world.
Then I remind myself how high the stakes truly are: how much potential we risk squandering if we don’t get our act together, and how wonderful the future could be if we get things right.
But I’ve come to realize that these somewhat-abstract motivations aren’t enough on their own. Much of my effectiveness, enthusiasm, and overall happiness will be determined not by big-picture philosophies but by how much I enjoy my work day-to-day. 80,000 hours is a long time. My ideal career path, therefore, is one which makes an impact on the largest scale but also provides happiness and satisfaction on the smallest scale. These considerations may sound “selfish,” but selfishness and impact are intertwined: to make an impact, I need to stay motivated over a sustainable period; to stay motivated, I need to enjoy what I’m doing.
Granted, I might not enjoy AI governance quite as much as riding my bicycle across Mongolia, but I can live with that.