Why are people building AI systems?
AI could bring significant rewards to its creators. However, the average person seems to have wildly inaccurate intuitions about the scale of these rewards.
By exploring conservative estimates of the potential rewards AI companies could expect to see from the automation of human labour, this article tries to convey a grounded sense of ‘woah, this could be big’.
The transformative effect on jobs is not necessarily the most significant effect we are likely to see from advancing AI. However, zooming in on a single area can still help us appreciate the huge impacts transformative AI might have, and how we might expect AI to be developed.
Today: automating tasks
AI systems are being deployed in every economic sector. We use large language models at BlueDot ourselves: to help us evaluate applications, respond to people’s questions on Slack, and summarise our internal meetings.
McKinsey estimates that generative AI could add $2.6-4.4 trillion annually to the world economy. (For comparison, world GDP is about $100 trillion.) Even if AI companies capture just 25%[1] of this added value by charging for API access, that would be more annual revenue than Microsoft and Google combined.
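As a rough sanity check, here is the back-of-envelope arithmetic behind that comparison. This is a sketch: the Microsoft and Alphabet revenue figures are approximate 2023 annual revenues I am assuming for illustration, not figures from the article.

```python
# Back-of-envelope check of the claim, under assumed figures.
mckinsey_low, mckinsey_high = 2.6e12, 4.4e12  # McKinsey estimate, USD/year
capture_rate = 0.25  # footnote [1]: a rough guess at value captured

ai_revenue_low = capture_rate * mckinsey_low    # $650 billion/year
ai_revenue_high = capture_rate * mckinsey_high  # $1.1 trillion/year

# Approximate 2023 annual revenues (assumed, not from the article)
microsoft_revenue = 212e9
alphabet_revenue = 307e9
combined = microsoft_revenue + alphabet_revenue  # ~$519 billion/year

print(f"AI capture: ${ai_revenue_low/1e9:.0f}B-${ai_revenue_high/1e9:.0f}B")
print(f"Microsoft + Alphabet: ${combined/1e9:.0f}B")
assert ai_revenue_low > combined  # even the low end exceeds both combined
```

Even the bottom of the range, $650 billion, exceeds the two companies’ combined revenue on these assumptions.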
This might be an accurate assessment of current AI tools, but it probably far underestimates future systems - at least the systems AI companies are explicitly aiming to create.
Soon: replacing human labour
It’s not currently trivial to automate custom workflows, which is where a lot of business value lies. This might require custom scaffolding, calling APIs in the right way, or lots of ‘glue’ code.
Future AI systems will likely be able to learn new workflows without the need for this ‘glue’, take real actions in the world, and better execute strings of tasks. This might look like controlling a computer to take the actions an employee would.
Learning these workflows could enable valuable jobs to be replaced entirely by AI systems.[2] Jobs that could be done entirely remotely account for 46% of US wages. The proportion of jobs that can be done entirely remotely is even higher in the UK and several EU countries.
In addition, most countries have worker shortages in remote jobs that AI could fill, such as software engineering. AI systems that perform this work cheaply could fill these roles, creating new growth opportunities. For example, companies cannot currently hire an experienced software engineer at minimum wage - but they might be able to hire an experienced AI software engineering agent at this price.
Together, this could be incredibly valuable for AI companies: instead of just increasing the world economy by single-digit percentages, they could be capturing almost half of current wages in developed countries.
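To get a feel for that scale, here is a rough calculation. The total US wage figure of roughly $11 trillion per year is my own assumption for illustration, not a figure from the article.

```python
# Rough scale of the 'remote wages' opportunity, under assumed figures.
us_total_wages = 11e12    # assumed: roughly $11T/year in total US wages
remote_wage_share = 0.46  # article: fully-remote jobs' share of US wages

remote_wage_pool = us_total_wages * remote_wage_share  # ~$5.1 trillion/year

# Compare with the earlier task-automation estimate ($2.6-4.4T of added value)
print(f"Remote wage pool: ${remote_wage_pool/1e12:.1f}T/year")
assert remote_wage_pool > 4.4e12  # larger than even the high McKinsey figure
```

On these assumptions, the pool of remote wages alone exceeds even the top of McKinsey’s task-automation estimate, which is why replacing labour looks so much more valuable than augmenting it.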
Over time, entire teams or companies might become automated by AI systems. This might happen rapidly, particularly as people could leverage AI systems to massively scale up their impact on the world. These companies could operate much faster than human companies, making them far more competitive: we might see an explosion of new companies all innovating and competing, resulting in the fraction of economic work being done by AI systems skyrocketing.
And to make it clear: AI companies themselves are knowingly working towards this. OpenAI state they “will attempt to directly build safe and beneficial AGI”, which they define as “highly autonomous systems that outperform humans at most economically valuable work”. Anthropic “expect rapid AI progress and very large impacts from AI” which they expect “would be very disruptive, changing employment, macroeconomics, and power structures both within and between nations”.
Non-economic motivations
It makes sense that private companies are driven by economic gain. This incentive also trickles down to their employees: the median package for engineers at OpenAI is $900k, of which two-thirds is stock options.
However, this isn’t the full picture. There are a wide range of motivations that incentivise people to build AI systems:
- Doing good in the world. AI systems will likely have a transformative effect on the world, and the rapid scientific and technological progress they could bring could improve the lives of billions.[3][4]
- AI development is interesting and fun. Many people inherently enjoy working with cutting-edge technology, solving novel problems and building new things.
- Protecting national interests. Today, most AI development is done in a few private companies. As AI systems become more important for global power, we might see countries start building their own AI systems to stay economically relevant or for military purposes.
- Solving their own problems. Particularly in the non-commercial space, people may build models to help with all kinds of tasks. For example, fine-tuning models to work in their local language or to role-play specific characters.
Conclusion
Most people radically underestimate the potential impact AI could have, assuming the best case is a few percentage points of GDP growth. In reality, most actors in the frontier AI space aim to build systems with a much greater impact - possibly capturing almost half of wages in developed countries. Additionally, many other actors are already building AI systems with very different motivations, and this is likely to become more varied as nation states get involved.
Footnotes
1. It’s generally recognised that technologies have significant positive externalities, and that their creators are often unable to capture all the value they add. However, 25% is a finger-in-the-air estimate, because we struggled to find statistics on how much value is captured by the creators of a technology.
2. We think this is a fairly uncontested claim: specifically, that the AI systems some AI companies aim to build may replace valuable jobs. This article explicitly does not attempt to make any claim about how technically feasible this is, whether governments will block it, or whether displaced workers will find new jobs, e.g. jobs where humans retain a comparative advantage.
3. Many people in AI safety work on it precisely because they realise how significant the transformative effects of AI could be, and want this to go well. Many actually got into AI safety by trying to develop AI systems themselves, hoping this would do a lot of good, before realising that getting these systems to do what you want is hard.
4. Also see Machines of Loving Grace, a recent article by Dario Amodei, CEO of Anthropic, a major frontier AI company.