Why are people building AI systems?

By Adam Jones (Published on May 25, 2024)

Almost everyone appreciates that AI could bring significant rewards to its creators. However, the average person seems to have wildly inaccurate intuitions about the scale of the rewards at stake, particularly for general transformative AI systems.

Instead, you should have a grounded sense of ‘woah, this could be big’. We think this is valuable for getting into the right headspace about the transformative potential of AI, properly appreciating the incentives for AI companies developing these systems, and understanding the concerns different actors might have about AI risks.

Today: automating tasks

We’re already seeing AI systems being deployed in every sector. In fact, we use large language models at BlueDot: to help us evaluate applications, respond to people’s questions on Slack, and summarise our internal meetings.

McKinsey estimates this could enable productivity growth of 0.1 to 0.6% annually, adding between $2.6 trillion and $4.4 trillion to the world economy each year. (For comparison, world GDP is about $100 trillion.) Even if AI companies capture just 25%[1] of this added value by charging for API access, that represents more revenue than Microsoft and Google combined.
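To make that comparison concrete, here’s a rough back-of-the-envelope check. The 2023 revenue figures below (Microsoft at roughly $0.21 trillion, Alphabet at roughly $0.31 trillion) are approximations we’ve added for illustration, not numbers from the McKinsey report:

```python
# Rough back-of-the-envelope check of the revenue comparison above.
added_value_low, added_value_high = 2.6e12, 4.4e12  # McKinsey estimate, $/year
capture_rate = 0.25                                 # assumed share captured by AI companies [1]

captured_low = added_value_low * capture_rate       # ~ $0.65 trillion/year
captured_high = added_value_high * capture_rate     # ~ $1.1 trillion/year

# Approximate 2023 annual revenues (our own rough figures):
microsoft_revenue = 0.21e12
alphabet_revenue = 0.31e12
big_two_combined = microsoft_revenue + alphabet_revenue  # ~ $0.52 trillion/year

print(captured_low > big_two_combined)  # True: even the low end exceeds the two combined
```

On these assumptions, even the bottom of McKinsey’s range at a 25% capture rate works out to around $650 billion a year, comfortably above the two companies’ combined revenue.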

This might be an accurate assessment of current AI tools, but it probably far underestimates future systems - at least the systems AI companies are explicitly aiming to create.

Soon: replacing human labour

Currently, it’s not trivial to automate custom workflows, which is where a lot of business value lies. Doing so might require custom scaffolding, calling APIs in the right way, or lots of ‘glue’ code.

Future AI systems will likely be able to learn new workflows without the need for this ‘glue’, take real actions in the world, and better execute strings of tasks. This might look like controlling a computer to take the actions that an employee would.[2] Both Google and Apple have released papers about using LLMs to interact with UIs, and OpenAI is rumoured to be building AI agents that imitate people using computers.

Learning these workflows could enable valuable jobs to be replaced entirely by AI systems.[3] Jobs that could be done entirely remotely account for 46% of US wages. The proportion of jobs that can be done entirely remotely is even higher in the UK and several EU countries.

In addition, most countries have worker shortages in domains where work can be done entirely remotely, such as software engineering. These positions could be filled by AI systems. There may also be hidden growth opportunities that AI systems performing those professions cheaply could expose. For example, no sane company would advertise a job vacancy for a software engineer at minimum wage today - but it could stand to gain from doing so if AI software engineers were available at that price or less.

Together, this could be incredibly valuable for AI companies: instead of just increasing the world economy by single-digit percentages, they could be capturing almost half of current wages in developed countries.
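As a very rough illustration of what ‘almost half of current wages’ means in dollar terms, here’s a sketch. The total US wage bill of about $11 trillion a year is our own approximation, not a figure from the sources above:

```python
# Very rough illustration of the scale of 'almost half of current wages'.
us_wage_bill = 11e12   # assumed total US wages and salaries, $/year (approximate)
remote_share = 0.46    # share of US wages from jobs that could be done entirely remotely (cited above)

addressable_wages = us_wage_bill * remote_share
print(f"${addressable_wages / 1e12:.1f} trillion per year")  # ~ $5.1 trillion per year
```

On those assumptions, the addressable pool in the US alone is around $5 trillion a year - larger than the entire McKinsey productivity estimate for today’s tools, before even counting other developed countries.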

Over time, entire teams or companies might be automated by AI systems. This might happen quite rapidly, particularly as people could leverage AI systems to massively scale up their impact on the world. Such companies might be able to operate much faster than human companies, making them far more competitive: we might see an explosion of new companies all innovating and competing, with the fraction of economic work done by AI systems skyrocketing.

And to make it clear: AI companies themselves are knowingly working towards this. OpenAI states they “will attempt to directly build safe and beneficial AGI”, which they define as “highly autonomous systems that outperform humans at most economically valuable work”. Anthropic “expect rapid AI progress and very large impacts from AI”, which they say “would be very disruptive, changing employment, macroeconomics, and power structures both within and between nations”.

Non-economic motivations

It makes sense that private companies are driven by economic gain. This also trickles down to their employees: the median package for engineers at OpenAI is $900k, of which two-thirds is stock options.

However, this isn’t the full picture. There are a wide range of motivations that incentivise people to build AI systems:

  • Doing good in the world. AI systems will likely have a transformative effect on the world, and the rapid scientific and technological progress they could bring could improve the lives of billions.[4]
  • AI development is interesting and fun. Many people inherently enjoy working with cutting-edge technology, solving novel problems and building new things.
  • Protecting national interests. Today, most AI development is done in a few private companies. As AI systems become more important for global power, we might see countries start building their own AI systems simply to stay relevant or defend themselves.
  • Solving their own problems. Particularly in the non-commercial space, people may build models to help with all kinds of tasks. For example, fine-tuning models to work in their local language or to role-play specific characters.

Conclusion

Most people radically underestimate the potential impacts of AI, thinking a best case might be a small percentage of GDP growth. In fact, most actors in the frontier AI space aim to build systems that would be far more impactful and rewarding - possibly capturing almost half of wages in developed countries. Additionally, many other actors are already building AI systems with very different motivations, and these motivations are likely to become more varied as nation states start getting involved.

Footnotes

  1. It’s generally recognised that technologies have significant positive externalities, and that their creators are often unable to capture all the value they add. However, 25% is a finger-in-the-air estimate, because we struggled to find statistics on how much value is captured by the creators of technology.

  2. See Open Interpreter’s demo (especially from 4:18 onwards) for a peek at what this might look like.

  3. We think this is a fairly uncontested claim: specifically that the AI systems some AI companies aim to build may replace valuable jobs. This article explicitly does not attempt to make any claim on how feasible this is technically, whether governments will block this, or whether these people will find new jobs, e.g. whether they’ll find a job with comparative advantage or not.

  4. Interestingly, most people in AI safety work on it precisely because they realise how significant the transformative effects of AI could be, and want this to go well. Many actually got into AI safety by trying to develop AI systems themselves, hoping this would do a lot of good, before realising that getting these systems to do what you want is hard.
