AI and Its Impacts: A Brief Introduction

By Vojta Kovarik and Cara Selvarajah (Published on October 21, 2024)

In this course, we are concerned with Artificial Intelligence – but what is it?

This is a difficult question, because of the AI effect: When something is hard for us – for example, playing chess well – we believe it must require “true intelligence”. But once we understand the problem well enough to make a computer solve it, it stops being “AI” and becomes “just computation”.

For this reason, we adopt a very broad view where AI is any method that allows machines to solve problems.

Algorithms

To understand existing AI techniques, we can consider some earlier tools and the inventions they enabled:

  • Arithmetic algorithms, which are simple rules enabling pocket calculators to add and multiply numbers faster and more reliably than humans.
  • Hardcoded software such as operating systems, Microsoft Excel, or email – which could be viewed as many simple rules that combine to create a powerful overall effect.
  • Expert systems such as Deep Blue, the first computer to win a game against the world chess champion – which combine fast search with clever heuristics.
  • Reinforcement learning – which enabled computers to play various board and computer games at a superhuman level, without requiring any human knowledge.

Neural networks and large language models (LLMs) are the latest additions to the AI toolkit. However, it is likely that we will come up with further techniques in the future.

It is important to keep in mind that new techniques do not replace all previous methods. Rather, we tend to combine the best aspects of all techniques we have discovered.[1]

As a result, just because the latest method has certain limitations, it does not mean that AI in general will also have those same limitations. For example, large language models are notorious for their frequent mistakes in basic arithmetic,[2] yet this weakness can be overcome by giving the LLM access to a calculator program.[3]
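To make the calculator example concrete, here is a minimal sketch of the tool-use pattern: the model emits a marker around any arithmetic it cannot do reliably, and a small program computes the exact answer. The `CALC(...)` marker and the `answer_with_calculator` helper are hypothetical illustrations, not the actual mechanism used by any particular LLM or by the Toolformer paper.

```python
import re

def answer_with_calculator(model_output: str) -> str:
    """Replace CALC(a op b) markers emitted by the model with exact results."""
    def evaluate(match: re.Match) -> str:
        # Parse a simple binary expression such as "123 * 456".
        a, op, b = re.fullmatch(
            r"(-?\d+)\s*([+\-*])\s*(-?\d+)", match.group(1)
        ).groups()
        a, b = int(a), int(b)
        result = a + b if op == "+" else a - b if op == "-" else a * b
        return str(result)

    # Hand every marked expression to the "calculator" instead of the model.
    return re.sub(r"CALC\(([^)]*)\)", evaluate, model_output)

# A hypothetical model response that defers the arithmetic to the tool:
draft = "The product is CALC(123456789 * 987654321)."
print(answer_with_calculator(draft))  # → The product is 121932631112635269.
```

The point is not this particular implementation, but that the LLM's weakness at arithmetic disappears once the multiplication is delegated to ordinary, reliable computation.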

Compute

So far, we have discussed AI techniques. However, the performance of AI depends not only on the technique we choose, but also on the amount of “muscle” we put behind it – that is, the amount of computational resources (“compute”) that power the AI.

Neural networks only became popular in the 2010s[4] despite being known since at least the 1970s – this is because, earlier, we did not have enough compute to train them. Similarly, the language model GPT-3 is vastly better than GPT-2 despite both implementing essentially the same algorithm – the key difference is that GPT-3’s training used roughly 200 times more compute (and more data).[5]

Compute scaling matters because the amount of available compute increases exponentially. Compared to 15 years earlier, a typical frontier AI project in 2023 used roughly 10,000,000,000 times more training compute.[6] This is driven in part by growing investment in AI, but also by technological innovations that result in exponentially decreasing costs of compute: an iPhone from 2020 is 10,000 times faster than a $50M supercomputer from the 1980s.[7]
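To see what “exponential” means here, we can back out the implied doubling time from the figures above (a factor of about 10,000,000,000 over roughly 15 years). This is a rough back-of-the-envelope calculation, not an official estimate:

```python
import math

# Source figures: ~10,000,000,000x more training compute over ~15 years.
growth_factor = 1e10
years = 15

doublings = math.log2(growth_factor)          # ≈ 33.2 doublings in total
months_per_doubling = years * 12 / doublings  # ≈ 5.4 months per doubling

print(f"{doublings:.1f} doublings, one roughly every "
      f"{months_per_doubling:.1f} months")
```

In other words, these numbers imply that the training compute of frontier projects doubled roughly every six months, which is consistent with the trends reported by Epoch AI.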

Reasoning about the Impact of AI

Finally, the reason this course is concerned with AI is AI’s potential impacts on the world. But how should we think about these impacts?

We often imagine AI as a single powerful system. We can then ask questions such as: What is the AI’s motivation? How does the AI’s internal reasoning work? Which actions will it take, and how will they affect the world?

While this framing is useful, it is misleading in many ways. In particular, we would like to highlight two crucial considerations that are missing from this view:

  • First, there will likely be multiple types of AI, each with a vast number of copies running in various parts of our economy and society. We then need to consider the interactions between different AIs, and between AIs and humans.[8]
  • Second, AIs will increasingly become inseparable from many organisations. It might therefore become more appropriate to view the AI and the company that uses it as a single entity.[9] To demonstrate this framing, consider the idea of shutting down a misbehaving AI. When the AI is a standalone entity, many researchers propose simply turning off the computer.[10] However, imagine instead that the AI in question is at least as closely tied to a powerful institution as the Facebook platform is to its parent company Meta.[11] Under this view, it becomes apparent that the plan of “just turning off the AI” can be far less actionable than one might initially believe.

Despite this, this course often adopts the simplified frame of a single AI system, because that view is easier to discuss while still being good enough for introducing many of the key topics.

However, this course is ultimately in service of ensuring that AI has beneficial impacts, irrespective of which framing of “impact” is simpler to consider.[12] For this reason, we recommend occasionally adopting these richer frames during individual reading and discussions with other participants.

Footnotes

  1. For example, the above-mentioned game playing algorithms (AlphaZero, PPO) critically rely on both neural networks and reinforcement learning.

  2. For example, as of October 2024, both Anthropic’s chatbot Claude 3.5 Sonnet and OpenAI’s ChatGPT 4o tend to give wrong answers more often than not when asked to multiply a * b for randomly chosen 10-digit numbers.

  3. This has already been done, for example, in the paper Toolformer: Language Models Can Teach Themselves to Use Tools. However, our primary argument here is that overcoming the weakness of one method by combining it with other methods is sensible, and there are no fundamental reasons why it should not work.

  4. An often-cited milestone is AlexNet (2012), which achieved remarkable performance on the image classification task ImageNet.

  5. For more examples and more specific arguments, see the optional resources for Session 1, and Why and how of scaling large language models.

  6. For some datapoints on these trends, see Compute Trends Across Three Eras of Machine Learning or Key Trends and Figures in Machine Learning by Epoch AI.

  7. Source: Fast-forward — comparing a 1980s supercomputer to the modern smartphone (the price has been converted to 2024 dollars). Some other trends can be found at Our World in Data.

  8. Many of the relevant problems are studied by the recently-founded field of Cooperative AI. While we view them as crucial to consider when reasoning about the impact of AI, they mostly fall outside the scope of this course.

  9. As with the previous point, this is an incredibly complicated topic which falls outside of the scope of this course. Some of the relevant problems are discussed under the umbrellas of AI Governance and AI strategy.

  10. There may be some challenges with this as a solution, see Corrigibility. Or for a more accessible introduction, the AI “Stop Button” Problem. But we’ll ignore these for now.

  11. Similar issues are likely to arise once AI becomes deeply integrated into the militaries of major world powers.

  12. See “searching for the keys where the light is”.
