Beyond Bias and Fallacies: Fixing the AI Safety Debate

By Maxime Fournes (Published on November 12, 2023)

Abstract

In May 2023, hundreds of AI scientists signed a statement affirming the reality of AI existential risk (x-risk). Despite this being the latest in a series of such warnings, public awareness and concern remain limited. This gap between expert alarm and public response may be attributed to the complexity of the subject and the prevalence of fallacious or ill-informed counterarguments in the public debate. It is a dangerous situation that needs to be addressed urgently.

For that purpose, this paper introduces a comprehensive framework for clarifying the debate on Artificial Superintelligence (ASI) and its existential risks. The framework comprises a simple, accessible model of AI x-risk, coupled with a methodology for gathering, organising, and analysing arguments related to this model. The methodology involves assembling arguments from a wide range of sources, organising them into coherent hierarchical taxonomies, and systematically analysing and synthesising them to reveal areas of contention and consensus. With its hierarchical structure, the framework aims to present a panoramic view of the discourse, allowing users to grasp the current state of consensus and explore the underlying reasoning at multiple levels of granularity while countering fallacies and biases.
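To make the idea of a hierarchical argument taxonomy concrete, here is a minimal sketch in Python of one way such a structure could be modelled. This is an illustrative assumption, not the paper's actual implementation: the ArgumentNode class, the Stance enum, the render method, and the example claims are all hypothetical.

```python
# A minimal sketch (hypothetical, not the paper's implementation) of a
# hierarchical taxonomy of arguments and counterarguments about AI x-risk.
from dataclasses import dataclass, field
from enum import Enum


class Stance(Enum):
    SUPPORTS = "supports x-risk concern"
    DISPUTES = "disputes x-risk concern"


@dataclass
class ArgumentNode:
    """One claim in the debate, with counterarguments nested beneath it."""
    claim: str
    stance: Stance
    sources: list[str] = field(default_factory=list)      # where the argument was gathered
    children: list["ArgumentNode"] = field(default_factory=list)  # rebuttals / refinements

    def render(self, max_depth: int, depth: int = 0) -> str:
        """Display the taxonomy down to max_depth levels of granularity."""
        if depth > max_depth:
            return ""
        line = "  " * depth + f"[{self.stance.name}] {self.claim}\n"
        return line + "".join(c.render(max_depth, depth + 1) for c in self.children)


# Usage: a toy two-level slice of the debate, viewed at coarse granularity.
root = ArgumentNode(
    claim="ASI could pose an existential risk",
    stance=Stance.SUPPORTS,
    children=[
        ArgumentNode(
            claim="Current systems show no agency, so extrapolation is unwarranted",
            stance=Stance.DISPUTES,
            children=[
                ArgumentNode(
                    claim="Capabilities have repeatedly emerged without being predicted",
                    stance=Stance.SUPPORTS,
                ),
            ],
        ),
    ],
)
print(root.render(max_depth=1))  # panoramic view; raise max_depth to drill down
```

The nesting captures the framework's key property: a reader can stop at a shallow depth for the panoramic view of contention and consensus, or increase the depth to follow the underlying reasoning of any particular branch.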

Additionally, the paper proposes the creation of an online collaborative hub. Although not developed within this project, the hub is designed to encourage ongoing dialogue and knowledge sharing among researchers, experts, and the public. It aspires to become a dynamic platform for evolving consensus on AI x-risk. Future work will focus on the development and promotion of this hub, seeking to establish it as an accessible and unbiased entry point into the debate on AI x-risk.
