The Stories We Govern By: AI, Risk, and the Power of Imaginaries
- URL: http://arxiv.org/abs/2508.11729v1
- Date: Fri, 15 Aug 2025 09:57:56 GMT
- Title: The Stories We Govern By: AI, Risk, and the Power of Imaginaries
- Authors: Ninell Oldenburg, Gleb Papyshev
- Abstract summary: This paper examines how competing sociotechnical imaginaries of artificial intelligence (AI) risk shape governance decisions and regulatory constraints. We analyse three dominant narrative groups: existential risk proponents, who emphasise catastrophic AGI scenarios; accelerationists, who portray AI as a transformative force to be unleashed; and critical AI scholars, who foreground present-day harms rooted in systemic inequality. Our findings reveal how these narratives embed distinct assumptions about risk and have the potential to progress into policy-making processes by narrowing the space for alternative governance approaches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper examines how competing sociotechnical imaginaries of artificial intelligence (AI) risk shape governance decisions and regulatory constraints. Drawing on concepts from science and technology studies, we analyse three dominant narrative groups: existential risk proponents, who emphasise catastrophic AGI scenarios; accelerationists, who portray AI as a transformative force to be unleashed; and critical AI scholars, who foreground present-day harms rooted in systemic inequality. Through an analysis of representative manifesto-style texts, we explore how these imaginaries differ across four dimensions: normative visions of the future, diagnoses of the present social order, views on science and technology, and perceived human agency in managing AI risks. Our findings reveal how these narratives embed distinct assumptions about risk and have the potential to progress into policy-making processes by narrowing the space for alternative governance approaches. We argue against speculative dogmatism and for moving beyond deterministic imaginaries toward regulatory strategies that are grounded in pragmatism.
Related papers
- Responsible AI: The Good, The Bad, The AI [1.932555230783329]
This paper presents a comprehensive examination of AI's dual nature through the lens of strategic information systems. We develop the Paradox-based Responsible AI Governance (PRAIG) framework that articulates: (1) the strategic benefits of AI adoption, (2) the inherent risks and unintended consequences, and (3) governance mechanisms that enable organizations to navigate these tensions. The paper concludes with a research agenda for advancing responsible AI governance scholarship.
arXiv Detail & Related papers (2026-01-28T22:33:27Z)
- Why They Disagree: Decoding Differences in Opinions about AI Risk on the Lex Fridman Podcast [0.0]
This paper analyzes contemporary debates about AI risk. We find that differences in perspectives about existential risk ("X-risk") arise from differences in causal premises about design vs. emergence in complex systems. Disagreements about these two forms of AI risk appear to share two properties: neither involves significant disagreements on moral values, and both can be described in terms of differing views on the extent of boundedness of human rationality.
arXiv Detail & Related papers (2025-12-06T08:48:30Z)
- AI Deception: Risks, Dynamics, and Controls [153.71048309527225]
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z)
- Empirical AI Ethics: Reconfiguring Ethics towards a Situated, Plural, and Transformative Approach [0.0]
Critics argue that AI ethics frequently serves corporate interests through practices of 'ethics washing'. This paper adopts a Science and Technology Studies perspective to critically interrogate the field of AI ethics.
arXiv Detail & Related papers (2025-09-22T12:58:15Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Epistemic Scarcity: The Economics of Unresolvable Unknowns [0.0]
We argue that AI systems are incapable of performing the core functions of economic coordination. We critique dominant ethical AI frameworks as extensions of constructivist rationalism.
arXiv Detail & Related papers (2025-07-02T08:46:24Z)
- Exploring the Societal and Economic Impacts of Artificial Intelligence: A Scenario Generation Methodology [0.0]
We categorize and analyze key factors affecting AI's integration and adoption by applying an Impact-Uncertainty Matrix. A proposed methodology involves querying academic databases, identifying emerging trends and topics, and categorizing these into an impact-uncertainty framework. The paper identifies critical areas where AI may bring significant change and outlines potential future scenarios based on these insights.
arXiv Detail & Related papers (2025-03-31T18:49:46Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem [0.0]
We provide an overview of the different strategies explanation providers could deploy to adapt the returned explanation to their benefit.
We analyse several objectives and concrete scenarios the providers could have to engage in.
It is crucial to investigate this issue now, before these methods are widely implemented, and to propose mitigation strategies.
arXiv Detail & Related papers (2023-06-24T07:21:28Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.