AI, Digital Platforms, and the New Systemic Risk
- URL: http://arxiv.org/abs/2509.17878v1
- Date: Mon, 22 Sep 2025 15:14:23 GMT
- Title: AI, Digital Platforms, and the New Systemic Risk
- Authors: Philipp Hacker, Atoosa Kasirzadeh, Lilian Edwards
- Abstract summary: We develop a rigorous framework for understanding systemic risk in AI, platform, and hybrid system governance. We argue that recent legislation, including the EU's AI Act and Digital Services Act, invokes systemic risk but relies on narrow or ambiguous characterizations. Our framework highlights novel risk pathways, including the possibility of systemic failures arising from the interaction of multiple AI agents.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As artificial intelligence (AI) becomes increasingly embedded in digital, social, and institutional infrastructures, and AI and platforms are merged into hybrid structures, systemic risk has emerged as a critical but undertheorized challenge. In this paper, we develop a rigorous framework for understanding systemic risk in AI, platform, and hybrid system governance, drawing on insights from finance, complex systems theory, climate change, and cybersecurity - domains where systemic risk has already shaped regulatory responses. We argue that recent legislation, including the EU's AI Act and Digital Services Act (DSA), invokes systemic risk but relies on narrow or ambiguous characterizations of this notion, sometimes reducing this risk to specific capabilities present in frontier AI models, or to harms occurring in economic market settings. The DSA, we show, actually does a better job at identifying systemic risk than the more recent AI Act. Our framework highlights novel risk pathways, including the possibility of systemic failures arising from the interaction of multiple AI agents. We identify four levels of AI-related systemic risk and emphasize that discrimination at scale and systematic hallucinations, despite their capacity to destabilize institutions and fundamental rights, may not fall under current legal definitions, given the AI Act's focus on frontier model capabilities. We then test the DSA, the AI Act, and our own framework on five key examples, and propose reforms that broaden systemic risk assessments, strengthen coordination between regulatory regimes, and explicitly incorporate collective harms.
Related papers
- With Great Capabilities Come Great Responsibilities: Introducing the Agentic Risk & Capability Framework for Governing Agentic AI Systems
The Agentic Risk & Capability (ARC) Framework is a technical governance framework designed to help organizations identify, assess, and mitigate risks arising from agentic AI systems. The framework's core contributions are threefold: it develops a novel capability-centric perspective to analyze a wide range of agentic AI systems; it distills three primary sources of risk intrinsic to agentic AI systems - components, design, and capabilities; and it establishes a clear nexus between each risk source, specific materialized risks, and corresponding technical controls.
arXiv Detail & Related papers (2025-12-22T03:51:34Z)
- Embodied AI: Emerging Risks and Opportunities for Policy Action
Embodied AI (EAI) systems can exist in, learn from, reason about, and act in the physical world. EAI systems pose significant risks, including physical harm from malicious use, mass surveillance, and economic and societal disruption.
arXiv Detail & Related papers (2025-08-28T17:59:07Z)
- When Autonomy Goes Rogue: Preparing for Risks of Multi-Agent Collusion in Social Systems
We introduce a proof-of-concept to simulate the risks of malicious multi-agent systems (MAS). We apply this framework to two high-risk fields: misinformation spread and e-commerce fraud. Our findings show that decentralized systems are more effective at carrying out malicious actions than centralized ones.
arXiv Detail & Related papers (2025-07-19T15:17:30Z)
- Multi-Agent Risks from Advanced AI
Multi-agent systems of advanced AI pose novel and under-explored risks. We identify three key failure modes based on agents' incentives, as well as seven key risk factors. We highlight several important instances of each risk, as well as promising directions to help mitigate them.
arXiv Detail & Related papers (2025-02-19T23:03:21Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- Beyond Accidents and Misuse: Decoding the Structural Risk Dynamics of Artificial Intelligence
This paper advances the concept of structural risk by introducing a framework grounded in complex systems research. We classify structural risks into three categories: antecedent structural causes, antecedent AI system causes, and deleterious feedback loops. To anticipate and govern these dynamics, the paper proposes a methodological agenda incorporating scenario mapping, simulation, and exploratory foresight.
arXiv Detail & Related papers (2024-06-21T05:44:50Z)
- The risks of risk-based AI regulation: taking liability seriously
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Hazard Management: A framework for the systematic management of root causes for AI risks
This paper introduces the AI Hazard Management (AIHM) framework.
It provides a structured process to systematically identify, assess, and treat AI hazards.
It builds upon an AI hazard list from a comprehensive state-of-the-art analysis.
arXiv Detail & Related papers (2023-10-25T15:55:50Z)