Unbounded Harms, Bounded Law: Liability in the Age of Borderless AI
- URL: http://arxiv.org/abs/2601.12646v1
- Date: Mon, 19 Jan 2026 01:44:14 GMT
- Title: Unbounded Harms, Bounded Law: Liability in the Age of Borderless AI
- Authors: Ha-Chi Tran
- Abstract summary: The rapid proliferation of artificial intelligence (AI) has exposed significant deficiencies in risk governance. Core legal questions regarding liability allocation, responsibility attribution, and remedial effectiveness remain insufficiently theorized and institutionalized. This paper examines compensation and liability frameworks from high-risk transnational domains.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid proliferation of artificial intelligence (AI) has exposed significant deficiencies in risk governance. While ex-ante harm identification and prevention have advanced, Responsible AI scholarship remains underdeveloped in addressing ex-post liability. Core legal questions regarding liability allocation, responsibility attribution, and remedial effectiveness remain insufficiently theorized and institutionalized, particularly for transboundary harms and risks that transcend national jurisdictions. Drawing on contemporary AI risk analyses, we argue that such harms are structurally embedded in global AI supply chains and are likely to escalate in frequency and severity due to cross-border deployment, data infrastructures, and uneven national oversight capacities. Consequently, territorially bounded liability regimes are increasingly inadequate. Using a comparative and interdisciplinary approach, this paper examines compensation and liability frameworks from high-risk transnational domains - including vaccine injury schemes, systemic financial risk governance, commercial nuclear liability, and international environmental regimes - to distill transferable legal design principles such as strict liability, risk pooling, collective risk-sharing, and liability channelling, while highlighting potential structural constraints on their application to AI-related harms. Situated within an international order shaped more by AI arms race dynamics than cooperative governance, the paper outlines the contours of a global AI accountability and compensation architecture, emphasizing the tension between geopolitical rivalry and the collective action required to govern transboundary AI risks effectively.
Related papers
- AI, Digital Platforms, and the New Systemic Risk [2.0090452213078445]
We develop a rigorous framework for understanding systemic risk in AI, platform, and hybrid system governance. We argue that recent legislation, including the EU's AI Act and Digital Services Act, invokes systemic risk but relies on narrow or ambiguous characterizations. Our framework highlights novel risk pathways, including the possibility of systemic failures arising from the interaction of multiple AI agents.
arXiv Detail & Related papers (2025-09-22T15:14:23Z)
- Toward a Unified Security Framework for AI Agents: Trust, Risk, and Liability [2.8407281360114527]
The Trust, Risk and Liability (TRL) framework proposed in this paper ties together the interdependent relationships of trust, risk, and liability to provide a systematic method of building and enhancing trust. The implications of the TRL framework lie in its potential societal, economic, and ethical impacts, among others. It is expected to bring remarkable value to addressing potential challenges and promoting trustworthy, risk-free, and responsible usage of AI in 6G networks.
arXiv Detail & Related papers (2025-09-18T01:55:03Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Catastrophic Liability: Managing Systemic Risks in Frontier AI Development [0.4999814847776098]
Frontier AI development poses potential systemic risks that could affect society at a massive scale. Current practices at many AI labs lack sufficient transparency around safety measures, testing procedures, and governance structures. We propose a comprehensive approach to safety documentation and accountability in frontier AI development.
arXiv Detail & Related papers (2025-05-01T15:47:14Z)
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z)
- A risk-based approach to assessing liability risk for AI-driven harms considering EU liability directive [0.0]
Historical instances of harm caused by AI have led to the European Union establishing an AI Liability Directive.
A provider's future ability to contest a product liability claim will depend on the good practices adopted in designing, developing, and maintaining AI systems.
This paper provides a risk-based approach to examining liability for AI-driven injuries.
arXiv Detail & Related papers (2023-12-18T15:52:43Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Hazard Management: A framework for the systematic management of root causes for AI risks [0.0]
This paper introduces the AI Hazard Management (AIHM) framework.
It provides a structured process to systematically identify, assess, and treat AI hazards.
It builds upon an AI hazard list from a comprehensive state-of-the-art analysis.
arXiv Detail & Related papers (2023-10-25T15:55:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.