Operational Agency: A Permeable Legal Fiction for Tracing Culpability in AI Systems
- URL: http://arxiv.org/abs/2602.17932v1
- Date: Fri, 20 Feb 2026 01:49:03 GMT
- Title: Operational Agency: A Permeable Legal Fiction for Tracing Culpability in AI Systems
- Authors: Anirban Mukherjee, Hannah Hanwen Chang
- Abstract summary: This Article introduces Operational Agency (OA), a permeable legal fiction structured as an ex post evidentiary framework, and the Operational Agency Graph (OAG). OA evaluates an AI's observable operational characteristics. OAG operationalizes that analysis by embedding these characteristics in a causal graph to trace and apportion culpability among developers, fine-tuners, deployers, and users.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern artificial intelligence (AI) systems act with a high degree of independence yet lack legal personhood, a paradox that fractures doctrines grounded in human-centric notions of mens rea and actus reus. This Article introduces Operational Agency (OA), a permeable legal fiction structured as an ex post evidentiary framework, and the Operational Agency Graph (OAG), a tool for mapping causal interactions among human actors, organizations, and AI systems. OA evaluates an AI's observable operational characteristics: its goal-directedness (as a proxy for intent), predictive processing (as a proxy for foresight), and safety architecture (as a proxy for a standard of care). OAG operationalizes that analysis by embedding these characteristics in a causal graph to trace and apportion culpability among developers, fine-tuners, deployers, and users. Drawing on corporate criminal liability, the innocent-agent doctrine, and secondary and vicarious liability frameworks, the Article shows how OA and OAG strengthen existing doctrines. Across five real-world case studies spanning tort, civil rights, constitutional law, and antitrust, it demonstrates how the framework addresses challenges ranging from autonomous vehicle collisions to algorithmic price-fixing, offering courts a principled evidentiary method, and legislatures and industry a conceptual foundation, to ensure human accountability keeps pace with technological autonomy without conferring personhood on AI.
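The Operational Agency Graph described in the abstract can be pictured concretely. The sketch below is a hypothetical illustration only: the Article specifies no data structure or apportionment rule, so the class name, the actor labels, the causal weights, and the proportional weight-splitting heuristic are all assumptions made for exposition, not the authors' method.

```python
from collections import defaultdict


class OperationalAgencyGraph:
    """Toy directed acyclic graph of causal contributions to a harmful outcome.

    Edges carry illustrative causal weights in [0, 1]; cycles are assumed absent.
    """

    def __init__(self):
        # edges[source][target] = assumed causal weight of source's contribution
        self.edges = defaultdict(dict)

    def add_contribution(self, actor, target, weight):
        self.edges[actor][target] = weight

    def apportion(self, outcome):
        """Walk backwards from the outcome, splitting culpability among
        each node's causal parents in proportion to their edge weights."""
        shares = defaultdict(float)

        def walk(node, share):
            parents = [(a, w[node]) for a, w in self.edges.items() if node in w]
            if not parents:  # a root actor: it keeps its accumulated share
                shares[node] += share
                return
            total = sum(w for _, w in parents)
            for actor, w in parents:
                walk(actor, share * w / total)

        walk(outcome, 1.0)
        return dict(shares)


# Hypothetical autonomous-vehicle scenario: three upstream actors shape the
# AI system, which in turn contributes (alongside the user) to a collision.
oag = OperationalAgencyGraph()
oag.add_contribution("developer", "ai_system", 0.5)
oag.add_contribution("fine_tuner", "ai_system", 0.3)
oag.add_contribution("deployer", "ai_system", 0.2)
oag.add_contribution("ai_system", "collision", 0.8)
oag.add_contribution("user", "collision", 0.2)
print(oag.apportion("collision"))
```

Here culpability flows backwards from the harm: actors upstream of the AI system divide its share in proportion to their assumed causal contribution, mirroring how the OAG is described as tracing culpability among developers, fine-tuners, deployers, and users.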
Related papers
- Agentic AI for Cybersecurity: A Meta-Cognitive Architecture for Governable Autonomy [0.0]
This paper argues that cybersecurity orchestration should be reconceptualized as an agentic, multi-agent cognitive system. We introduce a conceptual framework in which heterogeneous AI agents responsible for detection, hypothesis formation, contextual interpretation, explanation, and governance are coordinated through an explicit meta-cognitive judgement function. Our contribution is to make this cognitive structure architecturally explicit and governable by embedding meta-cognitive judgement as a first-class system function.
arXiv Detail & Related papers (2026-02-12T12:52:49Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- AI Agents and the Law [17.712990593093316]
We show how technical conceptions of agents track some, but not all, socio-legal conceptions of agency. We examine the correlations between implied authority in agency law and the principle of value-alignment in AI.
arXiv Detail & Related papers (2025-08-12T01:18:48Z)
- A Moral Agency Framework for Legitimate Integration of AI in Bureaucracies [0.0]
Public-sector bureaucracies seek to reap the benefits of artificial intelligence (AI). We present a three-point Moral Agency Framework for the legitimate integration of AI in bureaucratic structures.
arXiv Detail & Related papers (2025-08-11T17:49:19Z)
- FAIRTOPIA: Envisioning Multi-Agent Guardianship for Disrupting Unfair AI Pipelines [1.556153237434314]
AI models have become active decision makers, often acting without human supervision. We envision agents as fairness guardians, since agents learn from their environment. We introduce a fairness-by-design approach which embeds multi-role agents in an end-to-end (human to AI) synergetic scheme.
arXiv Detail & Related papers (2025-06-10T17:02:43Z)
- Artificial Intelligence in Government: Why People Feel They Lose Control [44.99833362998488]
The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
arXiv Detail & Related papers (2025-05-02T07:46:41Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Governing AI Agents [0.2913760942403036]
This article examines the economic theory of principal-agent problems and the common-law doctrine of agency relationships. It identifies problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty. It argues that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.
arXiv Detail & Related papers (2025-01-14T07:55:18Z)
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
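The SCM-based responsibility attribution this related paper describes can be illustrated with a toy "but-for" counterfactual test. Everything below is a minimal sketch under assumed structural equations, not the paper's actual formalism: the variable names, the single boolean equation, and the flip-one-action test are invented for exposition.

```python
def scm_outcome(ai_flags_risk: bool, human_overrides: bool) -> bool:
    """Assumed structural equation: harm occurs unless the AI flags the
    risk AND the human does not override the flag."""
    return not (ai_flags_risk and not human_overrides)


def but_for_responsible(actual_ai: bool, actual_human: bool) -> dict:
    """An actor is 'but-for' responsible if flipping only their own action,
    holding the other fixed, would have averted the actual harm."""
    actual_harm = scm_outcome(actual_ai, actual_human)
    if not actual_harm:
        return {}  # no harm occurred, so nothing to attribute
    return {
        "ai": scm_outcome(not actual_ai, actual_human) != actual_harm,
        "human": scm_outcome(actual_ai, not actual_human) != actual_harm,
    }


# Scenario: the AI failed to flag the risk, and the human did not override.
print(but_for_responsible(actual_ai=False, actual_human=False))
```

In this toy scenario only the AI's action is but-for responsible: had it flagged the risk, the harm would have been averted, whereas flipping the human's (non-)override alone changes nothing.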
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing bias in the development of corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.