Fluid Agency in AI Systems: A Case for Functional Equivalence in Copyright, Patent, and Tort
- URL: http://arxiv.org/abs/2601.02633v1
- Date: Tue, 06 Jan 2026 01:06:07 GMT
- Title: Fluid Agency in AI Systems: A Case for Functional Equivalence in Copyright, Patent, and Tort
- Authors: Anirban Mukherjee, Hannah Hanwen Chang
- Abstract summary: Modern Artificial Intelligence (AI) systems lack human-like consciousness or culpability. Fluid agency generates valuable outputs but collapses attribution. This Article argues that only functional equivalence stabilizes doctrine.
- Score: 0.31061678033205636
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern Artificial Intelligence (AI) systems lack human-like consciousness or culpability, yet they exhibit fluid agency: behavior that is (i) stochastic (probabilistic and path-dependent), (ii) dynamic (co-evolving with user interaction), and (iii) adaptive (able to reorient across contexts). Fluid agency generates valuable outputs but collapses attribution, irreducibly entangling human and machine inputs. This fundamental unmappability fractures doctrines that assume traceable provenance--authorship, inventorship, and liability--yielding ownership gaps and moral "crumple zones." This Article argues that only functional equivalence stabilizes doctrine. Where provenance is indeterminate, legal frameworks must treat human and AI contributions as equivalent for allocating rights and responsibility--not as a claim of moral or economic parity but as a pragmatic default. This principle stabilizes doctrine across domains, offering administrable rules: in copyright, vesting ownership in human orchestrators without parsing inseparable contributions; in patent, tying inventor-of-record status to human orchestration and reduction to practice, even when AI supplies the pivotal insight; and in tort, replacing intractable causation inquiries with enterprise-level and sector-specific strict or no-fault schemes. The contribution is both descriptive and normative: fluid agency explains why origin-based tests fail, while functional equivalence supplies an outcome-focused framework to allocate rights and responsibility when attribution collapses.
Related papers
- The Principle of Proportional Duty: A Knowledge-Duty Framework for Ethical Equilibrium in Human and Artificial Systems [0.0]
This paper introduces the Principle of Proportional Duty (PPD), a novel framework that models how ethical responsibility scales with an agent's epistemic state. As uncertainty increases, Action Duty (the duty to act decisively) is proportionally converted into Repair Duty (the active duty to verify, inquire, and resolve uncertainty). This paper applies the framework across four domains: clinical ethics, recipient-rights law, economic governance, and artificial intelligence, to demonstrate its cross-disciplinary validity.
arXiv Detail & Related papers (2025-12-07T02:37:07Z) - A Pragmatic View of AI Personhood [45.069027101429704]
Agentic Artificial Intelligence is set to trigger a "Cambrian explosion" of new kinds of personhood. This paper proposes a pragmatic framework for navigating this diversification. We argue that this traditional bundle can be unbundled, creating bespoke solutions for different contexts.
arXiv Detail & Related papers (2025-10-30T11:36:34Z) - Epistemic Scarcity: The Economics of Unresolvable Unknowns [0.0]
We argue that AI systems are incapable of performing the core functions of economic coordination. We critique dominant ethical AI frameworks as extensions of constructivist rationalism.
arXiv Detail & Related papers (2025-07-02T08:46:24Z) - Resource Rational Contractualism Should Guide AI Alignment [69.07915246220985]
Contractualist alignment proposes grounding decisions in agreements that diverse stakeholders would endorse. We propose Resource-Rationalism: a framework where AI systems approximate the agreements rational parties would form. An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
arXiv Detail & Related papers (2025-06-20T18:57:13Z) - Ethical AI: Towards Defining a Collective Evaluation Framework [0.3413711585591077]
Artificial Intelligence (AI) is transforming sectors such as healthcare, finance, and autonomous systems. Yet its rapid integration raises urgent ethical concerns related to data ownership, privacy, and systemic bias. This article proposes a modular ethical assessment framework built on ontological blocks of meaning: discrete, interpretable units.
arXiv Detail & Related papers (2025-05-30T21:10:47Z) - Stochastic, Dynamic, Fluid Autonomy in Agentic AI: Implications for Authorship, Inventorship, and Liability [0.2209921757303168]
Agentic AI systems autonomously pursue goals, adapting strategies through implicit learning. Human and machine contributions become irreducibly entangled in intertwined creative processes. We argue that legal and policy frameworks may need to treat human and machine contributions as functionally equivalent.
arXiv Detail & Related papers (2025-04-05T04:44:59Z) - Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [51.85131234265026]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z) - Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
Applying these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Revisiting the Performance-Explainability Trade-Off in Explainable Artificial Intelligence (XAI) [0.0]
We argue that it is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk.
This work aims to advance the field of Requirements Engineering for AI.
arXiv Detail & Related papers (2023-07-26T15:07:40Z) - Transporting Causal Mechanisms for Unsupervised Domain Adaptation [98.67770293233961]
We propose Transporting Causal Mechanisms (TCM) to identify the confounder stratum and representations.
TCM achieves state-of-the-art performance on three challenging Unsupervised Domain Adaptation benchmarks.
arXiv Detail & Related papers (2021-07-23T07:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.