How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity
- URL: http://arxiv.org/abs/2511.14964v1
- Date: Tue, 18 Nov 2025 23:08:28 GMT
- Title: How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity
- Authors: Heather J. Alexander, Jonathan A. Simon, Frédéric Pinard
- Abstract summary: We will assess whether we maximize overall long-term legal coherence by (A) maintaining an object classification for all future AI systems, (B) creating fictional legal persons associated with suitably advanced, individuated AI systems, or (C) recognizing non-fictional legal personhood through legal identity for suitably advanced, individuated AI systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The law draws a sharp distinction between objects and persons, and between two kinds of persons, the "fictional" kind (i.e. corporations) and the "non-fictional" kind (individual or "natural" persons). This paper will assess whether we maximize overall long-term legal coherence by (A) maintaining an object classification for all future AI systems, (B) creating fictional legal persons associated with suitably advanced, individuated AI systems (giving these fictional legal persons derogable rights and duties associated with certified groups of existing persons, potentially including free speech, contract rights, and standing to sue "on behalf of" the AI system), or (C) recognizing non-fictional legal personhood through legal identity for suitably advanced, individuated AI systems (recognizing them as entities meriting legal standing with non-derogable rights, which for the human case include life, due process, habeas corpus, freedom from slavery, and freedom of conscience). We will clarify the meaning and implications of each option along the way, considering liability, copyright, family law, fundamental rights, civil rights, citizenship, and AI safety regulation. We will tentatively find that the non-fictional personhood approach may be best from a coherence perspective, for at least some advanced AI systems. An object approach may prove untenable for sufficiently humanoid advanced systems, though we suggest that it is adequate for currently existing systems as of 2025. While fictional personhood would resolve some coherence issues for future systems, it would create others and provide solutions that are neither durable nor fit for purpose. Finally, our review will suggest that "hybrid" approaches are likely to fail and lead to further incoherence: the choice between object, fictional person, and non-fictional person is unavoidable.
Related papers
- From Slaves to Synths? Superintelligence and the Evolution of Legal Personality
Legal systems have long been open to extending personhood to non-human entities.
The paper argues that the eventual development of superintelligence may force a paradigmatic shift in our understanding of law itself.
arXiv Detail & Related papers (2026-01-06T07:09:55Z)
- Resource Rational Contractualism Should Guide AI Alignment
Contractualist alignment proposes grounding decisions in agreements that diverse stakeholders would endorse.
We propose Resource-Rationalism: a framework where AI systems approximate the agreements rational parties would form.
An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
arXiv Detail & Related papers (2025-06-20T18:57:13Z)
- Towards a Theory of AI Personhood
We outline necessary conditions for AI personhood, focusing on agency, theory-of-mind, and self-awareness.
If AI systems can be considered persons, then typical framings of AI alignment may be incomplete.
arXiv Detail & Related papers (2025-01-23T10:31:26Z)
- Debunking Robot Rights Metaphysically, Ethically, and Legally
We argue that machines are not the kinds of things that may be denied or granted rights.
From a legal perspective, the best analogy to robot rights is not human rights but corporate rights.
arXiv Detail & Related papers (2024-04-15T18:23:58Z)
- A Case for AI Safety via Law
How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question.
Proposed solutions tend toward relying on human intervention in uncertain situations.
This paper makes a case that effective legal systems are the best way to address AI safety.
arXiv Detail & Related papers (2023-07-31T19:55:27Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans
Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives.
"Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI.
arXiv Detail & Related papers (2022-09-14T00:49:09Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
This list is automatically generated from the titles and abstracts of papers on this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences arising from its use.