From Slaves to Synths? Superintelligence and the Evolution of Legal Personality
- URL: http://arxiv.org/abs/2601.02773v1
- Date: Tue, 06 Jan 2026 07:09:55 GMT
- Title: From Slaves to Synths? Superintelligence and the Evolution of Legal Personality
- Authors: Simon Chesterman
- Abstract summary: Legal systems have long been open to extending personhood to non-human entities. The paper argues that the eventual development of superintelligence may force a paradigmatic shift in our understanding of law itself.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This essay examines the evolving concept of legal personality through the lens of recent developments in artificial intelligence and the possible emergence of superintelligence. Legal systems have long been open to extending personhood to non-human entities, most prominently corporations, for instrumental or inherent reasons. Instrumental rationales emphasize accountability and administrative efficiency, whereas inherent ones appeal to moral worth and autonomy. Neither is yet sufficient to justify conferring personhood on AI. Nevertheless, the acceleration of technological autonomy may lead us to reconsider how law conceptualizes agency and responsibility. Drawing on comparative jurisprudence, corporate theory, and the emerging literature on AI governance, the paper argues that existing frameworks can address short-term accountability gaps, but the eventual development of superintelligence may force a paradigmatic shift in our understanding of law itself. In such a speculative future, legal personality may depend less on the cognitive sophistication of machines than on humanity's ability to preserve our own moral and institutional sovereignty.
Related papers
- A Moral Agency Framework for Legitimate Integration of AI in Bureaucracies [0.0]
Public-sector bureaucracies seek to reap the benefits of artificial intelligence (AI). We present a three-point Moral Agency Framework for legitimate integration of AI in bureaucratic structures.
arXiv Detail & Related papers (2025-08-11T17:49:19Z)
- Epistemic Scarcity: The Economics of Unresolvable Unknowns [0.0]
We argue that AI systems are incapable of performing the core functions of economic coordination. We critique dominant ethical AI frameworks as extensions of constructivist rationalism.
arXiv Detail & Related papers (2025-07-02T08:46:24Z)
- Artificial Intelligence (AI) and the Relationship between Agency, Autonomy, and Moral Patiency [0.0]
We argue that while current AI systems are highly sophisticated, they lack genuine agency and autonomy. We do not rule out the possibility of future systems that could achieve a limited form of artificial moral agency without consciousness.
arXiv Detail & Related papers (2025-04-11T03:48:40Z)
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [51.85131234265026]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization [0.0]
We show how many of the criticisms directed towards AI systems spring from well-known tensions at the heart of Weberian rationalization.
Our analysis shows that building a machine-like tax system that promotes social and economic equality is possible.
It also highlights that AI-driven policy optimization comes at the cost of excluding other competing political values.
arXiv Detail & Related papers (2024-07-07T11:54:14Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- AI and Legal Argumentation: Aligning the Autonomous Levels of AI Legal Reasoning [0.0]
Legal argumentation is a vital cornerstone of justice, underpinning an adversarial form of law.
Extensive research has attempted to augment or undertake legal argumentation via the use of computer-based automation, including Artificial Intelligence (AI).
An innovative meta-approach is proposed to apply the Levels of Autonomy (LoA) of AI Legal Reasoning to the maturation of AI and Legal Argumentation (AILA).
arXiv Detail & Related papers (2020-09-11T22:05:40Z)
- Reasonable Machines: A Research Manifesto [0.0]
A sound ecosystem of trust requires ways for autonomous systems to justify their actions.
Such justification can build on social reasoning models drawn from moral and legal philosophy.
Enabling normative communication creates trust and opens new dimensions of AI application.
arXiv Detail & Related papers (2020-08-14T08:51:33Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, at the universities, of Ethical Committees or Commissions specialized on Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.