The Conflict Between People's Urge to Punish AI and Legal Systems
- URL: http://arxiv.org/abs/2003.06507v3
- Date: Thu, 11 Nov 2021 02:36:45 GMT
- Title: The Conflict Between People's Urge to Punish AI and Legal Systems
- Authors: Gabriel Lima, Meeyoung Cha, Chihyung Jeon, Kyungsin Park
- Abstract summary: We present two studies to obtain people's views of electronic legal personhood vis-à-vis existing liability models.
Our study reveals people's desire to punish automated agents even though these entities are not recognized as having any mental state.
We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents' wrongdoings.
- Score: 12.935691101666453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regulating artificial intelligence (AI) has become necessary in light of its
deployment in high-risk scenarios. This paper explores the proposal to extend
legal personhood to AI and robots, which had not yet been examined through the
lens of the general public. We present two studies (N = 3,559) to obtain
people's views of electronic legal personhood vis-à-vis existing liability
models. Our study reveals people's desire to punish automated agents even
though these entities are not recognized as having any mental state.
Furthermore, people did not believe that punishing automated agents would
fulfill either deterrence or retribution, and they were unwilling to grant
them legal punishment preconditions,
namely physical independence and assets. Collectively, these findings suggest a
conflict between the desire to punish automated agents and its perceived
impracticability. We conclude by discussing how future design and legal
decisions may influence how the public reacts to automated agents' wrongdoings.
Related papers
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Towards a Theory of AI Personhood [1.6317061277457001]
We outline necessary conditions for AI personhood, focusing on agency, theory-of-mind, and self-awareness.
If AI systems can be considered persons, then typical framings of AI alignment may be incomplete.
arXiv Detail & Related papers (2025-01-23T10:31:26Z)
- Exploring the Impact of Rewards on Developers' Proactive AI Accountability Behavior [0.0]
We develop a theoretical model grounded in Self-Determination Theory to uncover the potential impact of rewards and sanctions on AI developers.
By surveying related research from various domains, we identify typical sanctions and bug bounties as potential reward mechanisms.
arXiv Detail & Related papers (2024-11-27T14:34:44Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- The Ethics of Automating Legal Actors [58.81546227716182]
We argue that automating the role of the judge raises difficult ethical challenges, in particular for common law legal systems.
Our argument follows from the social role of the judge in actively shaping the law, rather than merely applying it.
Even if these models could achieve human-level capabilities, ethical concerns inherent in automating the legal process would remain.
arXiv Detail & Related papers (2023-12-01T13:48:46Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety [69.59465535312815]
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentives which would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Immune Moral Models? Pro-Social Rule Breaking as a Moral Enhancement Approach for Ethical AI [0.17188280334580192]
Ethical behaviour is a critical characteristic that we would like in a human-centric AI.
To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break rules.
arXiv Detail & Related papers (2021-06-17T18:44:55Z)
- Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making [8.688778020322758]
We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
arXiv Detail & Related papers (2021-02-01T04:07:38Z)
- Collecting the Public Perception of AI and Robot Rights [10.791267046450077]
The European Parliament proposed that advanced robots could be granted "electronic personalities".
This paper collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future.
arXiv Detail & Related papers (2020-08-04T05:35:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.