The Conflict Between People's Urge to Punish AI and Legal Systems
- URL: http://arxiv.org/abs/2003.06507v3
- Date: Thu, 11 Nov 2021 02:36:45 GMT
- Title: The Conflict Between People's Urge to Punish AI and Legal Systems
- Authors: Gabriel Lima, Meeyoung Cha, Chihyung Jeon, Kyungsin Park
- Abstract summary: We present two studies to obtain people's views of electronic legal personhood vis-a-vis existing liability models.
Our study reveals people's desire to punish automated agents even though these entities are not recognized as having any mental state.
We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents' wrongdoings.
- Score: 12.935691101666453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regulating artificial intelligence (AI) has become necessary in light of its
deployment in high-risk scenarios. This paper explores the proposal to extend
legal personhood to AI and robots, which has not yet been examined through the
lens of the general public. We present two studies (N = 3,559) to obtain
people's views of electronic legal personhood vis-à-vis existing liability
models. Our study reveals people's desire to punish automated agents even
though these entities are not recognized as having any mental state. Furthermore,
people did not believe that punishing automated agents would fulfill either
deterrence or retribution, and they were unwilling to grant these agents the preconditions of legal punishment,
namely physical independence and assets. Collectively, these findings suggest a
conflict between the desire to punish automated agents and its perceived
impracticability. We conclude by discussing how future design and legal
decisions may influence how the public reacts to automated agents' wrongdoings.
Related papers
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting independent evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z) - The Ethics of Automating Legal Actors [58.81546227716182]
We argue that automating the role of the judge raises difficult ethical challenges, in particular for common law legal systems.
Our argument follows from the social role of the judge in actively shaping the law, rather than merely applying it.
Even if the models could achieve human-level capabilities, ethical concerns inherent in the automation of the legal process would remain.
arXiv Detail & Related papers (2023-12-01T13:48:46Z) - The Manipulation Problem: Conversational AI as a Threat to Epistemic
Agency [0.0]
The technology of Conversational AI has made significant advancements over the last eighteen months.
Conversational agents designed to pursue targeted influence objectives are likely to be deployed in the near future.
Sometimes referred to as the "AI Manipulation Problem," the emerging risk is that consumers will unwittingly engage in real-time dialog with predatory AI agents.
arXiv Detail & Related papers (2023-06-19T04:09:16Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Both eyes open: Vigilant Incentives help Regulatory Markets improve AI
Safety [69.59465535312815]
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentive structures that would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z) - Measuring an artificial intelligence agent's trust in humans using
machine incentives [2.1016374925364616]
Gauging an AI agent's trust in humans is challenging because a dishonest agent might respond falsely about its trust in humans.
We present a method for incentivizing machine decisions without altering an AI agent's underlying algorithms or goal orientation.
Our experiments suggest that one of the most advanced AI language models to date alters its social behavior in response to incentives.
arXiv Detail & Related papers (2022-12-27T06:05:49Z) - Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Immune Moral Models? Pro-Social Rule Breaking as a Moral Enhancement
Approach for Ethical AI [0.17188280334580192]
Ethical behaviour is a critical characteristic that we would like in a human-centric AI.
To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break rules.
arXiv Detail & Related papers (2021-06-17T18:44:55Z) - Human Perceptions on Moral Responsibility of AI: A Case Study in
AI-Assisted Bail Decision-Making [8.688778020322758]
We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
arXiv Detail & Related papers (2021-02-01T04:07:38Z) - Collecting the Public Perception of AI and Robot Rights [10.791267046450077]
The European Parliament proposed that advanced robots could be granted "electronic personalities."
This paper collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future.
arXiv Detail & Related papers (2020-08-04T05:35:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.