Collecting the Public Perception of AI and Robot Rights
- URL: http://arxiv.org/abs/2008.01339v1
- Date: Tue, 4 Aug 2020 05:35:29 GMT
- Title: Collecting the Public Perception of AI and Robot Rights
- Authors: Gabriel Lima, Changyeon Kim, Seungho Ryu, Chihyung Jeon, Meeyoung Cha
- Abstract summary: The European Parliament has proposed that advanced robots could be granted "electronic personalities."
This paper collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future.
- Score: 10.791267046450077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whether to give rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed that advanced robots could be granted "electronic personalities." Numerous scholars, favoring or disfavoring its feasibility, have participated in the debate. This paper presents an experiment (N=1270) that 1) collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future and 2) examines whether debunking common misconceptions about the proposal modifies one's stance toward the issue. The results indicate that even though online users mainly disfavor AI and robot rights, they support protecting electronic agents from cruelty (i.e., they favor a right against cruel treatment). Furthermore, participants' perceptions became more positive when they were given information about rights-bearing non-human entities or myth-refuting statements. The style used to introduce AI and robot rights significantly affected how participants perceived the proposal, similar to the way metaphors function in lawmaking. For robustness, we repeated the experiment with a more representative sample of U.S. residents (N=164) and found that the perceptions of online users and those of the general population are similar.
Related papers
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Consent in Crisis: The Rapid Decline of the AI Data Commons [74.68176012363253]
General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data.
We conduct the first, large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora.
arXiv Detail & Related papers (2024-07-20T16:50:18Z)
- What Do People Think about Sentient AI? [0.0]
We present the first nationally representative survey data on the topic of sentient AI.
Across one wave of data collection in 2021 and two in 2023, we found that mind perception and moral concern for AI well-being were higher than predicted.
We argue that, whether or not AIs become sentient, the discussion itself may overhaul human-computer interaction.
arXiv Detail & Related papers (2024-07-11T21:04:39Z)
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies examining how people react to and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- Debunking Robot Rights Metaphysically, Ethically, and Legally [0.10241134756773229]
We argue that machines are not the kinds of things that may be denied or granted rights.
From a legal perspective, the best analogy to robot rights is not human rights but corporate rights.
arXiv Detail & Related papers (2024-04-15T18:23:58Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Artificial Influence: An Analysis Of AI-Driven Persuasion [0.0]
We warn that ubiquitous, highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control over our own future.
We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.
arXiv Detail & Related papers (2023-03-15T16:05:11Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [64.77385130665128]
The aim of this workshop is to foster the exchange of insights on past and ongoing research towards effective and long-lasting collaborations between humans and robots.
We particularly focus on AI techniques required to implement autonomous and proactive interactions.
arXiv Detail & Related papers (2022-06-07T11:12:45Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The Conflict Between People's Urge to Punish AI and Legal Systems [12.935691101666453]
We present two studies to obtain people's views of electronic legal personhood vis-a-vis existing liability models.
Our study reveals people's desire to punish automated agents even though these entities are not recognized as having any mental state.
We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents' wrongdoings.
arXiv Detail & Related papers (2020-03-13T23:19:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.