Playing the Blame Game with Robots
- URL: http://arxiv.org/abs/2102.04527v1
- Date: Mon, 8 Feb 2021 20:53:42 GMT
- Title: Playing the Blame Game with Robots
- Authors: Markus Kneer and Michael T. Stuart
- Abstract summary: We find that people are willing to ascribe moral blame to AI systems in contexts of recklessness.
The higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent research shows -- somewhat astonishingly -- that people are willing to
ascribe moral blame to AI-driven systems when they cause harm [1]-[4]. In this
paper, we explore the moral-psychological underpinnings of these findings. Our
hypothesis was that the reason why people ascribe moral blame to AI systems is
that they consider them capable of entertaining inculpating mental states (what
is called mens rea in the law). To explore this hypothesis, we created a
scenario in which an AI system runs a risk of poisoning people by using a novel
type of fertilizer. Manipulating the computational (or quasi-cognitive)
abilities of the AI system in a between-subjects design, we tested whether
people are willing to ascribe knowledge of a substantial risk of harm (i.e.,
recklessness) and blame to the AI system. Furthermore, we investigated whether
the ascription of recklessness and blame to the AI system would influence the
perceived blameworthiness of the system's user (or owner). In an experiment
with 347 participants, we found (i) that people are willing to ascribe blame to
AI systems in contexts of recklessness, (ii) that blame ascriptions depend
strongly on the willingness to attribute recklessness and (iii) that the
latter, in turn, depends on the perceived "cognitive" capacities of the system.
Furthermore, our results suggest (iv) that the higher the computational
sophistication of the AI system, the more blame is shifted from the human user
to the AI system.
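Findings (ii) and (iii) describe a dependence chain: blame on the AI depends on ascribed recklessness, which in turn depends on perceived cognitive capacity. The sketch below illustrates how such a chain could be checked with simple regressions. It uses simulated stand-in data and hypothetical variable names (this is not the authors' analysis code or materials), assuming only a between-subjects condition variable and rating-scale measures of capacity, recklessness, and blame.

```python
# Hypothetical sketch: simulated data standing in for the between-subjects
# experiment described above (n = 347). Variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 347  # sample size reported in the abstract

condition = rng.integers(0, 2, n)                      # 0 = low, 1 = high computational sophistication
capacity = condition + rng.normal(0, 0.5, n)           # perceived "cognitive" capacity of the AI
recklessness = 0.8 * capacity + rng.normal(0, 0.5, n)  # ascribed knowledge of a substantial risk
blame_ai = 0.7 * recklessness + rng.normal(0, 0.5, n)  # blame directed at the AI system

df = pd.DataFrame({
    "condition": condition,
    "capacity": capacity,
    "recklessness": recklessness,
    "blame_ai": blame_ai,
})

# Total effect of perceived capacity on blame, and the same effect after
# controlling for recklessness; a marked drop is consistent with mediation.
total = smf.ols("blame_ai ~ capacity", data=df).fit()
direct = smf.ols("blame_ai ~ capacity + recklessness", data=df).fit()
print("total effect: ", round(total.params["capacity"], 3))
print("direct effect:", round(direct.params["capacity"], 3))
```

A drop in the capacity coefficient once recklessness is controlled for would be consistent with recklessness ascriptions mediating the effect of perceived capacity on blame, as the abstract reports.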
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that current AI shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Taking AI Welfare Seriously [0.5617572524191751]
We argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future.
AI welfare is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously.
arXiv Detail & Related papers (2024-11-04T17:57:57Z)
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm [11.960178399478721]
We investigate how several factors influence people's reactive attitudes towards machines, designers, and users.
Whether AI systems were explainable did not affect the blame directed at them, their developers, or their users.
We discuss implications, such as how future decisions about including AI systems in the social and moral spheres will shape laypeople's reactions to AI-caused harm.
arXiv Detail & Related papers (2023-04-05T00:50:07Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making [8.688778020322758]
We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
arXiv Detail & Related papers (2021-02-01T04:07:38Z)
- AI Failures: A Review of Underlying Issues [0.0]
We focus on AI failures on account of flaws in conceptualization, design and deployment.
We find that AI systems fail on account of errors of omission and commission in their design.
An AI system is quite likely to fail in situations where, in effect, it is called upon to deliver moral judgments.
arXiv Detail & Related papers (2020-07-18T15:31:29Z)