Blaming Humans and Machines: What Shapes People's Reactions to
Algorithmic Harm
- URL: http://arxiv.org/abs/2304.02176v1
- Date: Wed, 5 Apr 2023 00:50:07 GMT
- Title: Blaming Humans and Machines: What Shapes People's Reactions to
Algorithmic Harm
- Authors: Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha
- Abstract summary: We investigate how several factors influence people's reactive attitudes towards machines, designers, and users.
Whether AI systems were explainable did not impact blame directed at them, their developers, and their users.
We discuss implications, such as how future decisions about including AI systems in the social and moral spheres will shape laypeople's reactions to AI-caused harm.
- Score: 11.960178399478721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) systems can cause harm to people. This research
examines how individuals react to such harm through the lens of blame. Building
upon research suggesting that people blame AI systems, we investigated how
several factors influence people's reactive attitudes towards machines,
designers, and users. The results of three studies (N = 1,153) indicate
differences in how blame is attributed to these actors. Whether AI systems were
explainable did not impact blame directed at them, their developers, and their
users. Considerations about fairness and harmfulness increased blame towards
designers and users but had little to no effect on judgments of AI systems.
Instead, what determined people's reactive attitudes towards machines was
whether people thought blaming them would be a suitable response to algorithmic
harm. We discuss implications, such as how future decisions about including AI
systems in the social and moral spheres will shape laypeople's reactions to
AI-caused harm.
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
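The SCM framework above is only summarized here; as a rough illustration of the general idea of counterfactual responsibility attribution (not the model from that paper), the toy Python sketch below wires a two-step human-AI pipeline into a structural causal model and marks an actor as responsible when flipping only that actor's behaviour would have avoided the harm. Every function, threshold, and the but-for attribution rule is an assumption made for illustration.

```python
# Toy structural causal model (SCM) for a human-AI decision pipeline.
# Illustrative sketch only: the variables, mechanisms, and attribution
# rule below are assumptions, not the framework from the paper above.

def ai_recommendation(case_severity: float) -> bool:
    """The AI flags a case when a (hypothetical) severity score exceeds a threshold."""
    return case_severity > 0.7

def human_decision(ai_flag: bool, human_attention: float) -> bool:
    """An inattentive user rubber-stamps the AI flag; an attentive one rejects it."""
    return ai_flag if human_attention < 0.5 else False

def harm(final_decision: bool) -> bool:
    """In this toy world, acting on the flag causes the harmful outcome."""
    return final_decision

def counterfactual_responsibility(case_severity: float, human_attention: float) -> dict:
    """But-for test: an actor is responsible if intervening on that actor alone
    (a do-style flip of its output) would have prevented the harm."""
    ai_flag = ai_recommendation(case_severity)
    decision = human_decision(ai_flag, human_attention)
    harmed = harm(decision)

    # Intervene on the AI node only, keeping the human mechanism fixed.
    harmed_without_ai = harm(human_decision(not ai_flag, human_attention))
    # Intervene on the human node only.
    harmed_without_human = harm(not decision)

    return {
        "harm": harmed,
        "ai_responsible": harmed and not harmed_without_ai,
        "human_responsible": harmed and not harmed_without_human,
    }

if __name__ == "__main__":
    # A severe case reviewed by an inattentive user: flipping either actor
    # would avert the harm, so both carry counterfactual responsibility here.
    print(counterfactual_responsibility(case_severity=0.9, human_attention=0.2))
```

The but-for test is deliberately minimal; graded notions of responsibility over an SCM would weigh each actor's causal contribution rather than returning a binary verdict.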
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping that relates human biases to AI biases, and we detect relevant fairness intensities and inter-dependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z)
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research [10.705827568946606]
We analyse 3,533 explainable AI (XAI) research articles from the Semantic Scholar Open Research Corpus (S2ORC).
We identify three dominant types of mind attribution: (1) metaphorical (e.g., "to learn" or "to predict"), (2) awareness (e.g., "to consider"), and (3) agency.
We find that participants who were given a mind-attributing explanation were more likely to rate the AI system as aware of the harm it caused.
Considering the AI experts' involvement led to reduced ratings of AI responsibility for participants who were given a non-mind-attributing explanation.
arXiv Detail & Related papers (2023-12-19T12:49:32Z)
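The corpus analysis summarized above hinges on spotting mind-attributing language in XAI abstracts. As a minimal sketch of what such tagging could look like, assuming a simple cue-verb lexicon (the verbs and category labels below are illustrative guesses, not the paper's coding scheme), the snippet counts regular-expression matches per category.

```python
import re

# Minimal sketch of tagging mind-attributing language in paper abstracts.
# The cue-verb lists are illustrative assumptions, not the coding scheme
# used in the study summarized above.
MIND_CUES = {
    "metaphorical": [r"\blearn(s|ed|ing)?\b", r"\bpredict(s|ed|ing)?\b"],
    "awareness":    [r"\bconsider(s|ed|ing)?\b", r"\bnotice(s|d)?\b"],
    "agency":       [r"\bdecide(s|d)?\b", r"\bintend(s|ed)?\b"],
}

def tag_mind_attribution(abstract: str) -> dict:
    """Count cue-verb matches per category in one abstract (case-insensitive)."""
    text = abstract.lower()
    return {
        category: sum(len(re.findall(pattern, text)) for pattern in patterns)
        for category, patterns in MIND_CUES.items()
    }

if __name__ == "__main__":
    example = "The model learns user preferences, considers context, and decides autonomously."
    print(tag_mind_attribution(example))  # {'metaphorical': 1, 'awareness': 1, 'agency': 1}
```

A real pipeline over 3,533 S2ORC articles would need tokenization, deduplication, and human validation of the categories; the point here is only to make the notion of mind-attributing explanation wording concrete.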
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow the examination and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Playing the Blame Game with Robots [0.0]
We find that people are willing to ascribe moral blame to AI systems in contexts of recklessness.
The higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
arXiv Detail & Related papers (2021-02-08T20:53:42Z)
- Towards AI Forensics: Did the Artificial Intelligence System Do It? [2.5991265608180396]
We focus on AI that is potentially "malicious by design" and on grey-box analysis.
Our evaluation using convolutional neural networks illustrates challenges and ideas for identifying malicious AI.
arXiv Detail & Related papers (2020-05-27T20:28:19Z)
- Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making [48.66982301902923]
We examined the effect of feedback from a purported AI on moral decision-making about donor kidney allocation.
We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback on participants' own decision-making that was perceived to come from an AI.
arXiv Detail & Related papers (2020-01-13T14:15:18Z)