Human Perceptions on Moral Responsibility of AI: A Case Study in
AI-Assisted Bail Decision-Making
- URL: http://arxiv.org/abs/2102.00625v1
- Date: Mon, 1 Feb 2021 04:07:38 GMT
- Authors: Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha
- Abstract summary: We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
- Score: 8.688778020322758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How to attribute responsibility for autonomous artificial intelligence (AI)
systems' actions has been widely debated across the humanities and social
science disciplines. This work presents two experiments ($N$=200 each) that
measure people's perceptions of eight different notions of moral responsibility
concerning AI and human agents in the context of bail decision-making. Using
real-life adapted vignettes, our experiments show that AI agents are held
causally responsible and blamed similarly to human agents for an identical
task. However, there was a meaningful difference in how people perceived these
agents' moral responsibility; human agents were ascribed present-looking and
forward-looking notions of responsibility to a higher degree than AI agents.
We also found that people expect both AI and human decision-makers and advisors
to justify their decisions regardless of their nature. We discuss policy and
HCI implications of these findings, such as the need for explainable AI in
high-stakes scenarios.
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Responsible AI Research Needs Impact Statements Too [51.37368267352821]
Work in responsible artificial intelligence (RAI), ethical AI, or ethics in AI is no exception.
arXiv Detail & Related papers (2023-11-20T14:02:28Z) - Bad, mad, and cooked: Moral responsibility for civilian harms in
human-AI military teams [0.0]
This chapter explores moral responsibility for civilian harms by human-artificial intelligence (AI) teams.
Increasingly, militaries may 'cook' their good apples by putting them in untenable decision-making environments.
This chapter offers new mechanisms to map out conditions for moral responsibility in human-AI teams.
arXiv Detail & Related papers (2022-10-31T10:18:20Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Meaningful human control over AI systems: beyond talking the talk [8.351027101823705]
We identify four properties which AI-based systems must have to be under meaningful human control.
First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations.
Second, humans and AI agents within the system should have appropriate and mutually compatible representations.
Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system.
arXiv Detail & Related papers (2021-11-25T11:05:37Z) - An Ethical Framework for Guiding the Development of Affectively-Aware
Artificial Intelligence [0.0]
We propose guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI.
We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers vis-a-vis the entities that deploy such AI.
We end with recommendations for researchers, developers, operators, as well as regulators and law-makers.
arXiv Detail & Related papers (2021-07-29T03:57:53Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - The human-AI relationship in decision-making: AI explanation to support
people on justifying their decisions [4.169915659794568]
In decision-making scenarios, people need more awareness of how AI works and its outcomes to build a relationship with that system.
arXiv Detail & Related papers (2021-02-10T14:28:34Z) - Hiding Behind Machines: When Blame Is Shifted to Artificial Agents [0.0]
This article focuses on the responsibility of agents who decide on our behalf.
We investigate whether the production of moral outcomes by an agent is systematically judged differently when the agent is artificial and not human.
arXiv Detail & Related papers (2021-01-27T14:50:02Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including these summaries) and is not responsible for any consequences of its use.