Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI
Interactions
- URL: http://arxiv.org/abs/2107.07015v1
- Date: Wed, 14 Jul 2021 21:33:14 GMT
- Title: Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI
Interactions
- Authors: Kailas Vodrahalli, Tobias Gerstenberg, James Zou
- Abstract summary: We characterize how humans use AI suggestions relative to equivalent suggestions from a group of peer humans.
We find that participants' beliefs about human versus AI performance on a given task affect whether or not they heed the advice.
- Score: 8.785345834486057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many applications of AI, the algorithm's output is framed as a suggestion
to a human user. The user may ignore the advice or take it into consideration
to modify their decisions. With the increasing prevalence of such human-AI
interactions, it is important to understand how users act (or do not act) upon
AI advice, and how users regard advice differently if they believe the advice
comes from an "AI" versus another human. In this paper, we characterize how
humans use AI suggestions relative to equivalent suggestions from a group of
peer humans across several experimental settings. We find that participants'
beliefs about human versus AI performance on a given task affect whether
or not they heed the advice. When participants decide to use the advice, they
do so similarly for human and AI suggestions. These results provide insights
into factors that affect human-AI interactions.
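A standard way to quantify how far a participant moves toward a suggestion is the weight of advice (WOA) from the judge-advisor literature. The sketch below is an illustrative analysis in that spirit, not the paper's actual code, and the trial data are hypothetical.

```python
import numpy as np

def weight_of_advice(initial, advice, final):
    """Weight of advice (WOA): 0 means the suggestion was ignored,
    1 means the final answer moved all the way to the suggestion.
    Trials where the advice equals the initial answer leave WOA
    undefined, so they are returned as NaN."""
    initial, advice, final = (np.asarray(x, dtype=float)
                              for x in (initial, advice, final))
    denom = advice - initial
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom == 0, np.nan, (final - initial) / denom)

# Hypothetical trials: a first estimate, the suggestion shown
# (framed as coming from an "AI" or from peer humans), and the
# revised estimate after seeing the suggestion.
initial = [50, 30, 70]
advice = [60, 30, 40]
final = [55, 30, 52]
print(weight_of_advice(initial, advice, final))  # [0.5 nan 0.6]
```

Comparing mean WOA between the "AI" and "human" framings, conditional on participants choosing to use the advice at all, mirrors the paper's distinction between whether advice is heeded and how it is used once heeded.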
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z) - Beyond Recommender: An Exploratory Study of the Effects of Different AI
Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z) - Improving Human-AI Collaboration With Descriptions of AI Behavior [14.904401331154062]
People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.
To help people appropriately rely on AI aids, we propose showing them behavior descriptions.
arXiv Detail & Related papers (2023-01-06T00:33:08Z) - Should I Follow AI-based Advice? Measuring Appropriate Reliance in
Human-AI Decision-Making [0.0]
We aim to enable humans not to rely blindly on AI advice but to distinguish its quality and act on it to make better decisions.
Current research lacks a metric for appropriate reliance (AR) on AI advice on a case-by-case basis.
We propose to view AR as a two-dimensional construct that measures the ability to discriminate advice quality and behave accordingly.
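A minimal sketch of how such a two-dimensional measure might be operationalized; the names and exact definitions below are assumptions in the spirit of the construct, not the paper's metric.

```python
import numpy as np

def reliance_dimensions(human_initial, ai_advice, human_final, truth):
    """Two illustrative reliance dimensions (an assumption, not the
    paper's definition): the rate of switching to the AI when it is
    right and the initial human answer is wrong, and the rate of
    keeping one's own answer when the AI is wrong and the initial
    answer is right."""
    h0, a, h1, y = map(np.asarray, (human_initial, ai_advice, human_final, truth))
    good_advice = (a == y) & (h0 != y)  # AI right, human initially wrong
    bad_advice = (a != y) & (h0 == y)   # AI wrong, human initially right
    switch = np.mean(h1[good_advice] == a[good_advice]) if good_advice.any() else np.nan
    stay = np.mean(h1[bad_advice] == h0[bad_advice]) if bad_advice.any() else np.nan
    return switch, stay

# Hypothetical labeled trials for a binary task.
print(reliance_dimensions(
    human_initial=[0, 1, 0, 1],
    ai_advice=[1, 1, 1, 0],
    human_final=[1, 1, 0, 0],
    truth=[1, 1, 0, 1]))  # (1.0, 0.5)
```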
arXiv Detail & Related papers (2022-04-14T12:18:51Z) - The Response Shift Paradigm to Quantify Human Trust in AI
Recommendations [6.652641137999891]
Explainability, interpretability, and how much they affect human trust in AI systems are ultimately problems of human cognition as much as of machine learning.
We developed and validated a general purpose Human-AI interaction paradigm which quantifies the impact of AI recommendations on human decisions.
Our proof-of-principle paradigm allows one to quantitatively compare the rapidly growing set of XAI/IAI approaches in terms of their effect on the end-user.
arXiv Detail & Related papers (2022-02-16T22:02:09Z) - Uncalibrated Models Can Improve Human-AI Collaboration [10.106324182884068]
We show that presenting AI models as more confident than they actually are can improve human-AI performance.
We first learn a model for how humans incorporate AI advice using data from thousands of human interactions.
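As a toy illustration of presenting a model as more confident than it is, one could sharpen a calibrated probability by scaling its log-odds. This fixed rule is an assumption for illustration only; the paper instead chooses the confidence to report using its learned model of how humans take advice.

```python
import numpy as np

def inflate_confidence(p, alpha=2.0):
    """Push a calibrated probability p toward 0 or 1 by scaling its
    log-odds (alpha > 1 overstates confidence; alpha = 1 reports the
    calibrated value). Illustrative assumption, not the paper's policy."""
    p = np.clip(np.asarray(p, dtype=float), 1e-6, 1 - 1e-6)
    logit = np.log(p / (1 - p))
    return 1.0 / (1.0 + np.exp(-alpha * logit))

print(inflate_confidence([0.6, 0.9]))  # approx. [0.69, 0.99]
```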
arXiv Detail & Related papers (2022-02-12T04:51:00Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two groups (people with and without an AI background) perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - The corruptive force of AI-generated advice [0.0]
We test whether AI-generated advice can corrupt people.
We also test whether transparency about AI presence mitigates potential harm.
Results reveal that AI's corrupting force is as strong as humans'.
arXiv Detail & Related papers (2021-02-15T13:15:12Z)