Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness
Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making
- URL: http://arxiv.org/abs/2301.05809v1
- Date: Sat, 14 Jan 2023 02:51:01 GMT
- Title: Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness
Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making
- Authors: Shuai Ma, Ying Lei, Xinru Wang, Chengbo Zheng, Chuhan Shi, Ming Yin,
Xiaojuan Ma
- Abstract summary: In AI-assisted decision-making, it is critical for human decision-makers to know when to trust AI and when to trust themselves.
We modeled humans' correctness likelihood (CL) by approximating their decision-making models and computing their potential performance in similar instances.
We proposed three CL exploitation strategies to calibrate users' trust explicitly/implicitly in the AI-assisted decision-making process.
- Score: 36.50604819969994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In AI-assisted decision-making, it is critical for human decision-makers to
know when to trust AI and when to trust themselves. However, prior studies
calibrated human trust based only on AI confidence, which indicates the AI's
correctness likelihood (CL), but ignored humans' CL, hindering optimal team
decision-making. To close this gap, we proposed to promote humans' appropriate
trust based on
the CL of both sides at a task-instance level. We first modeled humans' CL by
approximating their decision-making models and computing their potential
performance in similar instances. We demonstrated the feasibility and
effectiveness of our model via two preliminary studies. Then, we proposed three
CL exploitation strategies to calibrate users' trust explicitly/implicitly in
the AI-assisted decision-making process. Results from a between-subjects
experiment (N=293) showed that our CL exploitation strategies promoted more
appropriate human trust in AI, compared with only using AI confidence. We
further provided practical implications for more human-compatible AI-assisted
decision-making.
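At a high level, the paper's human CL model approximates the decision-maker's behavior and estimates how well they would perform on instances similar to the current one; that per-instance estimate is then set against the AI's confidence to calibrate trust. The sketch below is only an illustration of this idea, not the authors' implementation: it assumes a simple nearest-neighbor estimate of human CL from past decisions and a threshold-style comparison with AI confidence, and every function name and parameter in it is hypothetical.

```python
# Illustrative sketch (not the paper's released code): estimate a human
# decision-maker's correctness likelihood (CL) on a new instance from their
# record on the k most similar past instances, then compare it with the AI's
# confidence to suggest whose judgment to lean on for this instance.
import numpy as np

def estimate_human_cl(x_new, past_features, past_human_correct, k=10):
    """Approximate the human's CL on x_new from similar past instances.

    past_features: (n, d) array of feature vectors of past instances.
    past_human_correct: (n,) array of 0/1 flags, 1 if the human was correct.
    """
    dists = np.linalg.norm(past_features - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    # Fraction of correct decisions among the k most similar instances,
    # with Laplace smoothing so the estimate stays away from exactly 0 or 1.
    return (past_human_correct[nearest].sum() + 1) / (k + 2)

def recommend_trust(ai_confidence, human_cl, margin=0.05):
    """A simple explicit calibration cue: whose answer looks more reliable here?"""
    if ai_confidence > human_cl + margin:
        return "lean on the AI's recommendation"
    if human_cl > ai_confidence + margin:
        return "lean on your own judgment"
    return "both are similarly likely to be correct; decide carefully"

# Example usage with synthetic data.
rng = np.random.default_rng(0)
past_X = rng.normal(size=(200, 5))
past_correct = (rng.random(200) < 0.7).astype(int)  # human correct ~70% overall
x = rng.normal(size=5)

human_cl = estimate_human_cl(x, past_X, past_correct)
print(recommend_trust(ai_confidence=0.82, human_cl=human_cl))
```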
Related papers
- Overconfident and Unconfident AI Hinder Human-AI Collaboration [5.480154202794587]
This study examines the effects of uncalibrated AI confidence on users' trust in AI, AI advice adoption, and collaboration outcomes.
Deficiency of trust calibration support exacerbates this issue by making it harder to detect uncalibrated confidence.
Our findings highlight the importance of AI confidence calibration for enhancing human-AI collaboration.
arXiv Detail & Related papers (2024-02-12T13:16:30Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge [22.21959942886099]
We introduce a process-oriented notion of appropriate reliance called critical use that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model.
We conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening.
We find that, by providing participants with accelerated, low-stakes opportunities to practice AI-assisted decision-making, novices came to exhibit patterns of disagreement with AI that resemble those of experienced workers.
arXiv Detail & Related papers (2023-08-30T01:54:31Z)
- Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs [22.52332536886295]
We present a novel formulation of the interaction between the human and the AI as a sequential game.
We show that in this case the AI's problem of helping bounded-rational humans make better decisions reduces to a Bayes-adaptive POMDP.
We discuss ways in which the machine can learn to improve upon its own limitations as well, with the help of the human.
arXiv Detail & Related papers (2022-04-03T21:00:51Z)
- The Response Shift Paradigm to Quantify Human Trust in AI Recommendations [6.652641137999891]
Explainability, interpretability and how much they affect human trust in AI systems are ultimately problems of human cognition as much as machine learning.
We developed and validated a general-purpose human-AI interaction paradigm which quantifies the impact of AI recommendations on human decisions.
Our proof-of-principle paradigm allows one to quantitatively compare the rapidly growing set of XAI/IAI approaches in terms of their effect on the end-user.
arXiv Detail & Related papers (2022-02-16T22:02:09Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)