Will We Trust What We Don't Understand? Impact of Model Interpretability
and Outcome Feedback on Trust in AI
- URL: http://arxiv.org/abs/2111.08222v1
- Date: Tue, 16 Nov 2021 04:35:34 GMT
- Title: Will We Trust What We Don't Understand? Impact of Model Interpretability
and Outcome Feedback on Trust in AI
- Authors: Daehwan Ahn (1), Abdullah Almaatouq (2), Monisha Gulabani (1), Kartik
Hosanagar (1) ((1) The Wharton School, University of Pennsylvania; (2) Sloan
School of Management, Massachusetts Institute of Technology)
- Abstract summary: We analyze the impact of interpretability and outcome feedback on trust in AI and on human performance in AI-assisted prediction tasks.
We find that interpretability led to no robust improvements in trust, while outcome feedback had a significantly greater and more reliable effect.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite AI's superhuman performance in a variety of domains, humans are often
unwilling to adopt AI systems. The lack of interpretability inherent in many
modern AI techniques is believed to be hurting their adoption, as users may not
trust systems whose decision processes they do not understand. We investigate
this proposition with a novel experiment in which we use an interactive
prediction task to analyze the impact of interpretability and outcome feedback
on trust in AI and on human performance in AI-assisted prediction tasks. We
find that interpretability led to no robust improvements in trust, while
outcome feedback had a significantly greater and more reliable effect. However,
both factors had modest effects on participants' task performance. Our findings
suggest that (1) factors receiving significant attention, such as
interpretability, may be less effective at increasing trust than factors like
outcome feedback, and (2) augmenting human performance via AI systems may not
be a simple matter of increasing trust in AI, as increased trust is not always
associated with equally sizable improvements in performance. These findings
invite the research community to focus not only on methods for generating
interpretations but also on techniques for ensuring that interpretations impact
trust and performance in practice.
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems [11.690126756498223]
The vision of optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems.
In practice, the performance disparity of machine learning models on out-of-distribution data makes dataset-specific performance feedback unreliable.
arXiv Detail & Related papers (2024-09-22T09:43:27Z)
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in daily life underscores the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI acts as a regulator and could significantly control the level of AI diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems [13.484359389266864]
This paper addresses whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems.
DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance.
We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems.
arXiv Detail & Related papers (2023-01-25T14:26:10Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Trust in AI and Its Role in the Acceptance of AI Technologies [12.175031903660972]
This paper examines the role of trust in the intention to use AI technologies.
Study 1 examined the role of trust in the use of AI voice assistants based on survey responses from college students.
Study 2, using data from a representative sample of the U.S. population, examined different dimensions of trust.
arXiv Detail & Related papers (2022-03-23T19:18:19Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that a confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.