A Diachronic Perspective on User Trust in AI under Uncertainty
- URL: http://arxiv.org/abs/2310.13544v1
- Date: Fri, 20 Oct 2023 14:41:46 GMT
- Title: A Diachronic Perspective on User Trust in AI under Uncertainty
- Authors: Shehzaad Dhuliawala, Vilém Zouhar, Mennatallah El-Assady, Mrinmaya Sachan
- Abstract summary: Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
- Score: 52.44939679369428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a human-AI collaboration, users build a mental model of the AI system
based on its reliability and on how it presents its decisions, e.g., its
reported confidence and the explanations it provides for its outputs. Modern NLP
systems are often uncalibrated, resulting in confidently incorrect predictions
that undermine user trust. In order to build trustworthy AI, we must understand
how user trust is developed and how it can be regained after potential
trust-eroding events. We study the evolution of user trust in response to these
trust-eroding events using a betting game. We find that even a few incorrect
instances with inaccurate confidence estimates damage user trust and
performance, with very slow recovery. We also show that this degradation in
trust reduces the success of human-AI collaboration and that different types of
miscalibration -- unconfidently correct and confidently incorrect -- have
different negative effects on user trust. Our findings highlight the importance
of calibration in user-facing AI applications and shed light on what aspects
help users decide whether to trust the AI system.
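
The betting-game protocol and the two miscalibration types can be made concrete with a small simulation. Below is a minimal sketch, assuming a points-based bet, a linear trust update, and a slow recovery rate; these mechanics are illustrative assumptions rather than the paper's actual experimental design, and only mimic the reported dynamics (a few confidently incorrect trials cause a sharp drop in trust that recovers slowly).

```python
import random

def run_betting_game(events, trust=0.5, n_trials=50, seed=0):
    """Simulate a user betting on AI answers over a sequence of trials.

    `events` maps a trial index to a miscalibration type, either
    'confidently_incorrect' or 'unconfidently_correct'; all other
    trials are treated as reasonably well calibrated.
    """
    rng = random.Random(seed)
    points, history = 0.0, []
    for t in range(n_trials):
        kind = events.get(t, "calibrated")
        if kind == "confidently_incorrect":
            confidence, correct = 0.95, False
        elif kind == "unconfidently_correct":
            confidence, correct = 0.15, True
        else:
            # Well-calibrated trial: stated confidence roughly matches accuracy.
            confidence = rng.uniform(0.6, 0.9)
            correct = rng.random() < confidence
        # The user's bet combines the AI's stated confidence with accumulated trust,
        # so unconfidently correct answers forfeit potential gains (small bets),
        # while confidently incorrect answers cost the most points.
        bet = 10.0 * trust * confidence
        points += bet if correct else -bet
        # Trust drops sharply after a confident error and recovers only slowly,
        # a stylised version of the slow-recovery finding.
        if correct:
            trust = min(1.0, trust + 0.02)
        else:
            trust = max(0.0, trust - 0.25 * confidence)
        history.append((t, kind, round(trust, 3), round(points, 1)))
    return history

if __name__ == "__main__":
    # Inject two confidently incorrect trials early on and inspect the recovery.
    for row in run_betting_game({3: "confidently_incorrect", 4: "confidently_incorrect"}):
        print(row)
```

In this toy model the trust penalty scales with the stated confidence, which is one simple way to express why confident errors damage both trust and joint performance more than unconfident correctness does.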
Related papers
- Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z)
- Overconfident and Unconfident AI Hinder Human-AI Collaboration [5.480154202794587]
This study examines the effects of uncalibrated AI confidence on users' trust in AI, AI advice adoption, and collaboration outcomes.
A lack of trust calibration support exacerbates this issue by making uncalibrated confidence harder to detect.
Our findings highlight the importance of AI confidence calibration for enhancing human-AI collaboration.
arXiv Detail & Related papers (2024-02-12T13:16:30Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
- Towards Time-Aware Context-Aware Deep Trust Prediction in Online Social Networks [0.4061135251278187]
Trust can be defined as a measure of which sources of information are reliable, with whom we should share information, and from whom we should accept it.
There are several applications for trust in Online Social Networks (OSNs), including social spammer detection, fake news detection, retweet behaviour detection and recommender systems.
Trust prediction is the process of predicting a new trust relation between two users who are not currently connected.
arXiv Detail & Related papers (2020-03-21T01:00:02Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)