Humans, AI, and Context: Understanding End-Users' Trust in a Real-World
Computer Vision Application
- URL: http://arxiv.org/abs/2305.08598v1
- Date: Mon, 15 May 2023 12:27:02 GMT
- Title: Humans, AI, and Context: Understanding End-Users' Trust in a Real-World
Computer Vision Application
- Authors: Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, and Andrés Monroy-Hernández
- Abstract summary: We provide a holistic and nuanced understanding of trust in AI through a qualitative case study of a real-world computer vision application.
We find participants perceived the app as trustworthy and trusted it, but selectively accepted app outputs after engaging in verification behaviors.
We discuss the implications of our findings and provide recommendations for future research on trust in AI.
- Score: 22.00514030715286
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trust is an important factor in people's interactions with AI systems.
However, there is a lack of empirical studies examining how real end-users
trust or distrust the AI system they interact with. Most research investigates
one aspect of trust in lab settings with hypothetical end-users. In this paper,
we provide a holistic and nuanced understanding of trust in AI through a
qualitative case study of a real-world computer vision application. We report
findings from interviews with 20 end-users of a popular, AI-based bird
identification app where we inquired about their trust in the app from many
angles. We find participants perceived the app as trustworthy and trusted it,
but selectively accepted app outputs after engaging in verification behaviors,
and decided against app adoption in certain high-stakes scenarios. We also find
domain knowledge and context are important factors for trust-related assessment
and decision-making. We discuss the implications of our findings and provide
recommendations for future research on trust in AI.
Related papers
- Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust [16.140485357046707]
We developed and validated a 27-item set of semantic differential scales for affective and cognitive trust.
Our empirical findings showed how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents.
arXiv Detail & Related papers (2024-07-25T18:55:33Z)
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing diffusion of artificial intelligence (AI) systems into our daily lives underscores the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI acts as a regulator and could significantly control the level of this diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Trust in AI and Its Role in the Acceptance of AI Technologies [12.175031903660972]
This paper explains the role of trust in the intention to use AI technologies.
Study 1 examined the role of trust in the use of AI voice assistants based on survey responses from college students.
Study 2, using data from a representative sample of the U.S. population, examined different dimensions of trust.
arXiv Detail & Related papers (2022-03-23T19:18:19Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Trustworthy AI [4.670305538969914]
Trustworthy AI ups the ante on both trustworthy computing and formal methods.
Inspired by decades of progress in trustworthy computing, we suggest what trustworthy properties would be desired of AI systems.
arXiv Detail & Related papers (2020-02-14T22:45:36Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)