Trust in Human-AI Interaction: Scoping Out Models, Measures, and Methods
- URL: http://arxiv.org/abs/2205.00189v1
- Date: Sat, 30 Apr 2022 07:34:19 GMT
- Title: Trust in Human-AI Interaction: Scoping Out Models, Measures, and Methods
- Authors: Takane Ueno, Yuto Sawa, Yeongdae Kim, Jacqueline Urakami, Hiroki Oura,
Katie Seaborn
- Abstract summary: Trust has emerged as a key factor in people's interactions with AI-infused systems.
Little is known about what models of trust have been used and for what systems.
There is yet no known standard approach to measuring trust in AI.
- Score: 12.641141743223377
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Trust has emerged as a key factor in people's interactions with AI-infused
systems. Yet, little is known about what models of trust have been used and for
what systems: robots, virtual characters, smart vehicles, decision aids, or
others. Moreover, there is yet no known standard approach to measuring trust in
AI. This scoping review maps out the state of affairs on trust in human-AI
interaction (HAII) from the perspectives of models, measures, and methods.
Findings suggest that trust is an important and multi-faceted topic of study
within HAII contexts. However, most work is under-theorized and under-reported,
generally not using established trust models and missing details about methods,
especially Wizard of Oz. We offer several targets for systematic review work as
well as a research agenda for combining the strengths and addressing the
weaknesses of the current literature.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust [16.140485357046707]
We developed and validated a set of 27-item semantic differential scales for affective and cognitive trust.
Our empirical findings showed how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents.
arXiv Detail & Related papers (2024-07-25T18:55:33Z)
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in our daily lives underscores the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI acts as a regulator and can significantly control the level of AI's diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- A Review of the Role of Causality in Developing Trustworthy AI Systems [16.267806768096026]
State-of-the-art AI models largely lack an understanding of the cause-effect relationship that governs human understanding of the real world.
Recently, causal modeling and inference methods have emerged as powerful tools to improve the trustworthiness aspects of AI models.
arXiv Detail & Related papers (2023-02-14T11:08:26Z)
- Are Neural Topic Models Broken? [81.15470302729638]
We study the relationship between automated and human evaluation of topic models.
We find that neural topic models fare worse in both respects compared to an established classical method.
arXiv Detail & Related papers (2022-10-28T14:38:50Z)
- Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence [4.385390451313721]
Showing confidence scores in human-agent interaction systems can help build trust between humans and AI systems.
Most existing research has used the confidence score only as a form of communication.
This paper presents two methods for understanding model confidence using counterfactual explanations.
arXiv Detail & Related papers (2022-06-06T04:04:28Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- LaMDA: Language Models for Dialog Applications [75.75051929981933]
LaMDA is a family of Transformer-based neural language models specialized for dialog.
Fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements.
arXiv Detail & Related papers (2022-01-20T15:44:37Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven, multi-faceted trust modeling approach that incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- Statistical Perspectives on Reliability of Artificial Intelligence Systems [6.284088451820049]
We provide statistical perspectives on the reliability of AI systems.
We introduce a so-called SMART statistical framework for AI reliability research.
We discuss recent developments in modeling and analysis of AI reliability.
arXiv Detail & Related papers (2021-11-09T20:00:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.