Converging Measures and an Emergent Model: A Meta-Analysis of
Human-Automation Trust Questionnaires
- URL: http://arxiv.org/abs/2303.13799v1
- Date: Fri, 24 Mar 2023 04:42:49 GMT
- Title: Converging Measures and an Emergent Model: A Meta-Analysis of
Human-Automation Trust Questionnaires
- Authors: Yosef S. Razin and Karen M. Feigh
- Abstract summary: We identify the most frequently cited and best-validated human-automation and human-robot trust questionnaires.
We demonstrate a convergent experimentally validated model of human-automation trust.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A significant challenge to measuring human-automation trust is the amount of
construct proliferation, models, and questionnaires with highly variable
validation. However, all agree that trust is a crucial element of technological
acceptance, continued usage, fluency, and teamwork. Herein, we synthesize a
consensus model for trust in human-automation interaction by performing a
meta-analysis of validated and reliable trust survey instruments. To accomplish
this objective, this work identifies the most frequently cited and
best-validated human-automation and human-robot trust questionnaires, as well
as the most well-established factors, which form the dimensions and antecedents
of such trust. To reduce both confusion and construct proliferation, we provide
a detailed mapping of terminology between questionnaires. Furthermore, we
perform a meta-analysis of the regression models that emerged from those
experiments which used multi-factorial survey instruments. Based on this
meta-analysis, we demonstrate a convergent experimentally validated model of
human-automation trust. This convergent model establishes an integrated
framework for future research. It identifies the current boundaries of trust
measurement and where further investigation is necessary. We close by
discussing choosing and designing an appropriate trust survey instrument. By
comparing, mapping, and analyzing well-constructed trust survey instruments, a
consensus structure of trust in human-automation interaction is identified.
In doing so, a more complete and widely applicable basis for measuring trust
emerges. It integrates the academic idea of trust with the
colloquial, common-sense one. Given the increasingly recognized importance of
trust, especially in human-automation interaction, this work leaves us better
positioned to understand and measure it.
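The meta-analysis pools regression coefficients reported across independent trust studies. A standard way to do this is inverse-variance (fixed-effect) weighting, sketched below; the function name and the coefficient values are illustrative assumptions, not figures from the paper.

```python
import math

def pool_fixed_effect(betas, ses):
    """Inverse-variance weighted (fixed-effect) pooling of standardized
    regression coefficients reported by independent studies.

    betas: per-study coefficient estimates
    ses:   per-study standard errors of those estimates
    Returns the pooled estimate and its standard error.
    """
    # Each study is weighted by the inverse of its sampling variance,
    # so more precise studies contribute more to the pooled estimate.
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical coefficients linking, say, perceived reliability to trust,
# as reported by three studies (values are illustrative only).
betas = [0.42, 0.51, 0.38]
ses = [0.08, 0.11, 0.06]
pooled, pooled_se = pool_fixed_effect(betas, ses)
```

The pooled estimate lands between the per-study values, with a standard error smaller than any single study's, which is what allows a convergent model to emerge from heterogeneous experiments.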
Related papers
- Enhancing Answer Reliability Through Inter-Model Consensus of Large Language Models [1.6874375111244329]
We explore the collaborative dynamics of an innovative language model interaction system involving advanced models.
These models generate and answer complex, PhD-level statistical questions without exact ground-truth answers.
Our study investigates how inter-model consensus enhances the reliability and precision of responses.
arXiv Detail & Related papers (2024-11-25T10:18:17Z)
- Bayesian Methods for Trust in Collaborative Multi-Agent Autonomy [11.246557832016238]
In safety-critical and contested environments, adversaries may infiltrate and compromise a number of agents.
We analyze state of the art multi-target tracking algorithms under this compromised agent threat model.
We design a trust estimation framework using hierarchical Bayesian updating.
arXiv Detail & Related papers (2024-03-25T17:17:35Z)
- A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction [19.137907393497848]
Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners.
Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communication.
This paper presents a systematic review to identify current practices in building appropriate trust, different ways to measure it, types of tasks used, and potential challenges associated with it.
arXiv Detail & Related papers (2023-11-08T12:19:58Z)
- KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks [63.531790269009704]
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Given these risks and uncertainties, a crucial and urgent problem is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2023-02-22T14:24:45Z)
- Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence [4.385390451313721]
Showing confidence scores in human-agent interaction systems can help build trust between humans and AI systems.
Most existing research only used the confidence score as a form of communication.
This paper presents two methods for understanding model confidence using counterfactual explanation.
arXiv Detail & Related papers (2022-06-06T04:04:28Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Trust in Human-AI Interaction: Scoping Out Models, Measures, and Methods [12.641141743223377]
Trust has emerged as a key factor in people's interactions with AI-infused systems.
Little is known about what models of trust have been used and for what systems.
There is yet no known standard approach to measuring trust in AI.
arXiv Detail & Related papers (2022-04-30T07:34:19Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.