From the Head or the Heart? An Experimental Design on the Impact of
Explanation on Cognitive and Affective Trust
- URL: http://arxiv.org/abs/2110.03433v1
- Date: Thu, 7 Oct 2021 13:15:34 GMT
- Authors: Qiaoning Zhang, X. Jessie Yang, Lionel P. Robert Jr
- Abstract summary: This study investigates the effectiveness of explanations on both cognitive and affective trust.
We expect these results to be of great significance in designing AV explanations to promote AV trust.
- Score: 13.274877222689168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated vehicles (AVs) are social robots that can potentially
benefit our society. According to the existing literature, AV explanations
can promote passengers' trust by reducing the uncertainty associated with
the AV's reasoning and actions. However, the literature on AV explanations
and trust has failed to consider how the type of trust - cognitive versus
affective - might alter this relationship. Yet, the existing literature has
shown that the implications associated with trust vary widely depending on
whether it is cognitive or affective. To address this shortcoming and better
understand the impacts of explanations on trust in AVs, we designed a study
to investigate the effectiveness of explanations on both cognitive and
affective trust. We expect these results to be of great significance in
designing AV explanations to promote AV trust.
Related papers
- Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning [7.106124530294562]
Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption.
We use machine learning to understand the most important factors that contribute to young adult trust.
arXiv Detail & Related papers (2024-09-13T16:52:24Z)
- What Did My Car Say? Impact of Autonomous Vehicle Explanation Errors and Driving Context On Comfort, Reliance, Satisfaction, and Driving Confidence [7.623776951753322]
We tested how autonomous vehicle (AV) explanation errors affected a passenger's comfort in relying on an AV.
Despite identical driving, explanation errors reduced ratings of the AV's driving ability.
Prior trust and expertise were positively associated with outcome ratings.
arXiv Detail & Related papers (2024-09-09T15:41:53Z)
- Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust [16.140485357046707]
We developed and validated a 27-item set of semantic differential scales for affective and cognitive trust.
Our empirical findings showed how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents.
arXiv Detail & Related papers (2024-07-25T18:55:33Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Adversarial Visual Robustness by Causal Intervention [56.766342028800445]
Adversarial training is currently the most promising defense against adversarial examples.
Yet, its passive nature inevitably prevents it from being immune to unknown attackers.
We provide a causal viewpoint of adversarial vulnerability: the cause is a confounder that is ubiquitous in learning.
arXiv Detail & Related papers (2021-06-17T14:23:54Z)
- A Study on the Manifestation of Trust in Speech [12.057694908317991]
We explore the feasibility of automatically detecting the level of trust that a user has in a virtual assistant (VA) based on their speech.
We developed a novel protocol for collecting speech data from subjects induced to have different degrees of trust in the skills of a VA.
We show clear evidence that the protocol succeeded in inducing in subjects the desired mental state of either trusting or distrusting the agent's skills.
arXiv Detail & Related papers (2021-02-09T13:08:54Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.