More Similar Values, More Trust? -- the Effect of Value Similarity on
Trust in Human-Agent Interaction
- URL: http://arxiv.org/abs/2105.09222v1
- Date: Wed, 19 May 2021 16:06:46 GMT
- Title: More Similar Values, More Trust? -- the Effect of Value Similarity on
Trust in Human-Agent Interaction
- Authors: Siddharth Mehrotra, Catholijn M. Jonker, Myrthe L. Tielman
- Abstract summary: This paper studies how human and agent Value Similarity (VS) influences a human's trust in that agent.
In a scenario-based experiment, 89 participants teamed up with five different agents, which were designed with varying levels of value similarity to those of the participants.
Our results show that agents rated as having more similar values also scored higher on trust, indicating a positive relationship between the two.
- Score: 6.168444105072466
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As AI systems are increasingly involved in decision making, it also becomes
important that they elicit appropriate levels of trust from their users. To
achieve this, it is first important to understand which factors influence trust
in AI. We identify that a research gap exists regarding the role of personal
values in trust in AI. Therefore, this paper studies how human and agent Value
Similarity (VS) influences a human's trust in that agent. To explore this, 89
participants teamed up with five different agents, which were designed with
varying levels of value similarity to that of the participants. In a
within-subjects, scenario-based experiment, agents gave suggestions on what to
do when entering the building to save a hostage. We analyzed the agent's scores
on subjective value similarity, trust and qualitative data from open-ended
questions. Our results show that agents rated as having more similar values
also scored higher on trust, indicating a positive relationship between the two. With
this result, we add to the existing understanding of human-agent trust by
providing insight into the role of value similarity.
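To make the within-subjects design concrete, the sketch below simulates hypothetical 7-point ratings for 89 participants and five agents and fits a mixed-effects model with a per-participant random intercept. Both the synthetic data and the choice of model are illustrative assumptions; the abstract does not state which statistical analysis the paper actually used.

```python
# Illustrative sketch only: synthetic ratings and a mixed-effects model chosen
# to show one way of testing a within-subjects value-similarity/trust link.
# This is not the analysis reported in the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_agents = 89, 5  # matches the experiment's sample sizes

rows = []
for p in range(n_participants):
    baseline = rng.normal(0.0, 0.5)           # per-participant trust baseline
    for a in range(n_agents):
        vs = rng.uniform(1, 7)                # subjective value-similarity rating (1-7)
        trust = 2.0 + 0.4 * vs + baseline + rng.normal(0.0, 0.7)  # assumed positive link
        rows.append({"participant": p, "agent": a,
                     "value_similarity": vs, "trust": trust})

df = pd.DataFrame(rows)

# Random intercept per participant accounts for the repeated measures
# (each participant rates all five agents).
model = smf.mixedlm("trust ~ value_similarity", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```

In such a model, a positive and significant coefficient on value_similarity would correspond to the positive relationship between value similarity and trust reported above.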
Related papers
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
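Read literally, the true-criticality definition above admits a compact formalization; the notation below (return G from state s, action set \mathcal{A}, policy \pi) is an assumption of this sketch, not taken from the paper:

```latex
% Assumed notation: G = return accumulated from state s, \mathcal{A} = action set,
% \pi = the agent's policy. Not taken from the paper.
C_n(s) \;=\; \mathbb{E}_{\pi}\!\left[ G \mid s \right]
       \;-\; \mathbb{E}\!\left[ G \mid s,\; a_t,\dots,a_{t+n-1} \sim \mathrm{Uniform}(\mathcal{A}),\ \text{then follow } \pi \right]
```

Proxy criticality, as described, would then be any lower-overhead statistic that is monotonically related to C_n(s).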
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust [16.140485357046707]
We developed and validated a set of 27-item semantic differential scales for affective and cognitive trust.
Our empirical findings showed how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents.
arXiv Detail & Related papers (2024-07-25T18:55:33Z)
- Can Large Language Model Agents Simulate Human Trust Behavior? [81.45930976132203]
We investigate whether Large Language Model (LLM) agents can simulate human trust behavior.
GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior.
We also probe the biases of agent trust and differences in agent trust towards other LLM agents and humans.
arXiv Detail & Related papers (2024-02-07T03:37:19Z)
- DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement
Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with those of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Why not both? Complementing explanations with uncertainty, and the role
of self-confidence in Human-AI collaboration [12.47276164048813]
We conduct an empirical study to identify how uncertainty estimates and model explanations affect users' reliance, understanding, and trust towards a model.
We also discuss how users' self-confidence may distort the outcome of an analysis based on agreement and switching percentages.
arXiv Detail & Related papers (2023-04-27T12:24:33Z)
- Trust and Transparency in Recommender Systems [0.0]
We first go through different understandings and measurements of trust in the AI and RS community, such as demonstrated and perceived trust.
We then review the relationships between trust and transparency, as well as mental models, and investigate different strategies to achieve transparency in RS.
arXiv Detail & Related papers (2023-04-17T09:09:48Z)
- Trust and Reliance in XAI -- Distinguishing Between Attitudinal and
Behavioral Measures [0.0]
Researchers argue that AI should be more transparent to increase trust, making transparency one of the main goals of XAI.
However, empirical research on this topic is inconclusive regarding the effect of transparency on trust.
We advocate for a clear distinction between behavioral (objective) measures of reliance and attitudinal (subjective) measures of trust.
arXiv Detail & Related papers (2022-03-23T10:39:39Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and
Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.