More Similar Values, More Trust? -- the Effect of Value Similarity on
Trust in Human-Agent Interaction
- URL: http://arxiv.org/abs/2105.09222v1
- Date: Wed, 19 May 2021 16:06:46 GMT
- Authors: Siddharth Mehrotra, Catholijn M. Jonker, Myrthe L. Tielman
- Abstract summary: This paper studies how human and agent Value Similarity (VS) influences a human's trust in that agent.
In a scenario-based experiment, 89 participants teamed up with five different agents, which were designed with varying levels of value similarity to that of the participants.
Our results show that agents rated as having more similar values also scored higher on trust, indicating a positive relationship between the two.
- Score: 6.168444105072466
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As AI systems are increasingly involved in decision making, it also becomes
important that they elicit appropriate levels of trust from their users. To
achieve this, it is first important to understand which factors influence trust
in AI. We identify that a research gap exists regarding the role of personal
values in trust in AI. Therefore, this paper studies how human and agent Value
Similarity (VS) influences a human's trust in that agent. To explore this, 89
participants teamed up with five different agents, which were designed with
varying levels of value similarity to that of the participants. In a
within-subjects, scenario-based experiment, agents gave suggestions on what to
do when entering the building to save a hostage. We analyzed the agent's scores
on subjective value similarity, trust and qualitative data from open-ended
questions. Our results show that agents rated as having more similar values
also scored higher on trust, indicating a positive relationship between the two. With
this result, we add to the existing understanding of human-agent trust by
providing insight into the role of value-similarity.
Related papers
- ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models [53.00812898384698]
We argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking.
We highlight how cognitive biases can conflate fluent information and truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert.
We propose the ConSiDERS-The-Human evaluation framework consisting of six pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.
arXiv Detail & Related papers (2024-05-28T22:45:28Z)
- Can Large Language Model Agents Simulate Human Trust Behaviors? [75.69583811834073]
Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science.
In this paper, we focus on one of the most critical behaviors in human interactions, trust, and aim to investigate whether or not LLM agents can simulate human trust behaviors.
arXiv Detail & Related papers (2024-02-07T03:37:19Z)
- DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with that of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Why not both? Complementing explanations with uncertainty, and the role of self-confidence in Human-AI collaboration [12.47276164048813]
We conduct an empirical study to identify how uncertainty estimates and model explanations affect users' reliance, understanding, and trust towards a model.
We also discuss how the latter may distort the outcome of an analysis based on agreement and switching percentages.
arXiv Detail & Related papers (2023-04-27T12:24:33Z)
- Trust and Transparency in Recommender Systems [0.0]
We first go through different understandings and measurements of trust in the AI and RS community, such as demonstrated and perceived trust.
We then review the relationships between trust and transparency, as well as mental models, and investigate different strategies to achieve transparency in RS.
arXiv Detail & Related papers (2023-04-17T09:09:48Z)
- My Actions Speak Louder Than Your Words: When User Behavior Predicts Their Beliefs about Agents' Attributes [5.893351309010412]
Behavioral science suggests that people sometimes rely on irrelevant information when forming judgments.
We identify an instance of this phenomenon: users who experienced better outcomes in a human-agent interaction -- outcomes that were the result of their own behavior -- systematically rated the same agent as having better abilities, being more benevolent, and exhibiting greater integrity in a post hoc assessment than users who experienced worse outcomes.
Our analyses suggest the need for augmentation of models so that they account for such biased perceptions as well as mechanisms so that agents can detect and even actively work to correct this and similar biases of users.
arXiv Detail & Related papers (2023-01-21T21:26:32Z)
- Trust and Reliance in XAI -- Distinguishing Between Attitudinal and Behavioral Measures [0.0]
Researchers argue that AI should be more transparent to increase trust, making transparency one of the main goals of XAI.
However, empirical research on this topic is inconclusive regarding the effect of transparency on trust.
We advocate for a clear distinction between behavioral (objective) measures of reliance and attitudinal (subjective) measures of trust.
arXiv Detail & Related papers (2022-03-23T10:39:39Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.