A Mental Model Based Theory of Trust
- URL: http://arxiv.org/abs/2301.12569v1
- Date: Sun, 29 Jan 2023 22:36:37 GMT
- Title: A Mental Model Based Theory of Trust
- Authors: Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati
- Abstract summary: We propose a mental model based theory of trust that can be used to infer trust.
We then use the theory to define trust evolution, human reliance and decision making, and to formalize the appropriate level of trust in the agent.
- Score: 31.14516396625931
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Handling trust is one of the core requirements for facilitating effective
interaction between the human and the AI agent. Thus, any decision-making
framework designed to work with humans must possess the ability to estimate and
leverage human trust. In this paper, we propose a mental model based theory of
trust that not only can be used to infer trust, thus providing an alternative
to psychological or behavioral trust inference methods, but also can be used as
a foundation for any trust-aware decision-making framework. First, we
introduce what trust means according to our theory and then use the theory to
define trust evolution, human reliance and decision making, and to formalize
the appropriate level of trust in the agent. Using human subject studies, we
compare our theory against one of the most common trust scales (the Muir scale)
to evaluate 1) whether the observations from the human studies match our proposed
theory and 2) which aspects of trust are more closely aligned with our proposed theory.
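To make the moving parts of such a theory concrete, the following is a minimal, hypothetical Python sketch of how trust inference, trust evolution, and a reliance decision could fit together. The Beta-style belief update and the monitoring-cost comparison are illustrative assumptions, not the formalization given in the paper.
```python
# Minimal illustrative sketch (not the paper's exact formalization): trust is
# treated as the human's subjective probability that the agent's behavior will
# be acceptable under the human's mental model of the agent. The update rule
# and decision rule below are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class TrustState:
    successes: int = 1   # pseudo-counts for a Beta-style belief
    failures: int = 1

    @property
    def trust(self) -> float:
        """Point estimate of trust: expected probability of acceptable behavior."""
        return self.successes / (self.successes + self.failures)

    def update(self, outcome_acceptable: bool) -> None:
        """Trust evolution: revise the belief after observing the agent's behavior."""
        if outcome_acceptable:
            self.successes += 1
        else:
            self.failures += 1

def should_rely(state: TrustState, cost_of_monitoring: float,
                cost_of_unchecked_failure: float) -> bool:
    """Reliance decision: rely (skip monitoring) when the expected cost of an
    unchecked failure is lower than the cost of monitoring the agent."""
    expected_failure_cost = (1.0 - state.trust) * cost_of_unchecked_failure
    return expected_failure_cost < cost_of_monitoring

# Example: after mostly successful interactions, monitoring stops paying off.
state = TrustState()
for observed in [True, True, True, False, True, True]:
    state.update(observed)
print(round(state.trust, 2), should_rely(state, cost_of_monitoring=1.0,
                                         cost_of_unchecked_failure=3.0))
```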
Related papers
- Can Large Language Model Agents Simulate Human Trust Behavior? [81.45930976132203]
We investigate whether Large Language Model (LLM) agents can simulate human trust behavior.
GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior.
We also probe the biases of agent trust and differences in agent trust towards other LLM agents and humans.
arXiv Detail & Related papers (2024-02-07T03:37:19Z) - Towards Machines that Trust: AI Agents Learn to Trust in the Trust Game [11.788352764861369]
We present a theoretical analysis of the trust game, the canonical task for studying trust in behavioral and brain sciences.
Specifically, leveraging reinforcement learning to train our AI agents, we investigate learning trust under various parameterizations of this task.
Our theoretical analysis, corroborated by the simulation results presented, provides a mathematical basis for the emergence of trust in the trust game.
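For readers unfamiliar with the task, the sketch below spells out the standard payoff structure of the one-shot trust game, in which the amount sent proxies trust and the amount returned proxies trustworthiness; the endowment and multiplier values are common defaults in the literature, not parameters taken from this paper.
```python
# Payoff structure of the canonical (one-shot) trust game; endowment and
# multiplier are the commonly used defaults, not values from the paper above.
def trust_game(sent: float, returned: float,
               endowment: float = 10.0, multiplier: float = 3.0):
    """Return (trustor_payoff, trustee_payoff) for one round.

    The trustor sends `sent` (a proxy for trust); the transfer is multiplied,
    and the trustee returns `returned` (a proxy for trustworthiness).
    """
    assert 0.0 <= sent <= endowment
    received = multiplier * sent
    assert 0.0 <= returned <= received
    trustor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff

# Full trust met with an equal split leaves both better off than no trust.
print(trust_game(sent=10.0, returned=15.0))  # (15.0, 15.0)
print(trust_game(sent=0.0, returned=0.0))    # (10.0, 0.0)
```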
arXiv Detail & Related papers (2023-12-20T09:32:07Z) - Decoding trust: A reinforcement learning perspective [11.04265850036115]
Behavioral experiments on the trust game have shown that trust and trustworthiness are universal among human beings.
We turn to the paradigm of reinforcement learning, where individuals update their strategies by evaluating the long-term return through accumulated experience.
In the pairwise scenario, we reveal that high levels of trust and trustworthiness emerge when individuals appreciate both their historical experience and future returns.
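As an illustration of this reinforcement-learning view, the hypothetical sketch below has a trustor learn how much to send by updating action values from accumulated payoffs against a partner that reciprocates a fixed fraction; the tabular epsilon-greedy rule and all parameter values are assumptions made for illustration, not the paper's setup.
```python
# Illustrative sketch: a trustor learns how much to send in the trust game by
# updating action values from accumulated returns (epsilon-greedy, tabular).
import random

ACTIONS = [0, 2, 4, 6, 8, 10]       # possible amounts to send (endowment = 10)
MULTIPLIER = 3.0
RETURN_FRACTION = 0.5               # the partner returns half of what it receives
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 5000

q = {a: 0.0 for a in ACTIONS}
random.seed(0)
for _ in range(EPISODES):
    # epsilon-greedy choice of how much to trust (send)
    if random.random() < EPSILON:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    payoff = 10 - a + RETURN_FRACTION * MULTIPLIER * a   # round payoff
    q[a] += ALPHA * (payoff - q[a])                      # incremental value update

print(max(q, key=q.get))  # with a trustworthy partner, high trust (sending 10) pays off
```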
arXiv Detail & Related papers (2023-09-26T01:06:29Z) - Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
Trust is a key motivation in developing explainable artificial intelligence (XAI).
Distrust seems relatively understudied in XAI.
Psychometric evidence favors a distinction between trust and distrust.
arXiv Detail & Related papers (2023-03-29T07:14:54Z) - Designing for Responsible Trust in AI Systems: A Communication
Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Trust in AI: Interpretability is not necessary or sufficient, while
black-box interaction is necessary and sufficient [0.0]
The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning.
We draw from statistical learning theory and sociological lenses on human-automation trust to motivate an AI-as-tool framework.
We clarify the role of interpretability in trust with a ladder of model access.
arXiv Detail & Related papers (2022-02-10T19:59:23Z) - Insights into Fairness through Trust: Multi-scale Trust Quantification
for Financial Deep Learning [94.65749466106664]
A fundamental aspect of fairness that has not been explored in financial deep learning is the concept of trust.
We conduct multi-scale trust quantification on a deep neural network for the purpose of credit card default prediction.
arXiv Detail & Related papers (2020-11-03T19:05:07Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and
Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
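A minimal sketch of what such a 'contractual trust' formalization could look like in code, assuming a contract can be expressed as a predicate over individual interactions; the function names and the tolerance-based notion of warranted trust are illustrative assumptions, not the paper's definitions.
```python
# Hypothetical sketch: a contract is a predicate over the AI's behavior, and
# trust with respect to that contract is the user's anticipated probability
# that the contract will hold. The "warranted" check is purely illustrative.
from typing import Callable, Iterable

Contract = Callable[[str, str], bool]   # (input, ai_output) -> did the contract hold?

def contract_satisfaction_rate(contract: Contract,
                               interactions: Iterable[tuple[str, str]]) -> float:
    """Empirical rate at which the AI upheld the contract."""
    interactions = list(interactions)
    return sum(contract(x, y) for x, y in interactions) / len(interactions)

def trust_is_warranted(anticipated: float, observed_rate: float,
                       tolerance: float = 0.1) -> bool:
    """Trust is (roughly) warranted when the user's anticipation tracks reality."""
    return abs(anticipated - observed_rate) <= tolerance

# Example contract: the answer never exceeds 20 words.
concise: Contract = lambda prompt, answer: len(answer.split()) <= 20
logs = [("q1", "short answer"), ("q2", "another short answer"), ("q3", "ok")]
rate = contract_satisfaction_rate(concise, logs)
print(rate, trust_is_warranted(anticipated=0.9, observed_rate=rate))
```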
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - How Much Can We Really Trust You? Towards Simple, Interpretable Trust
Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
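In the spirit of such behaviour-based trust quantification, here is a simplified, hypothetical stand-in for a question-answer trust metric: the model earns trust for being confident when it is right and loses trust for being confident when it is wrong, averaged over a question set. This is not the exact set of metrics proposed in the paper.
```python
# Simplified illustrative trust metric over a set of answered questions.
def question_answer_trust(confidences, predictions, labels) -> float:
    """Average per-question trust in [0, 1]."""
    scores = []
    for conf, pred, label in zip(confidences, predictions, labels):
        if pred == label:
            scores.append(conf)          # confident and correct -> high trust
        else:
            scores.append(1.0 - conf)    # confident but wrong -> low trust
    return sum(scores) / len(scores)

# A model that is sure of a wrong answer is penalized more than an unsure one.
print(question_answer_trust([0.9, 0.9, 0.6], ["cat", "dog", "cat"],
                            ["cat", "cat", "cat"]))  # ~0.53
```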
arXiv Detail & Related papers (2020-09-12T17:37:36Z)