Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and
Goals of Human Trust in AI
- URL: http://arxiv.org/abs/2010.07487v3
- Date: Wed, 20 Jan 2021 12:24:26 GMT
- Title: Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and
Goals of Human Trust in AI
- Authors: Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
- Abstract summary: We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
- Score: 55.4046755826066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trust is a central component of the interaction between people and AI, in
that 'incorrect' levels of trust may cause misuse, abuse or disuse of the
technology. But what, precisely, is the nature of trust in AI? What are the
prerequisites and goals of the cognitive mechanism of trust, and how can we
promote them, or assess whether they are being satisfied in a given
interaction? This work aims to answer these questions. We discuss a model of
trust inspired by, but not identical to, sociology's interpersonal trust (i.e.,
trust between people). This model rests on two key properties: the
vulnerability of the user, and the ability to anticipate the impact of the AI
model's decisions. We incorporate a formalization of 'contractual trust', such
that trust between a user and an AI is trust that some implicit or explicit
contract will hold, and a formalization of 'trustworthiness' (which detaches
from the notion of trustworthiness in sociology), and with it concepts of
'warranted' and 'unwarranted' trust. We then present the possible causes of
warranted trust as intrinsic reasoning and extrinsic behavior, and discuss how
to design trustworthy AI, how to evaluate whether trust has manifested, and
whether it is warranted. Finally, we elucidate the connection between trust and
XAI using our formalization.
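To make these notions concrete, below is a minimal, illustrative sketch in Python of how contractual trust and warranted trust could be encoded. It is not the authors' formal notation; the Contract class, the upholds predicate, and the toy classifier are hypothetical names introduced only for illustration.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class Contract:
        """An implicit or explicit expectation the user holds about the AI model."""
        description: str
        upholds: Callable[[object], bool]  # does a given model actually satisfy the expectation?

    def is_trustworthy(model: object, contract: Contract) -> bool:
        # Trustworthiness (in this sketch): the model in fact maintains the contract.
        return contract.upholds(model)

    def trust_is_warranted(user_trusts: bool, model: object, contract: Contract) -> bool:
        # Warranted trust: the user anticipates the contract will hold, and that
        # anticipation is backed by actual trustworthiness; trust without
        # trustworthiness would be unwarranted.
        return user_trusts and is_trustworthy(model, contract)

    # Usage: a toy contract that a classifier abstains on low-confidence inputs.
    class ToyClassifier:
        def predict(self, confidence: float) -> str:
            return "abstain" if confidence < 0.5 else "label"

    abstain_contract = Contract(
        description="Abstains on low-confidence inputs",
        upholds=lambda m: m.predict(0.2) == "abstain",
    )
    print(trust_is_warranted(user_trusts=True, model=ToyClassifier(), contract=abstain_contract))  # True

The point of the sketch is only that trust is evaluated relative to a specific contract, and that warranted trust ties the user's anticipation to the model's actual behaviour.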
Related papers
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in our daily lives underscores the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI acts as a regulator and could significantly control the extent of AI's diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust [0.0]
We argue for the need to distinguish these concepts more clearly.
We discuss that trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system.
arXiv Detail & Related papers (2023-09-19T05:00:34Z)
- Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
Trust is a key motivation in developing explainable artificial intelligence (XAI).
Distrust seems relatively understudied in XAI.
Psychometric evidence favors a distinction between trust and distrust.
arXiv Detail & Related papers (2023-03-29T07:14:54Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient [0.0]
The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning.
We draw from statistical learning theory and sociological lenses on human-automation trust to motivate an AI-as-tool framework.
We clarify the role of interpretability in trust with a ladder of model access.
arXiv Detail & Related papers (2022-02-10T19:59:23Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
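As a rough illustration of the behaviour-based trust quantification idea above (not the specific metrics proposed in that paper), one could score a network by rewarding confidence on correctly answered questions and penalizing confidence on incorrect ones; the function name and weighting below are assumptions made purely for the example.

    from typing import List, Tuple

    def toy_trust_score(answers: List[Tuple[bool, float]]) -> float:
        """answers: (is_correct, confidence) pairs collected over a set of questions."""
        if not answers:
            return 0.0
        # Confident correct answers raise the score; confident mistakes lower it.
        per_question = [conf if correct else (1.0 - conf) for correct, conf in answers]
        return sum(per_question) / len(per_question)

    # Example: confidently right twice, confidently wrong once.
    print(toy_trust_score([(True, 0.9), (True, 0.8), (False, 0.95)]))  # ~0.58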
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.