On the Relation of Trust and Explainability: Why to Engineer for
Trustworthiness
- URL: http://arxiv.org/abs/2108.05379v2
- Date: Fri, 13 Aug 2021 10:35:05 GMT
- Title: On the Relation of Trust and Explainability: Why to Engineer for
Trustworthiness
- Authors: Lena K\"astner, Markus Langer, Veronika Lazar, Astrid Schom\"acker,
Timo Speith, Sarah Sterz
- Abstract summary: One of the primary motivators for explainability requirements is that explainability is expected to facilitate stakeholders' trust in a system.
Recent psychological studies indicate that explanations do not necessarily facilitate trust.
We argue that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, requirements for the explainability of software systems have gained
prominence. One of the primary motivators for such requirements is that
explainability is expected to facilitate stakeholders' trust in a system.
Although this seems intuitively appealing, recent psychological studies
indicate that explanations do not necessarily facilitate trust. Thus,
explainability requirements might not be suitable for promoting trust.
One way to accommodate this finding is, we suggest, to focus on
trustworthiness instead of trust. While these two may come apart, we ideally
want both: a trustworthy system and the stakeholder's trust. In this paper, we
argue that even though trustworthiness does not automatically lead to trust,
there are several reasons to engineer primarily for trustworthiness -- and that
a system's explainability can crucially contribute to its trustworthiness.
Related papers
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - Did You Mean...? Confidence-based Trade-offs in Semantic Parsing [52.28988386710333]
- Did You Mean...? Confidence-based Trade-offs in Semantic Parsing [52.28988386710333]
We show how a calibrated model can help balance common trade-offs in task-oriented parsing.
We then examine how confidence scores can help optimize the trade-off between usability and safety.
arXiv Detail & Related papers (2023-03-29T17:07:26Z) - Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
- Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
Trust is a key motivation in developing explainable artificial intelligence (XAI).
Distrust seems relatively understudied in XAI.
Psychometric evidence favors a distinction between trust and distrust.
arXiv Detail & Related papers (2023-03-29T07:14:54Z) - Are we measuring trust correctly in explainability, interpretability,
and transparency research? [4.452019519213712]
This paper showcases three methods that effectively measure perceived and demonstrated trust.
It is intended as a starting point for discussion on this topic, rather than the final say.
arXiv Detail & Related papers (2022-08-31T07:41:08Z) - Designing for Responsible Trust in AI Systems: A Communication
Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Relativistic Conceptions of Trustworthiness: Implications for the
Trustworthy Status of National Identification Systems [1.4728207711693404]
This article outlines a new account of trustworthiness, dubbed the expectation-oriented account.
To be trustworthy, we suggest, is to minimize the error associated with trustor expectations in situations of social dependency.
In addition to outlining the features of the expectation-oriented account, we describe some of the implications of this account for the design, development, and management of trustworthy NISs.
arXiv Detail & Related papers (2021-12-17T18:40:44Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - Where Does Trust Break Down? A Quantitative Trust Analysis of Deep
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of a trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z) - How Much Can We Really Trust You? Towards Simple, Interpretable Trust
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)