The Many Facets of Trust in AI: Formalizing the Relation Between Trust
and Fairness, Accountability, and Transparency
- URL: http://arxiv.org/abs/2208.00681v1
- Date: Mon, 1 Aug 2022 08:26:57 GMT
- Title: The Many Facets of Trust in AI: Formalizing the Relation Between Trust
and Fairness, Accountability, and Transparency
- Authors: Bran Knowles, John T. Richards, Frens Kroeger
- Abstract summary: Efforts to promote fairness, accountability, and transparency are assumed to be critical in fostering Trust in AI (TAI).
The lack of exposition on trust itself suggests that trust is commonly understood, uncomplicated, or even uninteresting.
Our analysis of TAI publications reveals numerous orientations which differ in terms of who is doing the trusting (agent), in what (object), on the basis of what (basis), in order to what (objective), and why (impact).
- Score: 4.003809001962519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efforts to promote fairness, accountability, and transparency are assumed to
be critical in fostering Trust in AI (TAI), but extant literature is
frustratingly vague regarding this 'trust'. The lack of exposition on trust
itself suggests that trust is commonly understood, uncomplicated, or even
uninteresting. But is it? Our analysis of TAI publications reveals numerous
orientations which differ in terms of who is doing the trusting (agent), in
what (object), on the basis of what (basis), in order to what (objective), and
why (impact). We develop an ontology that encapsulates these key axes of
difference to a) illuminate seeming inconsistencies across the literature and
b) more effectively manage a dizzying number of TAI considerations. We then
reflect this ontology through a corpus of publications exploring fairness,
accountability, and transparency to examine the variety of ways that TAI is
considered within and between these approaches to promoting trust.
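The abstract names five axes along which TAI orientations differ: agent, object, basis, objective, and impact. As a hedged illustration only (not an artifact of the paper), those axes could be captured as a small record type for coding publications; the field values, the `source` field, and the example instance below are assumptions added here for concreteness.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrustOrientation:
    """One orientation toward Trust in AI (TAI), following the paper's five axes."""
    agent: str          # who is doing the trusting (e.g., an end user, a regulator)
    trust_object: str   # in what the trust is placed (e.g., a model, a deploying organization)
    basis: str          # on the basis of what (e.g., transparency, accountability mechanisms)
    objective: str      # in order to what (e.g., adoption, appropriate reliance)
    impact: str         # why it matters (e.g., societal acceptance of AI)
    source: Optional[str] = None  # e.g., an arXiv identifier for the publication being coded


# Hypothetical usage: coding one publication's implied orientation.
orientation = TrustOrientation(
    agent="end user",
    trust_object="an AI-assisted decision system",
    basis="transparency of model behavior",
    objective="calibrated reliance on system outputs",
    impact="fewer harmful over- or under-reliance errors",
    source="arXiv:2208.00681",
)
print(orientation)
```

In this spirit, each publication in a corpus could be coded with one or more such orientations, which is how the paper uses its ontology to compare fairness, accountability, and transparency work.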
Related papers
- TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness [58.721012475577716]
Large Language Models (LLMs) have demonstrated impressive capabilities across various domains, prompting a surge in their practical applications.
This paper introduces TrustScore, a framework based on the concept of Behavioral Consistency, which evaluates whether an LLM's response aligns with its intrinsic knowledge.
arXiv Detail & Related papers (2024-02-19T21:12:14Z)
- TrustLLM: Trustworthiness in Large Language Models [446.5640421311468]
This paper introduces TrustLLM, a comprehensive study of trustworthiness in large language models (LLMs).
We first propose a set of principles for trustworthy LLMs that span eight different dimensions.
Based on these principles, we establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics.
arXiv Detail & Related papers (2024-01-10T22:07:21Z)
- A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction [19.137907393497848]
Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners.
Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communication.
This paper presents a systematic review to identify current practices in building appropriate trust, different ways to measure it, types of tasks used, and potential challenges associated with it.
arXiv Detail & Related papers (2023-11-08T12:19:58Z)
- Trust and Transparency in Recommender Systems [0.0]
We first go through different understandings and measurements of trust in the AI and RS communities, such as demonstrated and perceived trust.
We then review the relationships between trust and transparency, as well as mental models, and investigate different strategies to achieve transparency in RS.
arXiv Detail & Related papers (2023-04-17T09:09:48Z)
- Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
Trust is a key motivation in developing explainable artificial intelligence (XAI).
Distrust seems relatively understudied in XAI.
However, psychometric evidence favors a distinction between trust and distrust.
arXiv Detail & Related papers (2023-03-29T07:14:54Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Trust and Reliance in XAI -- Distinguishing Between Attitudinal and Behavioral Measures [0.0]
Researchers argue that AI should be more transparent to increase trust, making transparency one of the main goals of XAI.
However, empirical research on this topic is inconclusive regarding the effect of transparency on trust.
We advocate for a clear distinction between behavioral (objective) measures of reliance and attitudinal (subjective) measures of trust.
arXiv Detail & Related papers (2022-03-23T10:39:39Z)
- Relativistic Conceptions of Trustworthiness: Implications for the Trustworthy Status of National Identification Systems [1.4728207711693404]
This article outlines a new account of trustworthiness, dubbed the expectation-oriented account.
To be trustworthy, we suggest, is to minimize the error associated with trustor expectations in situations of social dependency.
In addition to outlining the features of the expectation-oriented account, we describe some of the implications of this account for the design, development, and management of trustworthy NISs.
arXiv Detail & Related papers (2021-12-17T18:40:44Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of a trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
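The trust matrix entry above describes a quantification strategy: the expected question-answer trust for each actor-oracle answer scenario. Below is a minimal sketch of that idea. The per-sample question-answer trust formula (confidence rewarded when the actor matches the oracle, penalized otherwise), the alpha/beta exponents, and all toy data are assumptions for illustration, not the cited paper's exact definitions.

```python
import numpy as np


def question_answer_trust(correct: np.ndarray, confidence: np.ndarray,
                          alpha: float = 1.0, beta: float = 1.0) -> np.ndarray:
    """Per-sample trust: confidence when the actor agrees with the oracle,
    (1 - confidence) when it does not (one plausible formulation)."""
    return np.where(correct, confidence ** alpha, (1.0 - confidence) ** beta)


def trust_matrix(oracle: np.ndarray, predicted: np.ndarray,
                 confidence: np.ndarray, num_classes: int) -> np.ndarray:
    """Entry [i, j]: mean question-answer trust over samples where the oracle
    answer is class i and the actor (model) answered class j."""
    qa = question_answer_trust(oracle == predicted, confidence)
    T = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        for j in range(num_classes):
            mask = (oracle == i) & (predicted == j)
            T[i, j] = qa[mask].mean() if mask.any() else 0.0
    return T


# Toy example: a 3-class problem with synthetic predictions and confidences.
rng = np.random.default_rng(0)
oracle = rng.integers(0, 3, size=200)
predicted = np.where(rng.random(200) < 0.8, oracle, rng.integers(0, 3, size=200))
confidence = np.clip(rng.normal(0.75, 0.1, size=200), 0.0, 1.0)
print(trust_matrix(oracle, predicted, confidence, num_classes=3))
```

Row and column slices of such a matrix indicate where trust concentrates or breaks down across actor-oracle answer scenarios, in the spirit of the conditional trust densities mentioned above.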
This list is automatically generated from the titles and abstracts of the papers on this site.