Whether to trust: the ML leap of faith
- URL: http://arxiv.org/abs/2408.00786v2
- Date: Thu, 23 Jan 2025 17:53:31 GMT
- Title: Whether to trust: the ML leap of faith
- Authors: Tory Frame, Julian Padget, George Stothart, Elizabeth Coulthard,
- Abstract summary: The Leap of Faith (LoF) is taken when a user decides to rely on machine learning (ML).
The LoF matrix captures the alignment between an ML model and a human expert's mental model.
For the first time, we propose trust metrics that evaluate whether users demonstrate trust through their actions rather than self-reported intent.
- Score: 0.0
- Abstract: Human trust is a prerequisite to trustworthy AI adoption, yet trust remains poorly understood. Trust is often described as an attitude, but attitudes cannot be reliably measured or managed. Additionally, humans frequently conflate trust in an AI system, its machine learning (ML) technology, and its other component parts. Without fully understanding the 'leap of faith' involved in trusting ML, users cannot develop intrinsic trust in these systems. A common approach to building trust is to explain an ML model's reasoning process. However, such explanations often fail to resonate with non-experts due to the inherent complexity of ML systems and because such explanations are disconnected from users' own (unarticulated) mental models. This work puts forward an innovative way of directly building intrinsic trust in ML, by discerning and measuring the Leap of Faith (LoF) taken when a user decides to rely on ML. The LoF matrix captures the alignment between an ML model and a human expert's mental model. This match is rigorously and practically identified by feeding the user's data and objective function into both an ML agent and an expert-validated rules-based agent: a verified point of reference that can be tested a priori against a user's own mental model. This represents a new class of neuro-symbolic architecture. The LoF matrix reveals to the user the distance that constitutes the leap of faith between the rules-based and ML agents. For the first time, we propose trust metrics that evaluate whether users demonstrate trust through their actions rather than self-reported intent, and whether such trust is deserved based on outcomes. The significance of the contribution is that it enables empirical assessment and management of ML trust drivers to support trustworthy ML adoption. The approach is illustrated through a long-term, high-stakes field study: a 3-month pilot of a multi-agent sleep-improvement system.
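The LoF construction and the action-based trust metrics described in the abstract can be sketched in a few lines. The sketch below is a minimal illustration only, assuming hypothetical recommendation labels and helper names (`lof_matrix`, `demonstrated_trust`, `warranted_trust`); the paper's actual agents, objective functions, and outcome measures may differ.

```python
# Illustrative sketch of the LoF-matrix idea: cross-tabulate the ML agent's
# recommendations against an expert-validated rules-based agent's recommendations
# on the same inputs, then score trust from the user's actions and outcomes.
# All labels and helper names here are hypothetical, not the paper's code.
from collections import Counter
from typing import Sequence


def lof_matrix(ml_recs: Sequence[str], rule_recs: Sequence[str]) -> Counter:
    """Cross-tabulation of (rules-based, ML) recommendation pairs.

    Agreement sits on the diagonal; the off-diagonal mass is the distance
    the user must bridge with a leap of faith.
    """
    return Counter(zip(rule_recs, ml_recs))


def demonstrated_trust(ml_recs, user_actions):
    """Share of ML recommendations the user actually followed (trust as action)."""
    followed = sum(a == r for a, r in zip(user_actions, ml_recs))
    return followed / len(ml_recs)


def warranted_trust(ml_recs, user_actions, outcomes):
    """Among followed ML recommendations, the share that led to a good outcome."""
    followed = [o for a, r, o in zip(user_actions, ml_recs, outcomes) if a == r]
    if not followed:
        return float("nan")
    return sum(followed) / len(followed)


# Toy example: three nightly sleep-advice recommendations (hypothetical labels).
ml   = ["earlier_bedtime", "no_caffeine", "earlier_bedtime"]
rule = ["earlier_bedtime", "no_screens",  "earlier_bedtime"]
user = ["earlier_bedtime", "no_caffeine", "no_screens"]
good = [1, 1, 0]  # 1 = the user's objective improved after the action

print(lof_matrix(ml, rule))             # two agreements, one off-diagonal leap
print(demonstrated_trust(ml, user))     # 2/3 of ML advice was followed
print(warranted_trust(ml, user, good))  # 1.0 among the advice that was followed
```

The point of the sketch is the separation the paper argues for: the matrix measures alignment between the two agents, `demonstrated_trust` measures trust as behaviour rather than self-report, and `warranted_trust` asks whether that behaviour paid off in outcomes.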
Related papers
- On Verbalized Confidence Scores for LLMs [25.160810008907397]
Uncertainty quantification for large language models (LLMs) can help establish human trust in their responses.
This work focuses on asking the LLM itself to verbalize its uncertainty with a confidence score as part of its output tokens (a minimal parsing sketch follows this list).
We assess the reliability of verbalized confidence scores with respect to different datasets, models, and prompt methods.
arXiv Detail & Related papers (2024-12-19T11:10:36Z) - Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z) - Evaluation of Predictive Reliability to Foster Trust in Artificial
Intelligence. A case study in Multiple Sclerosis [0.34473740271026115]
Spotting Machine Learning failures is of paramount importance when ML predictions are used to drive clinical decisions.
We propose a simple approach that can be used in the deployment phase of any ML model to suggest whether to trust predictions or not.
Our method holds the promise to provide effective support to clinicians by spotting potential ML failures during deployment.
arXiv Detail & Related papers (2024-02-27T14:48:07Z) - TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness [58.721012475577716]
Large Language Models (LLMs) have demonstrated impressive capabilities across various domains, prompting a surge in their practical applications.
This paper introduces TrustScore, a framework based on the concept of Behavioral Consistency, which evaluates whether an LLM's response aligns with its intrinsic knowledge.
arXiv Detail & Related papers (2024-02-19T21:12:14Z) - A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - Designing for Responsible Trust in AI Systems: A Communication
Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Trust in AI: Interpretability is not necessary or sufficient, while
black-box interaction is necessary and sufficient [0.0]
The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning.
We draw from statistical learning theory and sociological lenses on human-automation trust to motivate an AI-as-tool framework.
We clarify the role of interpretability in trust with a ladder of model access.
arXiv Detail & Related papers (2022-02-10T19:59:23Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and
Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - Where Does Trust Break Down? A Quantitative Trust Analysis of Deep
Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of the trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z) - How Much Can We Really Trust You? Towards Simple, Interpretable Trust
Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
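As an aside on the first related paper above (verbalized confidence scores), here is a minimal sketch of eliciting and parsing a self-reported confidence value. The prompt suffix, regex, and normalisation are illustrative assumptions, not that paper's protocol; any LLM client can supply the response text.

```python
# Illustrative only: append an instruction asking the model to verbalize its
# confidence, then parse the score from the raw response text. The exact prompt
# wording and output format are assumptions for this sketch.
import re
from typing import Optional

CONFIDENCE_SUFFIX = (
    "\nAfter your answer, state your confidence as a number between 0 and 100 "
    "on a new line formatted as 'Confidence: <score>'."
)


def parse_verbalized_confidence(response: str) -> Optional[float]:
    """Extract a confidence in [0, 1] from a 'Confidence: <score>' line, if present."""
    match = re.search(r"Confidence:\s*(\d+(?:\.\d+)?)", response, re.IGNORECASE)
    if match is None:
        return None
    return min(max(float(match.group(1)) / 100.0, 0.0), 1.0)


# Example with a canned response (no model call is made here).
response = "Paris is the capital of France.\nConfidence: 95"
print(parse_verbalized_confidence(response))  # 0.95
```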