How Much Can We Really Trust You? Towards Simple, Interpretable Trust
Quantification Metrics for Deep Neural Networks
- URL: http://arxiv.org/abs/2009.05835v3
- Date: Sat, 3 Apr 2021 15:08:50 GMT
- Title: How Much Can We Really Trust You? Towards Simple, Interpretable Trust
Quantification Metrics for Deep Neural Networks
- Authors: Alexander Wong, Xiao Yu Wang, and Andrew Hryniowski
- Abstract summary: We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
- Score: 94.65749466106664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A critical step to building trustworthy deep neural networks is trust
quantification, where we ask the question: How much can we trust a deep neural
network? In this study, we take a step towards simple, interpretable metrics
for trust quantification by introducing a suite of metrics for assessing the
overall trustworthiness of deep neural networks based on their behaviour when
answering a set of questions. We conduct a thought experiment and explore two
key questions about trust in relation to confidence: 1) How much trust do we
have in actors who give wrong answers with great confidence? and 2) How much
trust do we have in actors who give right answers hesitantly? Based on insights
gained, we introduce the concept of question-answer trust to quantify
trustworthiness of an individual answer based on confident behaviour under
correct and incorrect answer scenarios, and the concept of trust density to
characterize the distribution of overall trust for an individual answer
scenario. We further introduce the concept of trust spectrum for representing
overall trust with respect to the spectrum of possible answer scenarios across
correctly and incorrectly answered questions. Finally, we introduce
NetTrustScore, a scalar metric summarizing overall trustworthiness. The suite
of metrics aligns with past social psychology studies on the relationship
between trust and confidence. Leveraging these metrics, we
quantify the trustworthiness of several well-known deep neural network
architectures for image recognition to get a deeper understanding of where
trust breaks down. The proposed metrics are by no means perfect, but the hope
is to push the conversation towards better metrics to help guide practitioners
and regulators in producing, deploying, and certifying deep learning solutions
that can be trusted to operate in real-world, mission-critical scenarios.
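To make the abstract's definitions concrete, below is a minimal Python sketch of how question-answer trust and NetTrustScore might be computed from softmax confidences. It is an illustration consistent with the description above, not the paper's reference implementation: the exponents alpha and beta, the function names, and the aggregation by simple averaging are all assumptions.

import numpy as np

def question_answer_trust(confidence, predicted, oracle, alpha=1.0, beta=1.0):
    """Trust in each individual answer: confident correct answers earn
    high trust, confident wrong answers earn low trust. The exponents
    alpha and beta are illustrative knobs (assumed, not taken verbatim
    from the paper)."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(predicted) == np.asarray(oracle)
    # Reward confidence when right; penalize confidence when wrong.
    return np.where(correct, confidence ** alpha, (1.0 - confidence) ** beta)

def net_trust_score(trust_values):
    """Scalar summary of overall trustworthiness: here, the mean
    question-answer trust over all answered questions."""
    return float(np.mean(trust_values))

# Three questions: two answered correctly, one confidently wrong.
conf = [0.95, 0.60, 0.99]   # softmax confidence in the predicted class
pred = [1, 0, 2]
truth = [1, 0, 1]
qat = question_answer_trust(conf, pred, truth)
print(qat)                  # [0.95 0.60 0.01] -- the confident error scores lowest
print(net_trust_score(qat)) # ~0.52

A histogram or kernel density estimate of these per-answer trust values over many questions would roughly correspond to the trust density described above, and computing it per answer scenario would yield the trust spectrum.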
Related papers
- Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
Trust is a key motivation in developing explainable artificial intelligence (XAI).
Distrust seems relatively understudied in XAI.
Psychometric evidence favors a distinction between trust and distrust.
arXiv Detail & Related papers (2023-03-29T07:14:54Z)
- KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks [63.531790269009704]
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Given the attendant risks and uncertainty, a crucial and urgent problem is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2023-02-22T14:24:45Z)
- TrustGNN: Graph Neural Network based Trust Evaluation via Learnable Propagative and Composable Nature [63.78619502896071]
Trust evaluation is critical for many applications such as cyber security, social communication and recommender systems.
We propose a new GNN-based trust evaluation method named TrustGNN, which smartly integrates the propagative and composable nature of trust graphs.
Specifically, TrustGNN designs specific propagative patterns for different propagative processes of trust, and distinguishes the contribution of different propagative processes to create new trust.
arXiv Detail & Related papers (2022-05-25T13:57:03Z)
- On the Importance of Trust in Next-Generation Networked CPS Systems: An AI Perspective [2.1055643409860734]
We propose trust as a measure to evaluate the status of network agents and improve the decision-making process.
Trust relations are based on evidence created by the interactions of entities within a protocol.
We show how utilizing the trust evidence can improve the performance and the security of Federated Learning.
arXiv Detail & Related papers (2021-04-16T02:12:13Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- Towards Time-Aware Context-Aware Deep Trust Prediction in Online Social Networks [0.4061135251278187]
Trust can be defined as a measure to determine which source of information is reliable and with whom we should share or from whom we should accept information.
There are several applications for trust in Online Social Networks (OSNs), including social spammer detection, fake news detection, retweet behaviour detection and recommender systems.
Trust prediction is the process of predicting a new trust relation between two users who are not currently connected.
arXiv Detail & Related papers (2020-03-21T01:00:02Z)