Where Does Trust Break Down? A Quantitative Trust Analysis of Deep
Neural Networks via Trust Matrix and Conditional Trust Densities
- URL: http://arxiv.org/abs/2009.14701v1
- Date: Wed, 30 Sep 2020 14:33:43 GMT
- Title: Where Does Trust Break Down? A Quantitative Trust Analysis of Deep
Neural Networks via Trust Matrix and Conditional Trust Densities
- Authors: Andrew Hryniowski, Xiao Yu Wang, and Alexander Wong
- Abstract summary: We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
- Score: 94.65749466106664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advances and successes in deep learning in recent years have led to
considerable efforts and investments into its widespread adoption
for a wide variety of applications, ranging from personal assistants and
intelligent navigation to search and product recommendation in e-commerce. With
this tremendous rise in deep learning adoption comes questions about the
trustworthiness of the deep neural networks that power these applications.
Motivated to answer such questions, there has been a very recent interest in
trust quantification. In this work, we introduce the concept of trust matrix, a
novel trust quantification strategy that leverages the recently introduced
question-answer trust metric by Wong et al. to provide deeper, more detailed
insights into where trust breaks down for a given deep neural network given a
set of questions. More specifically, a trust matrix defines the expected
question-answer trust for a given actor-oracle answer scenario, allowing one to
quickly spot areas of low trust that need to be addressed to improve the
trustworthiness of a deep neural network. The proposed trust matrix is simple
to calculate, human-interpretable, and to the best of the authors' knowledge
is the first to study trust at the actor-oracle answer level. We further extend
the concept of trust densities with the notion of conditional trust densities.
We experimentally leverage trust matrices to study several well-known deep
neural network architectures for image recognition, and further study the trust
density and conditional trust densities for an interesting actor-oracle answer
scenario. The results illustrate that trust matrices, along with conditional
trust densities, can be useful tools in addition to the existing suite of trust
quantification metrics for guiding practitioners and regulators in creating and
certifying deep learning solutions for trusted operation.
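To make these quantities concrete, below is a minimal Python sketch of how a trust matrix and a conditional trust density could be computed. It assumes the reward-penalty form of the question-answer trust metric of Wong et al. (confidence raised to a power alpha when the actor's answer matches the oracle's, one minus confidence raised to beta when it does not) and a simple histogram density estimate; all function and variable names are illustrative, not from the paper.

```python
# Minimal sketch of the trust-matrix pipeline described in the abstract.
# Assumptions (not from the paper): the alpha/beta reward-penalty form of
# question-answer trust, and a histogram-based conditional density.
import numpy as np

def question_answer_trust(confidences, predictions, oracle, alpha=1.0, beta=1.0):
    """Per-question trust: reward confidence when the actor's answer matches
    the oracle's, penalize confidence when it does not (assumed metric form)."""
    conf = confidences[np.arange(len(predictions)), predictions]
    correct = predictions == oracle
    return np.where(correct, conf ** alpha, (1.0 - conf) ** beta)

def trust_matrix(trust, predictions, oracle, num_classes):
    """T[i, j] = expected question-answer trust over questions whose oracle
    answer is class i and whose actor (network) answer is class j."""
    T = np.full((num_classes, num_classes), np.nan)
    for i in range(num_classes):
        for j in range(num_classes):
            mask = (oracle == i) & (predictions == j)
            if mask.any():
                T[i, j] = trust[mask].mean()
    return T

def conditional_trust_density(trust, predictions, oracle, i, j, bins=20):
    """Histogram estimate of the trust density conditioned on the
    actor-oracle answer scenario (oracle = i, actor = j)."""
    scenario = trust[(oracle == i) & (predictions == j)]
    return np.histogram(scenario, bins=bins, range=(0.0, 1.0), density=True)

# Toy usage: 1000 questions, 10 classes, softmax-style confidences.
rng = np.random.default_rng(0)
n, num_classes = 1000, 10
logits = rng.normal(size=(n, num_classes))
confidences = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
predictions = confidences.argmax(axis=1)
oracle = rng.integers(0, num_classes, size=n)

trust = question_answer_trust(confidences, predictions, oracle)
T = trust_matrix(trust, predictions, oracle, num_classes)
density, bin_edges = conditional_trust_density(trust, predictions, oracle, i=3, j=5)
print(np.round(T, 3))
```

Scanning the off-diagonal entries of T for low values is the "where does trust break down" step: each such entry pins the loss of trust to a specific oracle-class/actor-class confusion, which the matching conditional trust density then characterizes in full.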
Related papers
- Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z)
- TrustGuard: GNN-based Robust and Explainable Trust Evaluation with Dynamicity Support [59.41529066449414]
We propose TrustGuard, a GNN-based accurate trust evaluation model that supports trust dynamicity.
TrustGuard is designed with a layered architecture that contains a snapshot input layer, a spatial aggregation layer, a temporal aggregation layer, and a prediction layer.
Experiments show that TrustGuard outperforms state-of-the-art GNN-based trust evaluation models in trust prediction across both single-timeslot and multi-timeslot settings (a schematic sketch of such a layered pipeline follows this list).
arXiv Detail & Related papers (2023-06-23T07:39:12Z)
- Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
Trust is a key motivation in developing explainable artificial intelligence (XAI).
Distrust seems relatively understudied in XAI.
Psychometric evidence favors a distinction between trust and distrust.
arXiv Detail & Related papers (2023-03-29T07:14:54Z)
- KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks [63.531790269009704]
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Due to the risks and uncertainty involved, a crucial and urgent problem to be settled is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2022-05-25T13:57:03Z)
- TrustGNN: Graph Neural Network based Trust Evaluation via Learnable Propagative and Composable Nature [63.78619502896071]
Trust evaluation is critical for many applications such as cyber security, social communication and recommender systems.
We propose a new GNN based trust evaluation method named TrustGNN, which smartly integrates the propagative and composable nature of trust graphs.
Specifically, TrustGNN designs specific propagative patterns for different propagative processes of trust, and distinguishes the contribution of different propagative processes to create new trust.
arXiv Detail & Related papers (2021-04-16T02:12:13Z)
- On the Importance of Trust in Next-Generation Networked CPS Systems: An AI Perspective [2.1055643409860734]
We propose trust as a measure to evaluate the status of network agents and improve the decision-making process.
Trust relations are based on evidence created by the interactions of entities within a protocol.
We show how utilizing trust evidence can improve the performance and security of Federated Learning (a generic trust-weighted aggregation sketch follows this list).
arXiv Detail & Related papers (2021-04-16T02:12:13Z)
- Insights into Fairness through Trust: Multi-scale Trust Quantification for Financial Deep Learning [94.65749466106664]
A fundamental aspect of fairness that has not been explored in financial deep learning is the concept of trust.
We conduct multi-scale trust quantification on a deep neural network for the purpose of credit card default prediction.
arXiv Detail & Related papers (2020-11-03T19:05:07Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
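For the TrustGuard entry above, here is a schematic of the layered flow its summary describes (snapshot input, spatial aggregation, temporal aggregation, prediction), using a plain mean-over-raters spatial step and exponentially decayed temporal weights; this illustrates the layering only and is not TrustGuard's actual GNN layers.

```python
# Schematic of a layered dynamic-trust pipeline (snapshot input ->
# spatial aggregation -> temporal aggregation -> prediction).
# Simple mean/decay operators stand in for TrustGuard's learned layers.
import numpy as np

def spatial_aggregate(snapshot):
    """Spatial layer (schematic): average the trust each node receives from
    its raters in one snapshot; snapshot[u, v] is u's rating of v, NaN if none."""
    rated = ~np.isnan(snapshot)
    sums = np.where(rated, snapshot, 0.0).sum(axis=0)
    counts = rated.sum(axis=0)
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.5)  # neutral prior

def temporal_aggregate(per_slot_scores, decay=0.5):
    """Temporal layer (schematic): exponentially decayed average over
    per-snapshot scores, weighting recent timeslots more heavily."""
    weights = decay ** np.arange(len(per_slot_scores))[::-1]
    stacked = np.stack(per_slot_scores)
    return (weights[:, None] * stacked).sum(axis=0) / weights.sum()

def predict_trust(scores, u, v):
    """Prediction layer (toy): squash the aggregated scores of a node pair."""
    return 1.0 / (1.0 + np.exp(-(scores[u] + scores[v])))

# Three timeslot snapshots of a 4-node trust graph with sparse ratings.
rng = np.random.default_rng(1)
snapshots = [np.where(rng.random((4, 4)) < 0.5, rng.random((4, 4)), np.nan)
             for _ in range(3)]
scores = temporal_aggregate([spatial_aggregate(s) for s in snapshots])
print(predict_trust(scores, u=0, v=2))
```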
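And for the trust-in-networked-CPS entry: one generic way trust evidence can harden Federated Learning is to weight each client's model update by its trust score, so that low-trust (possibly compromised) agents contribute little or nothing. The sketch below shows this standard trust-weighted averaging idea; it is not necessarily the mechanism used in that paper.

```python
# Trust-weighted federated averaging: a generic illustration of using
# trust scores to down-weight or drop suspicious clients' updates.
import numpy as np

def trust_weighted_average(updates, trust_scores, min_trust=0.2):
    """Aggregate client model updates, weighting by trust score and
    dropping clients below a trust threshold entirely."""
    trust = np.asarray(trust_scores, dtype=float)
    weights = np.where(trust >= min_trust, trust, 0.0)
    if weights.sum() == 0:
        raise ValueError("no client meets the trust threshold")
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Three clients; client 2 submits a poisoned (scaled-up) update and has low trust.
rng = np.random.default_rng(2)
honest = [rng.normal(size=5) * 0.01 for _ in range(2)]
poisoned = rng.normal(size=5) * 10.0
updates = honest + [poisoned]
print(trust_weighted_average(updates, trust_scores=[0.9, 0.8, 0.1]))
```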
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.