Towards Modeling Uncertainties of Self-explaining Neural Networks via Conformal Prediction
- URL: http://arxiv.org/abs/2401.01549v1
- Date: Wed, 3 Jan 2024 05:51:49 GMT
- Title: Towards Modeling Uncertainties of Self-explaining Neural Networks via Conformal Prediction
- Authors: Wei Qian, Chenxu Zhao, Yangyi Li, Fenglong Ma, Chao Zhang, Mengdi Huai
- Abstract summary: We propose a novel uncertainty modeling framework for self-explaining neural networks.
We show it provides strong distribution-free uncertainty modeling performance for the generated explanations.
It also excels in producing efficient and effective prediction sets for the final predictions.
- Score: 34.87646720253128
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the recent progress in deep neural networks (DNNs), it remains
challenging to explain the predictions made by DNNs. Existing explanation
methods for DNNs mainly focus on post-hoc explanations where another
explanatory model is employed to provide explanations. The fact that post-hoc
methods can fail to reveal the actual original reasoning process of DNNs raises
the need to build DNNs with built-in interpretability. Motivated by this, many
self-explaining neural networks have been proposed to generate not only
accurate predictions but also clear and intuitive insights into why a
particular decision was made. However, existing self-explaining networks are
limited in providing distribution-free uncertainty quantification for the two
simultaneously generated prediction outcomes (i.e., a sample's final prediction
and its corresponding explanations for interpreting that prediction).
Importantly, they also fail to establish a connection between the confidence
values assigned to the generated explanations in the interpretation layer and
those allocated to the final predictions in the ultimate prediction layer. To
tackle the aforementioned challenges, in this paper, we design a novel
uncertainty modeling framework for self-explaining networks, which not only
demonstrates strong distribution-free uncertainty modeling performance for the
generated explanations in the interpretation layer but also excels in producing
efficient and effective prediction sets for the final predictions based on the
informative high-level basis explanations. We provide a theoretical analysis of
the proposed framework, and an extensive experimental evaluation demonstrates
the effectiveness of the proposed uncertainty framework.
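For context, the distribution-free prediction sets mentioned in the abstract are typically built with conformal prediction. Below is a minimal sketch of split (inductive) conformal prediction for a generic classifier; it illustrates the calibration mechanism in general, not the paper's specific framework, and `model`, `calib_X`, `calib_y`, and `test_X` are hypothetical placeholders.

```python
# Minimal sketch of split conformal prediction for a generic classifier.
# Illustrates the generic mechanism behind distribution-free prediction sets;
# NOT the paper's framework. `model` is assumed to expose predict_proba.
import numpy as np

def conformal_prediction_sets(model, calib_X, calib_y, test_X, alpha=0.1):
    """Return label sets with roughly (1 - alpha) marginal coverage."""
    # Nonconformity score: 1 minus the softmax probability of the true label.
    calib_probs = model.predict_proba(calib_X)   # shape (n, num_classes)
    n = len(calib_y)
    scores = 1.0 - calib_probs[np.arange(n), calib_y]
    # Conformal quantile with the finite-sample correction ceil((n+1)(1-alpha))/n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # A candidate label joins the set when its score is below the threshold.
    test_probs = model.predict_proba(test_X)
    return [np.where(1.0 - probs <= qhat)[0] for probs in test_probs]
```

Under exchangeability of calibration and test data, sets built this way contain the true label with probability at least 1 - alpha, with no distributional assumptions; the paper's framework targets this style of guarantee for both the explanations in the interpretation layer and the final predictions.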
Related papers
- Towards Few-shot Self-explaining Graph Neural Networks [16.085176689122036]
We propose MSE-GNN, a novel framework that generates explanations to support predictions in few-shot settings.
MSE-GNN adopts a two-stage self-explaining structure, consisting of an explainer and a predictor.
We show that MSE-GNN can achieve superior performance on prediction tasks while generating high-quality explanations.
arXiv Detail & Related papers (2024-08-14T07:31:11Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the logic expressed in the explanation.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution (see the formula sketch below).
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
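For reference, the Nadaraya-Watson estimate of the conditional label distribution named above has the textbook form below, where K is a kernel and h a bandwidth; this is the generic estimator, not necessarily the exact variant used in NUQ.

```latex
% Nadaraya-Watson estimate of p(y | x): a kernel-weighted vote over the
% training labels. K is a kernel (e.g., Gaussian) and h is a bandwidth.
\[
  \hat{p}(y \mid x)
    = \frac{\sum_{i=1}^{n} K\!\left(\tfrac{x - x_i}{h}\right)\,\mathbf{1}\{y_i = y\}}
           {\sum_{i=1}^{n} K\!\left(\tfrac{x - x_i}{h}\right)}
\]
```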
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing an out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE), which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- Explaining Bayesian Neural Networks [11.296451806040796]
XAI aims to make advanced learning machines such as Deep Neural Networks (DNNs) more transparent in decision making.
BNNs already have a limited form of transparency (model transparency) built in through their prior weight distribution.
In this work, we bring together these two perspectives of transparency into a holistic explanation framework for explaining BNNs.
arXiv Detail & Related papers (2021-08-23T18:09:41Z)
- How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks [19.648814035399013]
Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks.
We propose a new framework that converts any explanation method for neural networks into an explanation method for Bayesian neural networks, as sketched below.
We demonstrate the effectiveness and usefulness of our approach extensively in various experiments.
arXiv Detail & Related papers (2020-06-16T08:54:42Z)
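The conversion idea above can be illustrated with a short sketch in the spirit of the paper's recipe: explain many deterministic networks drawn from the weight posterior, then aggregate the attributions. This is a hedged illustration, not the authors' code; `posterior.sample` and `saliency` are hypothetical placeholders for a posterior sampler and any standard attribution method.

```python
# Sketch: lift a deterministic explanation method to a Bayesian neural network
# by explaining posterior weight samples and aggregating. Hypothetical API:
# posterior.sample() returns one deterministic network drawn from p(w | data);
# saliency(net, x) returns a per-feature attribution vector for input x.
import numpy as np

def bayesian_explanation(posterior, saliency, x, n_samples=50):
    """Mean attribution and per-feature uncertainty over posterior samples."""
    attributions = np.stack([saliency(posterior.sample(), x)
                             for _ in range(n_samples)])  # (n_samples, n_features)
    # The mean acts as the Bayesian explanation; the standard deviation
    # quantifies how much the explanation varies across plausible models.
    return attributions.mean(axis=0), attributions.std(axis=0)
```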
This list is automatically generated from the titles and abstracts of the papers on this site.