QUCE: The Minimisation and Quantification of Path-Based Uncertainty for Generative Counterfactual Explanations
- URL: http://arxiv.org/abs/2402.17516v3
- Date: Mon, 29 Apr 2024 19:57:16 GMT
- Title: QUCE: The Minimisation and Quantification of Path-Based Uncertainty for Generative Counterfactual Explanations
- Authors: Jamie Duell, Monika Seisenberger, Hsuan Fu, Xiuyi Fan
- Abstract summary: Quantified Uncertainty Counterfactual Explanations (QUCE) is a method designed to minimize path uncertainty.
We show that QUCE quantifies uncertainty when presenting explanations and generates more certain counterfactual examples.
We showcase the performance of the QUCE method by comparing it with competing methods for both path-based explanations and generative counterfactual examples.
- Score: 1.649938899766112
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) stand out as one of the most prominent approaches within the Machine Learning (ML) domain. The efficacy of DNNs has surged alongside recent increases in computational capacity, allowing these approaches to scale to significant complexities for addressing predictive challenges in big data. However, as the complexity of DNN models rises, interpretability diminishes. In response to this challenge, explainable models such as Adversarial Gradient Integration (AGI) leverage path-based gradients provided by DNNs to elucidate their decisions. Yet the performance of path-based explainers can be compromised when gradients exhibit irregularities during out-of-distribution path traversal. In this context, we introduce Quantified Uncertainty Counterfactual Explanations (QUCE), a method designed to mitigate out-of-distribution traversal by minimizing path uncertainty. QUCE not only quantifies uncertainty when presenting explanations but also generates more certain counterfactual examples. We showcase the performance of the QUCE method by comparing it with competing methods for both path-based explanations and generative counterfactual examples.
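Path-based explainers of the kind the abstract references accumulate gradients along a path from a baseline to the input. As a point of reference, here is a minimal sketch of the fixed straight-line special case (plain Integrated Gradients) in PyTorch; QUCE itself goes further by learning the path so that it avoids out-of-distribution regions, which this sketch does not implement:
```python
import torch

def integrated_gradients(model, x, baseline, steps=50):
    """Straight-line path attribution (Integrated Gradients).

    Accumulates model gradients at interpolation points between
    `baseline` and `x`. QUCE replaces the fixed straight line with a
    learned, uncertainty-minimizing path; this sketch shows only the
    fixed-path special case.
    """
    grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps + 1)[1:]:
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        model(point).sum().backward()
        grads += point.grad
    # Average path gradient, scaled by the input-baseline difference
    return (x - baseline) * grads / steps
```
The gradient irregularities the abstract mentions arise exactly at interpolation points that fall off the data manifold, which is what motivates penalizing path uncertainty instead of fixing the path in advance.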
Related papers
- Unveiling the Statistical Foundations of Chain-of-Thought Prompting Methods [59.779795063072655]
Chain-of-Thought (CoT) prompting and its variants have gained popularity as effective methods for solving multi-step reasoning problems.
We analyze CoT prompting from a statistical estimation perspective, providing a comprehensive characterization of its sample complexity.
arXiv Detail & Related papers (2024-08-25T04:07:18Z)
- Probabilistically Rewired Message-Passing Neural Networks [41.554499944141654]
Message-passing graph neural networks (MPNNs) have emerged as powerful tools for processing graph-structured input.
MPNNs operate on a fixed input graph structure, ignoring potential noise and missing information.
We devise probabilistically rewired MPNNs (PR-MPNNs) which learn to add relevant edges while omitting less beneficial ones.
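To make the idea concrete, here is a hedged sketch of one way to sample a sparse set of edges from learned scores using Gumbel perturbations; this is illustrative only, not PR-MPNNs' exact sampling parameterization:
```python
import torch

def sample_rewired_edges(edge_logits, k, tau=1.0):
    """Pick k candidate edges from learned scores (illustrative
    Gumbel top-k relaxation, not PR-MPNNs' exact scheme).

    edge_logits: (num_candidates,) learned score per candidate edge.
    Returns a {0,1}-valued mask that behaves like a soft sample in
    the backward pass (straight-through trick).
    """
    u = torch.rand_like(edge_logits).clamp(1e-9, 1 - 1e-9)
    gumbel = -torch.log(-torch.log(u))
    perturbed = (edge_logits + gumbel) / tau
    hard = torch.zeros_like(edge_logits)
    hard[perturbed.topk(k).indices] = 1.0
    soft = torch.softmax(perturbed, dim=0)
    # Forward: discrete mask; backward: gradients flow through `soft`
    return hard + soft - soft.detach()
```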
arXiv Detail & Related papers (2023-10-03T15:43:59Z)
- INDigo: An INN-Guided Probabilistic Diffusion Algorithm for Inverse Problems [31.693710075183844]
We propose a method that combines invertible neural networks (INN) and diffusion models for general inverse problems.
Specifically, we train the forward process of INN to simulate an arbitrary degradation process and use the inverse as a reconstruction process.
Our algorithm effectively estimates the details lost in the degradation process and is no longer limited by the requirement of knowing the closed-form expression of the degradation model.
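A minimal sketch of the invertibility this relies on, using an additive coupling block; the block structure here is an assumption for illustration, and INDigo's actual INN architecture and diffusion coupling are more involved:
```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """A minimal invertible coupling block (sketch, not INDigo's
    architecture; assumes an even feature dimension). The forward
    pass can be trained to mimic a degradation process; because the
    block is invertible, the exact inverse is available for
    reconstruction."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, dim // 2),
                                 nn.ReLU(),
                                 nn.Linear(dim // 2, dim // 2))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1, x2 + self.net(x1)], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        return torch.cat([y1, y2 - self.net(y1)], dim=-1)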
arXiv Detail & Related papers (2023-06-05T15:14:47Z)
- Learning Discretized Neural Networks under Ricci Flow [51.36292559262042]
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
During training, DNNs suffer from either infinite or zero gradients because the discretization function is non-differentiable.
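For contrast, the common baseline workaround for these degenerate gradients is the straight-through estimator, sketched below; the paper's Ricci-flow treatment is a different, geometric remedy that this sketch does not implement:
```python
import torch

class SignSTE(torch.autograd.Function):
    """Binarize with sign() in the forward pass but let gradients
    pass through (clipped) in the backward pass -- the standard
    straight-through estimator, not the paper's Ricci-flow method."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Zero the gradient outside [-1, 1], where sign() saturates
        return grad_out * (x.abs() <= 1.0).float()
```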
arXiv Detail & Related papers (2023-02-07T10:51:53Z)
- Towards Formal Approximated Minimal Explanations of Neural Networks [0.0]
Deep neural networks (DNNs) are now being used in numerous domains.
DNNs are "black-boxes", and cannot be interpreted by humans.
We propose an efficient, verification-based method for finding minimal explanations.
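One common shape for such a procedure is greedy shrinking against a verification oracle. The sketch below is illustrative only; `verifier` is a hypothetical stand-in for the paper's formal DNN verification queries:
```python
def minimal_explanation(features, verifier):
    """Greedily shrink a feature set to a subset-minimal explanation.

    `verifier(kept)` is a hypothetical oracle returning True iff
    fixing the features in `kept` guarantees the network's prediction
    regardless of the dropped features (realized in the paper via
    formal DNN verification).
    """
    kept = set(features)
    for f in list(features):
        candidate = kept - {f}
        if verifier(candidate):
            kept = candidate  # f was not needed for the guarantee
    return kept
```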
arXiv Detail & Related papers (2022-10-25T11:06:37Z)
- Tackling covariate shift with node-based Bayesian neural networks [26.64657196802115]
Node-based BNNs induce uncertainty by multiplying each hidden node with latent random variables, while learning a point-estimate of the weights.
In this paper, we interpret these latent noise variables as implicit representations of simple and domain-agnostic data perturbations during training.
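A minimal sketch of the node-based construction: deterministic (point-estimate) weights, with each hidden node scaled by a latent multiplicative variable. The log-normal noise family here is an assumption for illustration:
```python
import torch
import torch.nn as nn

class NodeNoisyLinear(nn.Module):
    """Linear layer with point-estimate weights whose output nodes
    are each multiplied by a latent random variable (log-normal here,
    an assumption for the sketch)."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)  # deterministic weights
        self.log_sigma = nn.Parameter(torch.full((d_out,), -3.0))

    def forward(self, x):
        h = self.linear(x)
        # Fresh multiplicative noise per forward pass induces
        # predictive uncertainty without a posterior over weights.
        z = torch.exp(self.log_sigma.exp() * torch.randn_like(h))
        return h * z
```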
arXiv Detail & Related papers (2022-06-06T08:56:19Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
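In generic form, such an objective trains against the worst-case distribution in an ambiguity ball around the empirical distribution; the notation below is mine, not the paper's exact construction:
```latex
\min_{\theta}\; \sup_{Q \,:\, D(Q \,\|\, \hat{P}) \le \rho}\;
\mathbb{E}_{(x,y)\sim Q}\big[\ell(f_\theta(x), y)\big]
```
where \hat{P} is the empirical data distribution, D a divergence, and \rho the radius controlling how large a perturbation the trained model must withstand.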
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- A new interpretable unsupervised anomaly detection method based on residual explanation [47.187609203210705]
We present RXP, a new interpretability method that addresses the limitations of autoencoder-based anomaly detection (AE-based AD) in large-scale systems.
It stands out for its implementation simplicity, low computational cost and deterministic behavior.
In an experiment using data from a real heavy-haul railway line, the proposed method achieved superior performance compared to SHAP.
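A hedged sketch of the residual idea: attribute an autoencoder's anomaly score to individual input features via its reconstruction residual (RXP's exact aggregation may differ):
```python
import numpy as np

def residual_attribution(autoencoder, x):
    """Attribute an anomaly score to input features via the
    reconstruction residual -- a sketch of residual-style explanation
    for AE-based anomaly detection, not necessarily RXP's exact rule.
    `autoencoder` is any callable mapping x -> reconstruction.
    """
    residual = np.abs(x - autoencoder(x))
    # Features reconstructed worst contribute most to the anomaly
    return residual / (residual.sum() + 1e-12)
```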
arXiv Detail & Related papers (2021-03-14T15:35:45Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
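For reference, the quantity being estimated: with \tilde{p} the unnormalized target model and Z its partition function, the ELBO for a variational distribution q (here a selective-SPN) is
```latex
\log Z \;\ge\; \mathbb{E}_{z \sim q}\big[\log \tilde{p}(z)\big] + H(q)
\;=\; \mathrm{ELBO}(q)
```
and the analytic claim above is that both the expectation term and the entropy H(q) admit closed forms when \log \tilde{p} is a polynomial and q is a selective-SPN.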
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
- Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.