Path Integrals for the Attribution of Model Uncertainties
- URL: http://arxiv.org/abs/2107.08756v2
- Date: Tue, 20 Jul 2021 14:32:43 GMT
- Title: Path Integrals for the Attribution of Model Uncertainties
- Authors: Iker Perez, Piotr Skalski, Alec Barns-Graham, Jason Wong, David Sutton
- Abstract summary: We present a novel algorithm that relies on in-distribution curves connecting a feature vector to some counterfactual counterpart.
We validate our approach on benchmark image data sets with varying resolution, and show that it significantly simplifies interpretability.
- Score: 0.18899300124593643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Enabling interpretations of model uncertainties is of key importance in
Bayesian machine learning applications. Often, this requires meaningfully
attributing predictive uncertainties to source features in an image, text or
categorical array. However, popular attribution methods are designed
specifically for classification and regression scores. In order to explain
uncertainties, state-of-the-art alternatives commonly procure counterfactual
feature vectors, and proceed by making direct comparisons. In this paper, we
leverage path integrals to attribute uncertainties in Bayesian differentiable
models. We present a novel algorithm that relies on in-distribution curves
connecting a feature vector to some counterfactual counterpart, and we retain
desirable properties of interpretability methods. We validate our approach on
benchmark image data sets with varying resolution, and show that it
significantly simplifies interpretability over the existing alternatives.
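As a rough illustration of the path-integral idea, the sketch below attributes a scalar uncertainty (here, predictive entropy over posterior samples) to features by integrating its gradient along a straight line from a counterfactual to the input, in the style of integrated gradients. This is an assumption-laden simplification: the paper uses in-distribution curves rather than straight lines, and the function and parameter names here are illustrative, not the authors' algorithm.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution over posterior samples.
    probs: array of shape (n_posterior_samples, n_classes)."""
    p = probs.mean(axis=0)
    return -np.sum(p * np.log(p + 1e-12))

def path_attribution(uncertainty_fn, x, x_cf, n_steps=50, eps=1e-4):
    """Attribute a scalar uncertainty to features by averaging its gradient
    along a straight-line path from the counterfactual x_cf to the input x.
    Gradients are approximated by central finite differences."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    grads = np.zeros_like(x, dtype=float)
    for a in alphas:
        point = x_cf + a * (x - x_cf)
        for i in range(x.size):
            e = np.zeros_like(x, dtype=float)
            e[i] = eps
            grads[i] += (uncertainty_fn(point + e)
                         - uncertainty_fn(point - e)) / (2.0 * eps)
    grads /= n_steps
    # Per-feature attributions; for exact gradients these sum to
    # uncertainty_fn(x) - uncertainty_fn(x_cf) (the completeness axiom).
    return (x - x_cf) * grads
```

For a quadratic uncertainty u(x) = Σᵢ xᵢ², the attributions recover xᵢ² and sum to u(x) − u(x_cf), matching the completeness property that path-integral attribution methods are designed to retain.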
Related papers
- Learning Non-Linear Invariants for Unsupervised Out-of-Distribution Detection [5.019613806273252]
We propose a framework consisting of a normalizing flow-like architecture capable of learning non-linear invariants.
Our approach achieves state-of-the-art results on an extensive U-OOD benchmark.
arXiv Detail & Related papers (2024-07-04T16:01:21Z)
- Context-aware feature attribution through argumentation [0.0]
We define a novel feature attribution framework called Context-Aware Feature Attribution Through Argumentation (CA-FATA)
Our framework harnesses the power of argumentation by treating each feature as an argument that can either support, attack or neutralize a prediction.
arXiv Detail & Related papers (2023-10-24T20:02:02Z)
- Learning Disentangled Discrete Representations [22.5004558029479]
We show the relationship between discrete latent spaces and disentangled representations by replacing the standard Gaussian variational autoencoder with a tailored categorical variational autoencoder.
We provide both analytical and empirical findings that demonstrate the advantages of discrete VAEs for learning disentangled representations.
arXiv Detail & Related papers (2023-07-26T12:29:58Z)
- Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
arXiv Detail & Related papers (2023-04-04T03:48:56Z)
- Rethinking interpretation: Input-agnostic saliency mapping of deep visual classifiers [28.28834523468462]
Saliency methods provide post-hoc model interpretation by attributing input features to the model outputs.
We show that input-specific saliency mapping is intrinsically susceptible to misleading feature attribution.
We introduce a new perspective of input-agnostic saliency mapping that computationally estimates the high-level features attributed by the model to its outputs.
arXiv Detail & Related papers (2023-03-31T06:58:45Z)
- Predicting Out-of-Domain Generalization with Neighborhood Invariance [59.05399533508682]
We propose a measure of a classifier's output invariance in a local transformation neighborhood.
Our measure is simple to calculate, does not depend on the test point's true label, and can be applied even in out-of-domain (OOD) settings.
In experiments on benchmarks in image classification, sentiment analysis, and natural language inference, we demonstrate a strong and robust correlation between our measure and actual OOD generalization.
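Such a neighborhood-invariance score can be sketched minimally as the fraction of locally transformed copies of a test point that keep its predicted label. The sketch assumes a black-box classifier and a user-supplied set of label-preserving transformations; the names are illustrative, not the authors' API.

```python
import numpy as np

def neighborhood_invariance(classify, x, transforms):
    """Fraction of transformed neighbors of x that keep x's prediction.
    classify: feature vector -> predicted label (no true label needed).
    transforms: callables mapping x to a perturbed neighbor."""
    base = classify(x)
    agreement = [classify(t(x)) == base for t in transforms]
    return float(np.mean(agreement))
```

Because the score compares predictions only against each other, it needs no ground-truth labels, which is what makes it applicable in OOD settings.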
arXiv Detail & Related papers (2022-07-05T14:55:16Z)
- Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically we show that this algorithm works well on a diverse set of tasks and in particular we observe state-of-the-art performance on domain generalization.
arXiv Detail & Related papers (2022-06-23T12:06:54Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
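The idea of embedding each node as a distribution rather than a deterministic vector can be sketched with a diagonal-Gaussian reparameterised sample. This is illustrative only, under the assumption of a Gaussian latent per node; the paper's encoder and training objective are not reproduced here.

```python
import numpy as np

def sample_node_embedding(mu, log_var, rng):
    """Draw one reparameterised sample z = mu + sigma * eps, eps ~ N(0, I),
    from a node's diagonal-Gaussian latent distribution."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

Downstream layers consume samples z rather than a fixed embedding, so uncertainty in the node representation propagates to predictions.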
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Attentional Prototype Inference for Few-Shot Segmentation [128.45753577331422]
We propose attentional prototype inference (API), a probabilistic latent variable framework for few-shot segmentation.
We define a global latent variable to represent the prototype of each object category, which we model as a probabilistic distribution.
We conduct extensive experiments on four benchmarks, where our proposal obtains at least competitive and often better performance than state-of-the-art prototype-based methods.
arXiv Detail & Related papers (2021-05-14T06:58:44Z)
- Uncertainty-Aware Few-Shot Image Classification [118.72423376789062]
Few-shot image classification learns to recognize new categories from limited labelled data.
We propose Uncertainty-Aware Few-Shot framework for image classification.
arXiv Detail & Related papers (2020-10-09T12:26:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.