Counterfactual Explanations via Latent Space Projection and
Interpolation
- URL: http://arxiv.org/abs/2112.00890v1
- Date: Thu, 2 Dec 2021 00:07:49 GMT
- Title: Counterfactual Explanations via Latent Space Projection and
Interpolation
- Authors: Brian Barr (1), Matthew R. Harrington (2), Samuel Sharpe (1), C. Bayan
Bruss (1) ((1) Center for Machine Learning, Capital One, (2) Columbia
University)
- Abstract summary: We introduce SharpShooter, a counterfactual generation method for binary classifiers that starts by creating a projected version of the input that the classifier assigns to the target class.
We then demonstrate that our framework translates core characteristics of a sample to its counterfactual through the use of learned representations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Counterfactual explanations represent the minimal change to a data sample
that alters its predicted classification, typically from an unfavorable initial
class to a desired target class. Counterfactuals help answer questions such as
"what needs to change for this application to get accepted for a loan?". A
number of recently proposed approaches to counterfactual generation give
varying definitions of "plausible" counterfactuals and methods to generate
them. However, many of these methods are computationally intensive and provide
unconvincing explanations. Here we introduce SharpShooter, a counterfactual
generation method for binary classifiers that starts by creating a projected
version of the input that the classifier assigns to the target class.
Counterfactual candidates are then generated in
latent space on the interpolation line between the input and its projection. We
then demonstrate that our framework translates core characteristics of a sample
to its counterfactual through the use of learned representations. Furthermore,
we show that SharpShooter is competitive across common quality metrics on
tabular and image datasets while being orders of magnitude faster than two
comparable methods and excels at measures of realism, making it well-suited for
high-velocity machine learning applications that require timely explanations.
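To make the projection-and-interpolation recipe above concrete, here is a minimal runnable sketch, not the authors' implementation: a linear toy autoencoder and classifier stand in for the paper's learned models, the projection step is realized as simple logit ascent in latent space (one possible choice; the paper's own procedure may differ), and the interpolation step keeps the candidate closest to the input that still crosses the decision threshold. All names, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy stand-ins (assumptions; not the paper's learned models) ---
D, K = 8, 3                       # input dim, latent dim
W_enc = rng.normal(size=(D, K))   # linear "encoder"
W_dec = rng.normal(size=(K, D))   # linear "decoder"
w_clf = rng.normal(size=D)        # linear binary classifier weights

encode = lambda x: x @ W_enc
decode = lambda z: z @ W_dec
prob_target = lambda x: 1.0 / (1.0 + np.exp(-(x @ w_clf)))  # P(target | x)

def sharpshooter_sketch(x, threshold=0.5, steps=200, lr=0.5, n_candidates=25):
    """Counterfactual search by latent projection + interpolation (sketch).

    Step 1 pushes the input's latent code until its decoding is confidently
    in the target class (the "projection"). Step 2 walks the straight line
    in latent space from that projection back toward the input, keeping the
    candidate closest to the input that still crosses the threshold.
    """
    z_in = encode(x)
    z = z_in.copy()
    grad_dir = W_dec @ w_clf                  # d(logit)/dz for the toy model
    for _ in range(steps):                    # Step 1: climb the target logit
        if prob_target(decode(z)) >= 0.99:
            break
        z = z + lr * grad_dir
    z_proj = z
    best = decode(z_proj)                     # fallback: the projection itself
    for alpha in np.linspace(0.0, 1.0, n_candidates):  # Step 2: interpolate
        cand = decode((1.0 - alpha) * z_proj + alpha * z_in)
        if prob_target(cand) >= threshold:
            best = cand                       # still target class, closer to input
        else:
            break
    return best

x0 = rng.normal(size=D)
if prob_target(x0) >= 0.5:                    # ensure an "unfavorable" start
    x0 = -x0
x_cf = sharpshooter_sketch(x0)
print(f"P(target | input) = {prob_target(x0):.3f}, "
      f"P(target | counterfactual) = {prob_target(x_cf):.3f}")
```

With a real VAE and classifier, the projection step would typically be driven by autograd on the classifier's logit, and candidates along the line would be scored with the realism and sparsity metrics the paper reports.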
Related papers
- Generative Multi-modal Models are Good Class-Incremental Learners [51.5648732517187]
We propose a novel generative multi-modal model (GMM) framework for class-incremental learning.
Our approach directly generates labels for images using an adapted generative model.
Under the few-shot CIL setting, our approach improves accuracy by at least 14% over all current state-of-the-art methods, with significantly less forgetting.
arXiv Detail & Related papers (2024-03-27T09:21:07Z)
- Rethinking Person Re-identification from a Projection-on-Prototypes Perspective [84.24742313520811]
Person Re-IDentification (Re-ID), as a retrieval task, has seen tremendous development over the past decade.
We propose a new baseline, ProNet, which innovatively retains the function of the classifier at the inference stage.
Experiments on four benchmarks demonstrate that our proposed ProNet is simple yet effective, and significantly beats previous baselines.
arXiv Detail & Related papers (2023-08-21T13:38:10Z)
- Semi-supervised counterfactual explanations [3.6810543937967912]
We address the challenge of generating counterfactual explanations that lie in the same distribution as the training data.
This requirement has been addressed by incorporating an auto-encoder reconstruction loss into the counterfactual search process; a sketch of such an objective appears after this list.
We show a further improvement in the interpretability of counterfactual explanations when the auto-encoder is trained in a semi-supervised fashion on class-tagged input data.
arXiv Detail & Related papers (2023-03-22T15:17:16Z)
- VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods for producing local explanations of machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z)
- An Upper Bound for the Distribution Overlap Index and Its Applications [18.481370450591317]
This paper proposes an easy-to-compute upper bound for the overlap index between two probability distributions.
The proposed bound shows its value in one-class classification and domain shift analysis.
Our work shows significant promise toward broadening the applications of overlap-based metrics.
arXiv Detail & Related papers (2022-12-16T20:02:03Z)
- Explaining Image Classifiers Using Contrastive Counterfactuals in Generative Latent Spaces [12.514483749037998]
We introduce a novel method to generate causal yet interpretable counterfactual explanations for image classifiers.
We use this framework to obtain contrastive and causal sufficiency and necessity scores as global explanations for black-box classifiers.
arXiv Detail & Related papers (2022-06-10T17:54:46Z)
- Scalable Optimal Classifiers for Adversarial Settings under Uncertainty [10.90668635921398]
We consider the problem of finding optimal classifiers in an adversarial setting where the class-1 data is generated by an attacker whose objective is not known to the defender.
We show that this low-dimensional characterization enables the development of a training method that computes provably approximately optimal classifiers in a scalable manner.
arXiv Detail & Related papers (2021-06-28T13:33:53Z)
- Revisiting Deep Local Descriptor for Improved Few-Shot Classification [56.74552164206737]
We show how one can improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling.
We suggest pooling feature maps with attentive pooling instead of the widely used global average pooling (GAP) to prepare embeddings for few-shot classification; a simple sketch of this pooling appears after this list.
arXiv Detail & Related papers (2021-03-30T00:48:28Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- SCOUT: Self-aware Discriminant Counterfactual Explanations [78.79534272979305]
The problem of counterfactual visual explanations is considered.
A new family of discriminant explanations is introduced.
The resulting counterfactual explanations are optimization-free and thus much faster than previous methods.
arXiv Detail & Related papers (2020-04-16T17:05:49Z)
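As referenced in the semi-supervised counterfactual entry above, the following is a minimal sketch of a counterfactual search objective that adds an auto-encoder reconstruction penalty to keep candidates in-distribution. The L1 distance term, the weights lam_dist and lam_recon, and the callables prob_target and autoencoder are illustrative assumptions, not that paper's interface.

```python
import numpy as np

def counterfactual_objective(x_cf, x, prob_target, autoencoder,
                             lam_dist=0.1, lam_recon=1.0):
    """Objective to minimize over candidate counterfactuals x_cf.

    Three competing terms: cross the decision boundary, stay close to the
    original input, and stay where the auto-encoder reconstructs well
    (i.e., near the training distribution).
    """
    target_term = -np.log(prob_target(x_cf) + 1e-12)     # reach the target class
    dist_term = np.sum(np.abs(x_cf - x))                 # minimal change (L1)
    recon_term = np.sum((x_cf - autoencoder(x_cf))**2)   # in-distribution penalty
    return target_term + lam_dist * dist_term + lam_recon * recon_term

# Toy usage with stand-in callables; a trained auto-encoder would replace `ae`.
x = np.zeros(4)
cand = np.array([0.5, 0.0, 0.2, 0.0])
p = lambda v: 1.0 / (1.0 + np.exp(-(v.sum() - 0.3)))     # toy P(target | v)
ae = lambda v: 0.9 * v
print(counterfactual_objective(cand, x, p, ae))
```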
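Similarly, as referenced in the few-shot classification entry, here is a small illustration of attentive pooling versus global average pooling over a convolutional feature map. The norm-based attention weights are a stand-in assumption for the learned attention module that paper would use.

```python
import numpy as np

def attentive_pool(feature_map):
    """Pool a (C, H, W) feature map with spatial attention instead of GAP.

    Attention here is a softmax over per-location feature norms -- a simple
    stand-in for learned attention.
    """
    C, H, W = feature_map.shape
    feats = feature_map.reshape(C, H * W)        # C features x N locations
    scores = np.linalg.norm(feats, axis=0)       # salience of each location
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over locations
    return feats @ weights                       # C-dimensional embedding

fm = np.random.default_rng(1).normal(size=(64, 5, 5))
emb_att = attentive_pool(fm)
emb_gap = fm.mean(axis=(1, 2))                   # GAP baseline for contrast
print(emb_att.shape, emb_gap.shape)              # both (64,)
```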