DeDUCE: Generating Counterfactual Explanations Efficiently
- URL: http://arxiv.org/abs/2111.15639v1
- Date: Mon, 29 Nov 2021 17:47:21 GMT
- Title: DeDUCE: Generating Counterfactual Explanations Efficiently
- Authors: Benedikt Höltgen, Lisa Schut, Jan M. Brauner and Yarin Gal
- Abstract summary: We develop a new algorithm providing counterfactual explanations for large image classifiers trained with spectral normalisation at low computational cost.
We empirically compare this algorithm against baselines from the literature; our novel algorithm consistently finds counterfactuals that are much closer to the original inputs.
- Score: 26.300599540027893
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: When an image classifier outputs a wrong class label, it can be helpful to
see what changes in the image would lead to a correct classification. This is
the aim of algorithms generating counterfactual explanations. However, there is
no easily scalable method to generate such counterfactuals. We develop a new
algorithm providing counterfactual explanations for large image classifiers
trained with spectral normalisation at low computational cost. We empirically
compare this algorithm against baselines from the literature; our novel
algorithm consistently finds counterfactuals that are much closer to the
original inputs. At the same time, the realism of these counterfactuals is
comparable to the baselines. The code for all experiments is available at
https://github.com/benedikthoeltgen/DeDUCE.
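The abstract above describes gradient-based counterfactual generation: perturb a misclassified input until the classifier flips to the desired class, while staying close to the original. The following is a minimal, hypothetical sketch of that generic idea only (gradient steps on the input with a proximity penalty, on a toy linear classifier) and not the DeDUCE algorithm itself, which targets large spectrally-normalised networks; all names and parameters here are illustrative.

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def counterfactual(x0, target=1.0, lam=0.1, lr=0.5, steps=200):
    """Gradient-based counterfactual: move x toward the target class
    while penalising squared distance from the original input x0."""
    x = x0.copy()
    for _ in range(steps):
        p = predict(x)
        # gradient of cross-entropy toward `target` plus L2 proximity term
        grad = (p - target) * w + lam * (x - x0)
        x -= lr * grad
    return x

x0 = np.array([-1.0, 1.0])   # confidently classified as class 0
x_cf = counterfactual(x0)    # nearby input classified as class 1
```

The proximity weight `lam` trades off how far the counterfactual may drift from the original input against how confidently it reaches the target class.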
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
We propose an algorithm that enables fast and high-quality generation under arbitrary constraints.
During inference, we can interchange between gradient updates computed on the noisy image and updates computed on the final, clean image.
Our approach produces results that rival or surpass the state-of-the-art training-free inference approaches.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
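Lazy mirror descent, mentioned above, accumulates gradients in a dual space and maps back to the feasible set through the mirror map. Below is a generic toy sketch on the probability simplex with the negative-entropy mirror map (dual averaging with a softmax), not the paper's distributed, corruption-tolerant scheme; the objective and step size are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def lazy_mirror_descent(grad_f, dim, lr=0.1, steps=1000):
    """Lazy (dual-averaging) mirror descent on the probability simplex:
    gradients accumulate in the dual variable z, and the primal iterate
    is recovered via the negative-entropy mirror map, i.e. a softmax."""
    z = np.zeros(dim)
    x = softmax(z)
    for _ in range(steps):
        z -= lr * grad_f(x)
        x = softmax(z)
    return x

# Minimise ||x - t||^2 over the simplex; t is already a distribution,
# so the minimiser is t itself.
t = np.array([0.7, 0.2, 0.1])
x_opt = lazy_mirror_descent(lambda x: 2 * (x - t), dim=3)
```

Because the softmax always returns a probability vector, the iterates stay exactly on the simplex with no explicit projection step.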
arXiv Detail & Related papers (2024-07-19T08:29:12Z)
- BEBLID: Boosted efficient binary local image descriptor [2.8538628855541397]
We introduce BEBLID, an efficient learned binary image descriptor.
It improves our previous real-valued descriptor, BELID, making it both more efficient for matching and more accurate.
In experiments BEBLID achieves an accuracy close to SIFT and better computational efficiency than ORB, the fastest algorithm in the literature.
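Binary descriptors such as BEBLID or ORB are compared by Hamming distance, which is what makes their matching fast. A generic brute-force matcher sketch (this is not BEBLID's learning procedure), assuming each descriptor is a row of packed `uint8` bytes:

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Brute-force binary descriptor matching by Hamming distance.

    desc_a, desc_b: uint8 arrays of shape (n, bytes_per_descriptor).
    Returns, for each row of desc_a, the index of its nearest row in
    desc_b and the corresponding Hamming distance in bits.
    """
    # XOR all pairs, then count the set bits per pair
    x = desc_a[:, None, :] ^ desc_b[None, :, :]
    dist = np.unpackbits(x, axis=-1).sum(-1)
    return dist.argmin(axis=1), dist.min(axis=1)

# Tiny demo: each descriptor in `a` has an exact complement-free match in `b`
a = np.array([[0b11110000], [0b00001111]], dtype=np.uint8)
b = np.array([[0b00001111], [0b11110000]], dtype=np.uint8)
idx, d = hamming_match(a, b)
```

Real implementations replace the `unpackbits` popcount with hardware POPCNT instructions, but the matching logic is the same.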
arXiv Detail & Related papers (2024-02-07T00:14:32Z)
- An Explainable Model-Agnostic Algorithm for CNN-based Biometrics Verification [55.28171619580959]
This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting.
arXiv Detail & Related papers (2023-07-25T11:51:14Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the gradients semantic-aware, enabling the synthesis of plausible images.
We show that our method is also applicable to text-to-image generation by treating image-text foundation models as classifiers.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- visClust: A visual clustering algorithm based on orthogonal projections [0.0]
visClust is a novel clustering algorithm based on lower dimensional data representations and visual interpretation.
The code is made available on GitHub and straightforward to use.
arXiv Detail & Related papers (2022-11-07T22:56:23Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Semi-supervised Sparse Representation with Graph Regularization for Image Classification [1.370633147306388]
We propose a discriminative semi-supervised sparse representation algorithm for image classification.
The proposed algorithm achieves excellent performances compared with related popular methods.
arXiv Detail & Related papers (2020-11-11T09:16:48Z)
- StreamSoNG: A Soft Streaming Classification Approach [7.70734146948411]
We propose a new streaming classification algorithm that uses Neural Gas prototypes as footprints.
The approach is tested on synthetic and real image datasets.
We compare our approach, with excellent results, against three other streaming classifiers based on Adaptive Random Forest, Very Fast Decision Rules, and the DenStream algorithm.
arXiv Detail & Related papers (2020-10-01T18:22:04Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
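The baseline described above learns a representation once on the meta-training set and then fits a simple classifier on the few labelled support examples. As an illustrative variant (the paper fits a linear classifier on frozen features; a nearest-centroid classifier is sketched here instead), the few-shot step can look like this:

```python
import numpy as np

def nearest_centroid_fewshot(support_x, support_y, query_x):
    """Few-shot classification on top of a frozen embedding:
    average the support embeddings per class, then assign each
    query embedding to the nearest class centroid."""
    classes = np.unique(support_y)
    centroids = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # squared Euclidean distance from every query to every centroid
    d = ((query_x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way, 2-shot episode in a 2-D embedding space
support_x = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.05, 0.05], [4.9, 5.0]])
preds = nearest_centroid_fewshot(support_x, support_y, query_x)
```

The point of the baseline is that all the heavy lifting happens in the embedding; the per-episode classifier can be this simple.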
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph, which is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information (including all summaries) and is not responsible for any consequences arising from its use.