Leveraging Conditional Generative Models in a General Explanation
Framework of Classifier Decisions
- URL: http://arxiv.org/abs/2106.10947v1
- Date: Mon, 21 Jun 2021 09:41:54 GMT
- Title: Leveraging Conditional Generative Models in a General Explanation
Framework of Classifier Decisions
- Authors: Martin Charachon, Paul-Henry Cournède, Céline Hudelot and Roberto Ardon
- Abstract summary: We show that visual explanation can be produced as the difference between two generated images.
We present two different approximations and implementations of the general formulation.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Providing a human-understandable explanation of classifiers' decisions has
become imperative to generate trust in their use for day-to-day tasks. Although
many works have addressed this problem by generating visual explanation maps,
they often provide noisy and inaccurate results, forcing the use of heuristic
regularization unrelated to the classifier in question. In this paper, we
propose a new general perspective of the visual explanation problem overcoming
these limitations. We show that visual explanation can be produced as the
difference between two generated images obtained via two specific conditional
generative models. Both generative models are trained using the classifier to
explain and a database to enforce the following properties: (i) All images
generated by the first generator are classified similarly to the input image,
whereas the second generator's outputs are classified oppositely. (ii)
Generated images belong to the distribution of real images. (iii) The distances
between the input image and the corresponding generated images are minimal so
that the difference between the generated elements only reveals relevant
information for the studied classifier. Using symmetrical and cyclic
constraints, we present two different approximations and implementations of the
general formulation. Experimentally, we demonstrate significant improvements
w.r.t. the state of the art on three different public data sets. In particular,
the localization of regions influencing the classifier is consistent with human
annotations.
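As a concrete reading of this formulation, the sketch below encodes properties (i) and (iii) as losses for a frozen binary classifier `f` and two conditional generators; all names and loss weights are illustrative, not the authors' implementation, and the realism constraint (ii) would add a GAN-style term omitted here for brevity.

```python
import torch
import torch.nn.functional as F

def explanation_losses(x, f, g_sim, g_adv, lam_cls=1.0, lam_prox=1.0):
    """Training losses for the dual-generator scheme (illustrative sketch).

    f      : frozen classifier to explain, returns logits for a batch x
    g_sim  : generator whose outputs should be classified LIKE x   (property i)
    g_adv  : generator whose outputs should be classified UNLIKE x (property i)
    Proximity terms keep both outputs close to x (property iii) so that
    their difference highlights only classifier-relevant regions.
    """
    with torch.no_grad():
        y = (torch.sigmoid(f(x)) > 0.5).float()   # classifier's decision on x

    x_sim, x_adv = g_sim(x), g_adv(x)

    # (i) classification constraints: same decision vs. opposite decision
    l_cls = F.binary_cross_entropy_with_logits(f(x_sim), y) \
          + F.binary_cross_entropy_with_logits(f(x_adv), 1.0 - y)

    # (iii) proximity: generated images stay close to the input
    l_prox = F.l1_loss(x_sim, x) + F.l1_loss(x_adv, x)

    return lam_cls * l_cls + lam_prox * l_prox

def explanation_map(x, g_sim, g_adv):
    # The visual explanation is the difference of the two generated images.
    return (g_sim(x) - g_adv(x)).abs()
```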
Related papers
- Contrastive Prompts Improve Disentanglement in Text-to-Image Diffusion
Models [68.47333676663312]
We show that a simple modification of classifier-free guidance can help disentangle image factors in text-to-image models.
The key idea of our method, Contrastive Guidance, is to characterize an intended factor with two prompts that differ in minimal tokens.
We illustrate its benefits in three scenarios: (1) to guide domain-specific diffusion models trained on an object class, (2) to gain continuous, rig-like controls for text-to-image generation, and (3) to improve the performance of zero-shot image editors.
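A minimal sketch of this idea, assuming a hypothetical `eps_model(x_t, t, emb)` noise predictor and pre-computed prompt embeddings; the paper's actual interface may differ.

```python
import torch

def contrastive_guidance_eps(eps_model, x_t, t, emb_pos, emb_neg, w=7.5):
    """Contrastive-guidance-style denoising direction (sketch).

    Instead of contrasting a conditional prediction with an *unconditional*
    one as in classifier-free guidance, contrast two prompts that differ only
    in the tokens describing the factor of interest (e.g. "a photo of a
    smiling person" vs. "a photo of a person"): the guidance direction then
    isolates that factor.
    """
    eps_pos = eps_model(x_t, t, emb_pos)   # prompt containing the factor
    eps_neg = eps_model(x_t, t, emb_neg)   # minimally different prompt without it
    return eps_neg + w * (eps_pos - eps_neg)
```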
arXiv Detail & Related papers (2024-02-21T03:01:17Z) - Causal Generative Explainers using Counterfactual Inference: A Case
Study on the Morpho-MNIST Dataset [5.458813674116228]
We present a generative counterfactual inference approach to study the influence of visual features as well as causal factors.
We employ visual explanation methods from the OmnixAI open-source toolkit to compare them with our proposed methods.
Our findings suggest that our methods are well-suited for generating highly interpretable counterfactual explanations on causal datasets.
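An illustrative toy of such a causal probe, assuming a hypothetical `causal_generator` that renders images from attributes (loosely in the spirit of Morpho-MNIST's morphological factors such as stroke thickness); not the paper's code.

```python
import torch

def counterfactual_effect(x_attrs, causal_generator, classifier,
                          attr="thickness", delta=1.0):
    """Probe a classifier with a causal 'what if?' (illustrative sketch).

    causal_generator : maps a dict of causal attributes to an image
    classifier       : model under study, returns class logits
    Intervene on one attribute (do(attr += delta)), regenerate the image,
    and compare predictions on the factual and counterfactual images.
    """
    cf_attrs = dict(x_attrs)
    cf_attrs[attr] = cf_attrs[attr] + delta          # the intervention

    x_factual = causal_generator(x_attrs)
    x_counterfactual = causal_generator(cf_attrs)

    with torch.no_grad():
        p_f = classifier(x_factual).softmax(-1)
        p_cf = classifier(x_counterfactual).softmax(-1)
    return p_cf - p_f   # prediction shift attributable to the attribute
```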
arXiv Detail & Related papers (2024-01-21T04:07:48Z) - Object-Centric Relational Representations for Image Generation [18.069747511100132]
This paper explores a novel method to condition image generation, based on object-centric relational representations.
We show that such architectural biases entail properties that facilitate the manipulation and conditioning of the generative process.
We also propose a novel benchmark for image generation consisting of a synthetic dataset of images paired with their relational representation.
arXiv Detail & Related papers (2023-03-26T11:17:17Z) - Diffusion Visual Counterfactual Explanations [51.077318228247925]
Visual Counterfactual Explanations (VCEs) are an important tool to understand the decisions of an image classifier.
Current approaches for the generation of VCEs are restricted to adversarially robust models and often contain non-realistic artefacts.
In this paper, we overcome this by generating Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers.
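A simplified sketch of one classifier-guided reverse-diffusion step of this flavour; the actual DVCE method adds further machinery (e.g. combining gradients with an adversarially robust classifier), and all names here are placeholders.

```python
import torch

@torch.enable_grad()
def guided_denoise_step(x_t, t, eps_model, classifier, target_class, s=3.0):
    """One classifier-guided denoising step toward a target class (sketch).

    Each reverse step is biased with the gradient of the target-class
    log-probability, so the sample drifts toward the counterfactual class
    while the diffusion prior keeps it on the manifold of realistic images.
    """
    x_t = x_t.detach().requires_grad_(True)
    log_p = classifier(x_t).log_softmax(-1)[:, target_class].sum()
    grad = torch.autograd.grad(log_p, x_t)[0]
    eps = eps_model(x_t, t)          # unconditional noise prediction
    return eps - s * grad            # guidance shifts the predicted noise
```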
arXiv Detail & Related papers (2022-10-21T09:35:47Z) - Explaining Image Classifiers Using Contrastive Counterfactuals in
Generative Latent Spaces [12.514483749037998]
We introduce a novel method to generate causal and yet interpretable counterfactual explanations for image classifiers.
We use this framework to obtain contrastive and causal sufficiency and necessity scores as global explanations for black-box classifiers.
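A generic sketch of counterfactual search in a generator's latent space, which is the common core of such approaches; hyperparameters and names are assumptions, not the paper's recipe.

```python
import torch

def latent_counterfactual(z, generator, classifier, target_class,
                          steps=200, lr=0.05, lam=0.1):
    """Search a generator's latent space for a counterfactual (sketch).

    Starting from the latent code z of the input image, take gradient steps
    that (a) push the generated image toward target_class and (b) keep the
    code close to z, so the edit stays minimal and interpretable.
    """
    z_cf = z.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z_cf], lr=lr)
    for _ in range(steps):
        x = generator(z_cf)
        loss = (-classifier(x).log_softmax(-1)[:, target_class].mean()
                + lam * (z_cf - z).pow(2).mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z_cf).detach()
```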
arXiv Detail & Related papers (2022-06-10T17:54:46Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
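A hedged sketch of the test-time setup, assuming a GAN inversion `encoder` and a StyleGAN2-like `generator`; the perturbation scheme shown (Gaussian jitter in latent space) is illustrative only.

```python
import torch

def ensemble_over_views(x, encoder, generator, classifier,
                        n_views=8, sigma=0.2):
    """Test-time ensembling over GAN-generated views of one image (sketch).

    encoder   : GAN inversion network mapping the image to a latent code
    generator : StyleGAN2-like generator reconstructing images from codes
    Perturbing the inverted code yields plausible variations ("views") of
    the input; averaging the classifier's predictions over them can smooth
    out nuisance factors such as color or pose.
    """
    with torch.no_grad():
        w = encoder(x)                               # invert image to latent
        probs = [classifier(x).softmax(-1)]          # include the real image
        for _ in range(n_views):
            view = generator(w + sigma * torch.randn_like(w))
            probs.append(classifier(view).softmax(-1))
    return torch.stack(probs).mean(0)
```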
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion-based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
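A rough sketch of image-guided inversion in this spirit: pixels are optimized directly against a frozen classifier, with no generator training; the `features` hook and loss mix are assumptions, not IMAGINE's exact recipe.

```python
import torch

def imagine_like_synthesis(x_guide, classifier, features, target_class,
                           steps=500, lr=0.05):
    """Image synthesis by inverting a pre-trained classifier (sketch).

    features : callable returning an intermediate feature map of the classifier
    No generator is trained: the pixels of a noise image are optimized so
    their classifier features match those of the guide image (semantic
    specificity) while the target-class logit stays high (plausibility).
    """
    x = torch.randn_like(x_guide, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    with torch.no_grad():
        feats_guide = features(x_guide)
    for _ in range(steps):
        loss = ((features(x) - feats_guide).pow(2).mean()
                - classifier(x).log_softmax(-1)[:, target_class].mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()
```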
arXiv Detail & Related papers (2021-04-13T02:00:24Z) - Context-Aware Layout to Image Generation with Enhanced Object Appearance [123.62597976732948]
A layout-to-image (L2I) generation model aims to generate a complicated image containing multiple objects (things) against a natural background (stuff).
Existing L2I models have made great progress, but object-to-object and object-to-stuff relations are often broken.
We argue that these are caused by the lack of context-aware object and stuff feature encoding in their generators, and location-sensitive appearance representation in their discriminators.
arXiv Detail & Related papers (2021-03-22T14:43:25Z) - Combining Similarity and Adversarial Learning to Generate Visual
Explanation: Application to Medical Image Classification [0.0]
We leverage a similarity and adversarial learning framework to produce our visual explanation method.
Using metrics from the literature, our method outperforms state-of-the-art approaches.
We validate our approach on a large chest X-ray database.
arXiv Detail & Related papers (2020-12-14T08:34:12Z) - Autoregressive Unsupervised Image Segmentation [8.894935073145252]
We propose a new unsupervised image segmentation approach based on mutual information between different views constructed from the inputs.
The proposed method outperforms current state-of-the-art on unsupervised image segmentation.
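A sketch of the kind of mutual-information objective involved (in the style of invariant-information-clustering losses over per-pixel assignments); the paper's autoregressive construction of views is not reproduced here.

```python
import torch

def mutual_information_loss(p1, p2, eps=1e-8):
    """Negative mutual information between per-pixel assignments (sketch).

    p1, p2 : soft cluster assignments (B, K, H, W) for two views of the same
    image (e.g. the image and an augmented copy). Maximizing the mutual
    information between them encourages assignments that are consistent
    across views yet use all K segment classes, giving an unsupervised
    segmentation signal.
    """
    b, k, h, w = p1.shape
    p1 = p1.permute(0, 2, 3, 1).reshape(-1, k)            # (N, K)
    p2 = p2.permute(0, 2, 3, 1).reshape(-1, k)
    joint = (p1.unsqueeze(2) * p2.unsqueeze(1)).mean(0)   # (K, K)
    joint = (joint + joint.t()) / 2                       # symmetrize
    pi, pj = joint.sum(1, keepdim=True), joint.sum(0, keepdim=True)
    mi = (joint * (torch.log(joint + eps)
                   - torch.log(pi + eps)
                   - torch.log(pj + eps))).sum()
    return -mi   # minimize negative MI
```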
arXiv Detail & Related papers (2020-07-16T10:47:40Z) - OneGAN: Simultaneous Unsupervised Learning of Conditional Image
Generation, Foreground Segmentation, and Fine-Grained Clustering [100.32273175423146]
We present a method for simultaneously learning, in an unsupervised manner, a conditional image generator, foreground extraction and segmentation, and object removal and background completion.
The method combines a Generative Adversarial Network and a Variational Auto-Encoder, with multiple encoders, generators and discriminators, and benefits from solving all tasks at once.
arXiv Detail & Related papers (2019-12-31T18:15:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.