Global explainability in aligned image modalities
- URL: http://arxiv.org/abs/2112.09591v1
- Date: Fri, 17 Dec 2021 16:05:11 GMT
- Title: Global explainability in aligned image modalities
- Authors: Justin Engelmann, Amos Storkey, Miguel O. Bernabeu
- Abstract summary: We focus on image modalities that are naturally aligned such that each pixel position represents a similar relative position on the imaged object.
We propose the pixel-wise aggregation of image-wise explanations as a simple method to obtain label-wise and overall global explanations.
We then apply these methods to ultra-widefield retinal images, a naturally aligned modality.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) models are very effective on many computer vision problems
and are increasingly used in critical applications. They are also inherently
black boxes. A number of methods exist to generate image-wise explanations that allow
practitioners to understand and verify model predictions for a given image.
Beyond that, it would be desirable to validate that a DL model
*generally* works in a sensible way, i.e. consistent with domain
knowledge and not relying on undesirable data artefacts. For this purpose, the
model needs to be explained globally. In this work, we focus on image
modalities that are naturally aligned such that each pixel position represents
a similar relative position on the imaged object, as is common in medical
imaging. We propose the pixel-wise aggregation of image-wise explanations as a
simple method to obtain label-wise and overall global explanations. These can
then be used for model validation, knowledge discovery, and as an efficient way
to communicate qualitative conclusions drawn from inspecting image-wise
explanations. We further propose Progressive Erasing Plus Progressive
Restoration (PEPPR) as a method to quantitatively validate that these global
explanations are faithful to how the model makes its predictions. We then apply
these methods to ultra-widefield retinal images, a naturally aligned modality.
We find that the global explanations are consistent with domain knowledge and
faithfully reflect the model's workings.
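As a rough, hypothetical sketch of the two ideas described in the abstract (not the authors' reference implementation), the snippet below averages spatially aligned image-wise saliency maps per label to obtain label-wise and overall global explanations, and runs a simplified progressive-erasing check loosely modelled on PEPPR; the restoration phase is omitted, and all function names and the erasing scheme here are assumptions.

```python
import numpy as np

def aggregate_explanations(saliency_maps, labels, num_labels):
    """Pixel-wise aggregation of image-wise explanations.

    saliency_maps: (N, H, W) array, one explanation map per image, assumed
                   spatially aligned across images.
    labels:        (N,) array giving the label each explanation refers to.
    Returns label-wise global explanations (num_labels, H, W) and an
    overall global explanation (H, W).
    """
    saliency_maps = np.asarray(saliency_maps, dtype=np.float64)
    labels = np.asarray(labels)
    label_wise = np.stack([
        saliency_maps[labels == k].mean(axis=0)
        if np.any(labels == k) else np.zeros(saliency_maps.shape[1:])
        for k in range(num_labels)
    ])
    overall = saliency_maps.mean(axis=0)
    return label_wise, overall


def erase_curve(model_predict, images, global_map, steps=11):
    """Simplified progressive-erasing check in the spirit of PEPPR
    (assumption: the pixels rated most important by the global map are
    zeroed out in increasing fractions and the mean model output is
    tracked; a faithful global explanation should degrade predictions
    quickly)."""
    order = np.argsort(global_map.ravel())[::-1]  # most important first
    scores = []
    for frac in np.linspace(0.0, 1.0, steps):
        mask = np.ones(global_map.size)
        mask[order[:int(frac * global_map.size)]] = 0.0  # erase top fraction
        masked = images * mask.reshape(global_map.shape)
        scores.append(float(np.mean(model_predict(masked))))
    return scores
```

In practice one would feed in one attribution map per test image (e.g. from Grad-CAM or a similar explainer) and compare the resulting mean maps against domain knowledge before running the quantitative check.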
Related papers
- Enhancing Counterfactual Image Generation Using Mahalanobis Distance with Distribution Preferences in Feature Space [7.00851481261778]
In the realm of Artificial Intelligence (AI), the importance of Explainable Artificial Intelligence (XAI) is increasingly recognized.
One notable single-instance XAI approach is counterfactual explanation, which aids users in comprehending a model's decisions.
This paper introduces a novel method for computing feature importance within the feature space of a black-box model.
arXiv Detail & Related papers (2024-05-31T08:26:53Z) - Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis [65.7968515029306]
We propose a novel Coarse-to-Fine Latent Diffusion (CFLD) method for Pose-Guided Person Image Synthesis (PGPIS).
A perception-refined decoder is designed to progressively refine a set of learnable queries and extract semantic understanding of person images as a coarse-grained prompt.
arXiv Detail & Related papers (2024-02-28T06:07:07Z) - Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated-image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
The goal is to find the commonality of real images and map them to a dense subspace in feature space, so that generated images, regardless of their generative model, are projected outside that subspace.
arXiv Detail & Related papers (2023-11-02T03:09:37Z) - Foiling Explanations in Deep Neural Networks [0.0]
This paper uncovers a troubling property of explanation methods for image-based DNNs.
We demonstrate how explanations may be arbitrarily manipulated through the use of evolution strategies.
Our novel algorithm is successfully able to manipulate an image in a manner imperceptible to the human eye.
arXiv Detail & Related papers (2022-11-27T15:29:39Z) - Combining Counterfactuals With Shapley Values To Explain Image Models [13.671174461441304]
We develop a pipeline to generate counterfactuals and estimate Shapley values.
We obtain contrastive and interpretable explanations with strong axiomatic guarantees.
arXiv Detail & Related papers (2022-06-14T18:23:58Z) - LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations [3.0036519884678894]
LIMEcraft allows a user to interactively select semantically consistent areas and thoroughly examine the prediction for the image instance.
Our method improves model safety by inspecting model fairness for image pieces that may indicate model bias.
arXiv Detail & Related papers (2021-11-15T21:40:34Z) - Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception [77.34726150561087]
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the ImageNet-C dataset and demonstrate how using perceptual distances in the surrogate explainer creates more coherent explanations for the distorted and reference images.
arXiv Detail & Related papers (2021-02-22T12:38:53Z) - This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - An Empirical Study of the Collapsing Problem in Semi-Supervised 2D Human Pose Estimation [80.02124918255059]
Semi-supervised learning aims to boost the accuracy of a model by exploring unlabeled images.
We learn two networks to mutually teach each other.
The more reliable predictions on easy images in each network are used to teach the other network to learn about the corresponding hard images.
arXiv Detail & Related papers (2020-11-25T03:29:52Z) - Explainable Deep Classification Models for Domain Generalization [94.43131722655617]
Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision.
Our training strategy enforces a periodic saliency-based feedback to encourage the model to focus on the image regions that directly correspond to the ground-truth object.
arXiv Detail & Related papers (2020-03-13T22:22:15Z)