Contrastive Counterfactual Visual Explanations With Overdetermination
- URL: http://arxiv.org/abs/2106.14556v1
- Date: Mon, 28 Jun 2021 10:24:17 GMT
- Title: Contrastive Counterfactual Visual Explanations With Overdetermination
- Authors: Adam White, Kwun Ho Ngan, James Phelan, Saman Sadeghi Afgeh, Kevin
Ryan, Constantino Carlos Reyes-Aldasoro, Artur d'Avila Garcez
- Abstract summary: CLEAR Image is based on the view that a satisfactory explanation should be contrastive, counterfactual and measurable.
CLEAR Image was successfully applied to a medical imaging case study where it outperformed methods such as Grad-CAM and LIME by an average of 27%.
- Score: 7.8752926274677435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A novel explainable AI method called CLEAR Image is introduced in this paper.
CLEAR Image is based on the view that a satisfactory explanation should be
contrastive, counterfactual and measurable. CLEAR Image explains an image's
classification probability by contrasting the image with a corresponding image
generated automatically via adversarial learning. This enables both salient
segmentation and perturbations that faithfully determine each segment's
importance. CLEAR Image was successfully applied to a medical imaging case
study where it outperformed methods such as Grad-CAM and LIME by an average of
27% using a novel pointing game metric. CLEAR Image excels in identifying cases
of "causal overdetermination" where there are multiple patches in an image, any
one of which is sufficient by itself to cause the classification probability to
be close to one.
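The segment-swap idea in the abstract can be sketched in a few lines. The sketch below is a simplified illustration, not the authors' implementation: it assumes a caller-supplied `classify` function, a precomputed segmentation, and an already-generated contrast image (in CLEAR Image this comes from adversarial learning), and it scores each segment by swapping its pixels for those of the contrast image and recording the drop in classification probability.

```python
def segment_importance(image, contrast_image, segments, classify):
    """Estimate each segment's importance by replacing its pixels with
    those of a contrast image and measuring the resulting drop in the
    classifier's output probability (a simplified sketch of the idea)."""
    base_prob = classify(image)
    scores = {}
    for seg_id in set(segments):
        # swap only the pixels belonging to this segment
        perturbed = [
            c if s == seg_id else x
            for x, c, s in zip(image, contrast_image, segments)
        ]
        scores[seg_id] = base_prob - classify(perturbed)
    return scores

# Toy example on a flattened 4-pixel "image": the classifier is just
# the mean intensity, so only the bright segment should matter.
image = [1.0, 1.0, 0.0, 0.0]
contrast = [0.0, 0.0, 0.0, 0.0]
segments = [1, 1, 0, 0]
classify = lambda img: sum(img) / len(img)
scores = segment_importance(image, contrast, segments, classify)
```

With the toy mean-intensity "classifier", swapping segment 1 removes all the bright pixels and the score falls by 0.5, while swapping segment 0 changes nothing; CLEAR Image builds on scores of this kind to produce its contrastive explanation.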
Related papers
- Introspective Deep Metric Learning [91.47907685364036]
We propose an introspective deep metric learning framework for uncertainty-aware comparisons of images.
The proposed IDML framework improves the performance of deep metric learning through uncertainty modeling.
arXiv Detail & Related papers (2023-09-11T16:21:13Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Introspective Deep Metric Learning for Image Retrieval [80.29866561553483]
We argue that a good similarity model should consider the semantic discrepancies with caution to better deal with ambiguous images for more robust training.
We propose to represent an image using not only a semantic embedding but also an accompanying uncertainty embedding, which describes the semantic characteristics and ambiguity of an image, respectively.
The proposed IDML framework improves the performance of deep metric learning through uncertainty modeling and attains state-of-the-art results on the widely used CUB-200-2011, Cars196, and Stanford Online Products datasets.
arXiv Detail & Related papers (2022-05-09T17:51:44Z)
- Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z)
- Amicable Aid: Perturbing Images to Improve Classification Performance [20.9291591835171]
Adversarial perturbations of images, crafted to attack deep image classification models, pose serious security concerns in practice.
We show that by taking the opposite search direction of perturbation, an image can be modified to yield higher classification confidence.
We investigate the universal amicable aid, i.e., a fixed perturbation that can be applied to multiple images to improve their classification results.
arXiv Detail & Related papers (2021-12-09T06:16:08Z)
- Weakly-supervised Generative Adversarial Networks for medical image classification [1.479639149658596]
We propose a novel medical image classification algorithm called Weakly-Supervised Generative Adversarial Networks (WSGAN).
WSGAN only uses a small number of real images without labels to generate fake images or mask images to enlarge the sample size of the training set.
We show that WSGAN can obtain relatively high learning performance by using few labeled and unlabeled data.
arXiv Detail & Related papers (2021-11-29T15:38:48Z)
- Virus-MNIST: Machine Learning Baseline Calculations for Image Classification [0.0]
The Virus-MNIST data set is a collection of thumbnail images that is similar in style to the ubiquitous MNIST hand-written digits.
It is poised to take on a role in benchmarking progress in training virus classification models.
arXiv Detail & Related papers (2021-11-03T17:44:23Z)
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of IR models, degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)
- Categorical Relation-Preserving Contrastive Knowledge Distillation for Medical Image Classification [75.27973258196934]
We propose a novel Categorical Relation-preserving Contrastive Knowledge Distillation (CRCKD) algorithm, which takes the commonly used mean-teacher model as the supervisor.
With this regularization, the feature distribution of the student model shows higher intra-class similarity and inter-class variance.
With the contribution of the CCD and CRP, our CRCKD algorithm can distill the relational knowledge more comprehensively.
arXiv Detail & Related papers (2021-07-07T13:56:38Z)
- Contrastive Semi-Supervised Learning for 2D Medical Image Segmentation [16.517086214275654]
We present a novel semi-supervised 2D medical segmentation solution that applies Contrastive Learning (CL) on image patches, instead of full images.
These patches are meaningfully constructed using the semantic information of different classes obtained via pseudo labeling.
We also propose a novel consistency regularization scheme, which works in synergy with contrastive learning.
arXiv Detail & Related papers (2021-06-12T15:43:24Z)
- Grounded and Controllable Image Completion by Incorporating Lexical Semantics [111.47374576372813]
Lexical Semantic Image Completion (LSIC) may have potential applications in art, design, and heritage conservation.
We advocate generating results faithful to both visual and lexical semantic context.
One major challenge for LSIC comes from modeling and aligning the structure of visual-semantic context.
arXiv Detail & Related papers (2020-02-29T16:54:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences of its use.