Improving Explainability of Image Classification in Scenarios with Class
Overlap: Application to COVID-19 and Pneumonia
- URL: http://arxiv.org/abs/2008.02866v3
- Date: Sun, 16 Aug 2020 01:33:25 GMT
- Authors: Edward Verenich, Alvaro Velasquez, Nazar Khan and Faraz Hussain
- Abstract summary: Trust in predictions made by machine learning models is increased if the model generalizes well on previously unseen samples.
We propose a method that enhances the explainability of image classifications through better localization by mitigating the model uncertainty induced by class overlap.
Our method is particularly promising in real-world class overlap scenarios, such as COVID-19 and pneumonia, where expertly labeled data for localization is not readily available.
- Score: 7.372797734096181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trust in predictions made by machine learning models is increased if the
model generalizes well on previously unseen samples and when inference is
accompanied by cogent explanations of the reasoning behind predictions. In the
image classification domain, generalization can be assessed through accuracy,
sensitivity, and specificity. Explainability can be assessed by how well the
model localizes the object of interest within an image. However, both
generalization and explainability through localization are degraded in
scenarios with significant overlap between classes. We propose a method based
on binary expert networks that enhances the explainability of image
classifications through better localization by mitigating the model uncertainty
induced by class overlap. Our technique performs discriminative localization on
images that contain features with significant class overlap, without explicitly
training for localization. Our method is particularly promising in real-world
class overlap scenarios, such as COVID-19 and pneumonia, where expertly labeled
data for localization is not readily available. This can be useful for early,
rapid, and trustworthy screening for COVID-19.
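The "discriminative localization ... without explicitly training for localization" described in the abstract is the behaviour typically obtained with class activation maps (CAM) over a global-average-pooled backbone. Below is a minimal sketch of one binary expert network with CAM-style localization; the ResNet-18 backbone, layer choices, and single-logit one-vs-rest head are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch: a binary "expert" network for one class (e.g. COVID-19 vs. rest)
# whose global-average-pooled features admit CAM-style localization
# without any localization labels. Backbone and head are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class BinaryExpert(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # assumption: ResNet-18
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.fc = nn.Linear(512, 1)                # single logit: class vs. rest

    def forward(self, x):
        fmap = self.features(x)                    # (B, 512, h, w) conv features
        pooled = F.adaptive_avg_pool2d(fmap, 1).flatten(1)
        return self.fc(pooled), fmap

    def cam(self, fmap):
        # CAM: weight each feature map by its FC weight for the positive class,
        # then rectify and normalize to [0, 1] to obtain a heatmap.
        w = self.fc.weight.view(1, -1, 1, 1)       # (1, 512, 1, 1)
        heat = F.relu((fmap * w).sum(dim=1))       # (B, h, w)
        return heat / (heat.amax(dim=(1, 2), keepdim=True) + 1e-8)

expert = BinaryExpert()
logit, fmap = expert(torch.randn(1, 3, 224, 224))
heatmap = expert.cam(fmap)  # upsample to input size to overlay on the image
```

Under the paper's one-vs-rest scheme, one such expert would be trained per overlapping class (e.g. COVID-19, pneumonia), so that each expert's activation map localizes evidence for its own class rather than features shared across classes.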
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework for reinforcing classification models using counterfactual images generated under language guidance.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals [4.384272169863716]
Interpretability is crucial for machine learning algorithms in high-stakes medical applications.
Attri-Net is an inherently interpretable model for multi-label classification that provides local and global explanations.
arXiv Detail & Related papers (2024-06-08T13:52:02Z)
- Counterfactual Image Generation for adversarially robust and interpretable Classifiers [1.3859669037499769]
We propose a unified framework leveraging image-to-image translation Generative Adversarial Networks (GANs) to produce counterfactual samples.
This is achieved by combining the classifier and discriminator into a single model that attributes real images to their respective classes and flags generated images as "fake".
We show that the model exhibits improved robustness to adversarial attacks and that the discriminator's "fakeness" value serves as an uncertainty measure for the predictions.
arXiv Detail & Related papers (2023-10-01T18:50:29Z)
- Variational Classification [51.2541371924591]
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- Text-to-Image Diffusion Models are Zero-Shot Classifiers [8.26990105697146]
We investigate text-to-image diffusion models by proposing a method for evaluating them as zero-shot classifiers.
We apply our method to Stable Diffusion and Imagen, using it to probe fine-grained aspects of the models' knowledge.
They perform competitively with CLIP on a wide range of zero-shot image classification datasets (a minimal sketch of the underlying scoring rule is given after this list).
arXiv Detail & Related papers (2023-03-27T14:15:17Z)
- Inherently Interpretable Multi-Label Classification Using Class-Specific Counterfactuals [9.485195366036292]
Interpretability is essential for machine learning algorithms in high-stakes application fields such as medical image analysis.
We propose Attri-Net, an inherently interpretable model for multi-label classification.
We find that Attri-Net produces high-quality multi-label explanations consistent with clinical knowledge.
arXiv Detail & Related papers (2023-03-01T13:32:55Z)
- Learning disentangled representations for explainable chest X-ray classification using Dirichlet VAEs [68.73427163074015]
This study explores the use of the Dirichlet Variational Autoencoder (DirVAE) for learning disentangled latent representations of chest X-ray (CXR) images.
The predictive capacity of multi-modal latent representations learned by DirVAE models is investigated through an auxiliary multi-label classification task.
arXiv Detail & Related papers (2023-02-06T18:10:08Z)
- Sampling Based On Natural Image Statistics Improves Local Surrogate Explainers [111.31448606885672]
Surrogate explainers are a popular post-hoc interpretability method to further understand how a model arrives at a prediction.
We propose two approaches to incorporate the statistics of natural images, namely (1) altering the method for sampling the local neighbourhood and (2) using perceptual metrics to convey some of the properties of the distribution of natural images.
arXiv Detail & Related papers (2022-08-08T08:10:13Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification aims to train models for new classes using only a limited number of labeled examples.
We propose a metric-learning-based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Explainable Deep Classification Models for Domain Generalization [94.43131722655617]
Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision.
Our training strategy enforces a periodic saliency-based feedback to encourage the model to focus on the image regions that directly correspond to the ground-truth object.
arXiv Detail & Related papers (2020-03-13T22:22:15Z)
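As referenced in the "Text-to-Image Diffusion Models are Zero-Shot Classifiers" entry above, the standard way to turn a text-conditioned diffusion model into a zero-shot classifier is to score each candidate class prompt by the model's denoising error on the image and predict the class with the lowest error. The sketch below illustrates that scoring rule under assumed stand-ins: `eps_model` (a noise-prediction network) and `encode_prompt` (a text encoder) are hypothetical, and this is not the paper's exact evaluation protocol.

```python
# Hedged sketch: diffusion-based zero-shot classification. Each class prompt is
# scored by the expected denoising loss of the diffusion model conditioned on
# that prompt; the lowest-loss prompt wins. `eps_model`, `encode_prompt`, and
# `alphas_cumprod` are hypothetical stand-ins for a real pipeline.
import torch

@torch.no_grad()
def zero_shot_classify(image_latent, prompts, eps_model, encode_prompt,
                       alphas_cumprod, n_samples=32):
    losses = []
    for prompt in prompts:
        cond = encode_prompt(prompt)                    # text conditioning
        total = 0.0
        for _ in range(n_samples):
            t = torch.randint(0, len(alphas_cumprod), (1,))
            a = alphas_cumprod[t].view(1, 1, 1, 1)
            noise = torch.randn_like(image_latent)
            # Forward diffusion: noise the image latent to timestep t.
            x_t = a.sqrt() * image_latent + (1 - a).sqrt() * noise
            eps_hat = eps_model(x_t, t, cond)           # predicted noise
            total += torch.mean((eps_hat - noise) ** 2).item()
        losses.append(total / n_samples)
    # Class whose prompt best explains the image under the diffusion model.
    return min(range(len(prompts)), key=lambda i: losses[i])
```

In practice the timesteps are often stratified rather than sampled uniformly, and the same noise samples are typically reused across prompts to reduce the variance of the comparison.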
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.