A Test Statistic Estimation-based Approach for Establishing
Self-interpretable CNN-based Binary Classifiers
- URL: http://arxiv.org/abs/2303.06876v3
- Date: Tue, 2 Jan 2024 21:47:08 GMT
- Title: A Test Statistic Estimation-based Approach for Establishing
Self-interpretable CNN-based Binary Classifiers
- Authors: Sourya Sengupta and Mark A. Anastasio
- Abstract summary: Post-hoc interpretability methods have the limitation that they can produce plausible but different interpretations.
Unlike traditional post-hoc interpretability methods, the proposed method is self-interpretable and quantitative.
- Score: 7.424003880270276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpretability is highly desired for deep neural network-based classifiers,
especially when addressing high-stake decisions in medical imaging. Commonly
used post-hoc interpretability methods have the limitation that they can
produce plausible but different interpretations of a given model, leading to
ambiguity about which one to choose. To address this problem, a novel
decision-theory-inspired approach is investigated to establish a
self-interpretable model, given a pre-trained deep binary black-box medical
image classifier. This approach involves utilizing a self-interpretable
encoder-decoder model in conjunction with a single-layer fully connected
network with unity weights. The model is trained to estimate the test statistic
of the given trained black-box deep binary classifier while maintaining a similar
classification accuracy. The decoder output image, referred to as an equivalency map, is an
image that represents a transformed version of the to-be-classified image that,
when processed by the fixed fully connected layer, produces the same test
statistic value as the original classifier. The equivalency map provides a
visualization of the transformed image features that directly contribute to the
test statistic value and, moreover, permits quantification of their relative
contributions. Unlike traditional post-hoc interpretability methods, the
proposed method is self-interpretable and quantitative. Detailed quantitative and
qualitative analyses were performed on three different medical image
binary classification tasks.
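As a rough illustration of the architecture described above, the sketch below pairs an encoder-decoder with a fixed single-layer fully connected network with unity weights (i.e. an unweighted sum over the decoder output, the equivalency map) and trains it to match the test statistic of a frozen black-box classifier. This is a minimal sketch under assumptions, not the authors' released code: the layer sizes, the class name EquivalencyMapNet, and the choice of the pre-sigmoid logit as the black-box test statistic are all illustrative.

```python
# Minimal sketch (assumptions noted inline): encoder-decoder whose decoder
# output ("equivalency map") is reduced by a fixed unity-weight fully
# connected layer, trained to regress a frozen black-box test statistic.
import torch
import torch.nn as nn

class EquivalencyMapNet(nn.Module):            # hypothetical name
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # layer sizes are illustrative assumptions
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        emap = self.decoder(self.encoder(x))   # equivalency map produced by the decoder
        # A single fully connected layer with unity weights is an unweighted sum,
        # so each pixel's contribution to the test statistic is directly readable.
        t_hat = emap.flatten(1).sum(dim=1)
        return t_hat, emap

def train_step(model, blackbox, x, optimizer):
    """One optimization step: match the black-box test statistic
    (assumed here to be the pre-sigmoid logit of shape [B, 1])."""
    with torch.no_grad():
        t_target = blackbox(x).squeeze(1)      # frozen black-box classifier
    t_hat, _ = model(x)
    loss = nn.functional.mse_loss(t_hat, t_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the final layer has unity weights, the estimated test statistic is literally the sum of the equivalency-map pixels, which is what makes the per-pixel contributions quantifiable.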
Related papers
- A Bayesian Approach to Weakly-supervised Laparoscopic Image Segmentation [1.9639956888747314]
We study weakly-supervised laparoscopic image segmentation with sparse annotations.
We introduce a novel Bayesian deep learning approach designed to enhance both the accuracy and interpretability of the model's segmentation.
arXiv Detail & Related papers (2024-10-11T04:19:48Z) - Robust and Interpretable Medical Image Classifiers via Concept
Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - An Explainable Model-Agnostic Algorithm for CNN-based Biometrics
Verification [55.28171619580959]
This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting.
arXiv Detail & Related papers (2023-07-25T11:51:14Z) - Adversarial Sampling for Fairness Testing in Deep Neural Network [0.0]
We use adversarial sampling to test the fairness of a deep neural network's predictions across the different image classes in a given dataset.
We train our neural network model on the original images only, without training it on the perturbed or attacked images.
When we feed the adversarial samples to our model, it is able to predict the original category/class of the image that each adversarial sample belongs to.
arXiv Detail & Related papers (2023-03-06T03:55:37Z) - Inherently Interpretable Multi-Label Classification Using Class-Specific
Counterfactuals [9.485195366036292]
Interpretability is essential for machine learning algorithms in high-stakes application fields such as medical image analysis.
We propose Attri-Net, an inherently interpretable model for multi-label classification.
We find that Attri-Net produces high-quality multi-label explanations consistent with clinical knowledge.
arXiv Detail & Related papers (2023-03-01T13:32:55Z) - Sampling Based On Natural Image Statistics Improves Local Surrogate
Explainers [111.31448606885672]
Surrogate explainers are a popular post-hoc interpretability method to further understand how a model arrives at a prediction.
We propose two approaches to do so, namely (1) altering the method for sampling the local neighbourhood and (2) using perceptual metrics to convey some of the properties of the distribution of natural images.
arXiv Detail & Related papers (2022-08-08T08:10:13Z) - Disentangled representations: towards interpretation of sex
determination from hip bone [1.0775419935941009]
Saliency maps have become a popular method for making neural networks interpretable.
We propose a new paradigm for better interpretability.
We illustrate the relevance of this approach in the context of automatic sex determination from hip bones in forensic medicine.
arXiv Detail & Related papers (2021-12-17T10:07:05Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Uncertainty Quantification using Variational Inference for Biomedical Image Segmentation [0.0]
We use an encoder-decoder architecture based on variational inference techniques to segment brain tumour images.
We evaluate our work on the publicly available BRATS dataset using the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) as evaluation metrics (reference implementations of these two metrics are sketched after this list).
arXiv Detail & Related papers (2020-08-12T20:08:04Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
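For reference, the two evaluation metrics named in the uncertainty-quantification entry above (DSC and IoU) can be computed for binary segmentation masks as follows. This is a sketch of the standard definitions, not code from the cited paper.

```python
# Standard-definition sketch of the Dice Similarity Coefficient and
# Intersection over Union for binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2 * |P & T| / (|P| + |T|) for boolean masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |P & T| / |P | T| for boolean masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```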
This list is automatically generated from the titles and abstracts of the papers in this site.