Explaining Predictions of Deep Neural Classifier via Activation Analysis
- URL: http://arxiv.org/abs/2012.02248v1
- Date: Thu, 3 Dec 2020 20:36:19 GMT
- Title: Explaining Predictions of Deep Neural Classifier via Activation Analysis
- Authors: Martin Stano, Wanda Benesova, Lukas Samuel Martak
- Abstract summary: We present a novel approach to explain and support an interpretation of the decision-making process to a human expert operating a deep learning system based on a Convolutional Neural Network (CNN).
Our results indicate that our method is capable of detecting distinct prediction strategies that enable us to identify the most similar predictions from an existing atlas.
- Score: 0.11470070927586014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many practical applications, deep neural networks have typically
been deployed to operate as black-box predictors. Despite the large body of work
on interpretability and the high demand for reliability in these systems, they
typically still require a human actor in the loop to validate the
decisions and handle unpredictable failures and unexpected corner cases. This
is true in particular for failure-critical application domains, such as medical
diagnosis. We present a novel approach to explain and support an interpretation
of the decision-making process to a human expert operating a deep learning
system based on a Convolutional Neural Network (CNN). By modeling activation
statistics on selected layers of a trained CNN via Gaussian Mixture Models
(GMM), we develop a novel perceptual code in binary vector space that describes
how the input sample is processed by the CNN. By measuring distances between
pairs of samples in this perceptual encoding space, for any new input sample,
we can now retrieve a set of most perceptually similar and dissimilar samples
from an existing atlas of labeled samples, to support and clarify the decision
made by the CNN model. Possible uses of this approach include for example
Computer-Aided Diagnosis (CAD) systems working with medical imaging data, such
as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) scans. We
demonstrate the viability of our method in the domain of medical imaging for
patient condition diagnosis, as the proposed decision explanation method via
similar ground truth domain examples (e.g. from existing diagnosis archives)
will be interpretable by the operating medical personnel. Our results indicate
that our method is capable of detecting distinct prediction strategies that
enable us to identify the most similar predictions from an existing atlas.
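A minimal sketch of the pipeline described above, assuming activation vectors have already been extracted from a chosen CNN layer; the component count, binarization threshold, and function names are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_activation_gmm(activations, n_components=8, seed=0):
    """Fit a GMM to per-sample activation vectors from one CNN layer."""
    return GaussianMixture(n_components=n_components,
                           random_state=seed).fit(activations)

def binary_code(gmm, activations, threshold=0.5):
    """Binarize GMM component responsibilities into a binary perceptual code."""
    resp = gmm.predict_proba(activations)  # shape: (n_samples, n_components)
    return (resp > threshold).astype(np.uint8)

def retrieve(query_code, atlas_codes, k=5):
    """Return indices of the k atlas samples closest in Hamming distance."""
    dists = np.count_nonzero(atlas_codes != query_code, axis=1)
    return np.argsort(dists)[:k]
```

The retrieved indices map back to labeled atlas samples that can be shown to the expert alongside the CNN's prediction; per-layer codes could be concatenated to cover several selected layers.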
Related papers
- Analyzing the Effect of $k$-Space Features in MRI Classification Models [0.0]
We have developed an explainable AI methodology tailored for medical imaging.
We employ a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains.
This approach not only enhances early training efficiency but also deepens our understanding of how additional features impact the model predictions.
arXiv Detail & Related papers (2024-09-20T15:43:26Z)
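As a hedged illustration of the dual-domain input idea, the sketch below pairs an image-domain slice with its log-magnitude 2D Fourier spectrum as a second CNN channel; the normalization and channel layout are assumptions, not the paper's exact pipeline:

```python
import numpy as np

def dual_domain_input(mri_slice):
    """Stack an MRI slice with its log-magnitude k-space spectrum
    as a two-channel CNN input (illustrative preprocessing only)."""
    spectrum = np.fft.fftshift(np.fft.fft2(mri_slice))
    log_mag = np.log1p(np.abs(spectrum))
    # Normalize each channel to [0, 1] before stacking.
    img = (mri_slice - mri_slice.min()) / (np.ptp(mri_slice) + 1e-8)
    freq = (log_mag - log_mag.min()) / (np.ptp(log_mag) + 1e-8)
    return np.stack([img, freq], axis=0)  # shape: (2, H, W)
```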
- Out-of-distribution Detection in Medical Image Analysis: A survey [12.778646136644399]
Computer-aided diagnostics has benefited from the development of deep learning-based computer vision techniques.
Traditional supervised deep learning methods assume that the test sample is drawn from the same distribution as the training data.
It is possible to encounter out-of-distribution samples in real-world clinical scenarios, which may cause silent failure in deep learning-based medical image analysis tasks.
arXiv Detail & Related papers (2024-04-28T18:51:32Z)
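For context, a common out-of-distribution baseline is the maximum softmax probability (MSP) score; the sketch below is that standard heuristic, not a method taken from the survey:

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability; low values suggest the input
    may be out-of-distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def is_ood(logits, threshold=0.5):
    """Flag samples whose confidence falls below a validation-tuned threshold."""
    return msp_score(logits) < threshold
```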
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
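A rough sketch of one way a DWT-based encoding could look, using the PyWavelets package; the subband-energy features are an illustrative stand-in, not the paper's actual encoding:

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(radiograph, wavelet="haar", level=2):
    """Encode an image by the mean absolute energy of its multi-level
    2D DWT subbands (illustrative encoding only)."""
    coeffs = pywt.wavedec2(radiograph, wavelet=wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]          # approximation subband
    for cH, cV, cD in coeffs[1:]:                 # detail subbands per level
        feats.extend(np.mean(np.abs(c)) for c in (cH, cV, cD))
    return np.asarray(feats)
```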
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
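The cross reconstruction idea can be hinted at with a small PyTorch sketch: each view is reconstructed from the other view's latent code, pushing the latents to carry the shared information; the encoder and decoder modules are hypothetical, and the adversarial and label-guided parts are omitted:

```python
import torch.nn.functional as F

def cross_reconstruction_loss(enc_a, enc_b, dec_a, dec_b, view_a, view_b):
    """Reconstruct each view from the other view's latent code
    (hypothetical modules; adversarial training omitted)."""
    z_a, z_b = enc_a(view_a), enc_b(view_b)
    return F.mse_loss(dec_a(z_b), view_a) + F.mse_loss(dec_b(z_a), view_b)
```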
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
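One common way to obtain the per-voxel uncertainty such a refinement step could consume is predictive entropy under Monte-Carlo dropout; the sketch below shows that generic estimator, which may differ from the paper's exact formulation:

```python
import torch

@torch.no_grad()
def voxelwise_uncertainty(model, volume, n_samples=10):
    """Predictive entropy from Monte-Carlo dropout (generic estimator)."""
    model.train()  # keep dropout layers active at inference time
    probs = torch.stack([torch.softmax(model(volume), dim=1)
                         for _ in range(n_samples)]).mean(dim=0)
    return -(probs * torch.log(probs + 1e-8)).sum(dim=1)  # over class dim
```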
- Domain Shift in Computer Vision models for MRI data analysis: An Overview [64.69150970967524]
Machine learning and computer vision methods are showing good performance in medical imagery analysis.
Yet only a few applications are now in clinical use.
Poor transferability of the models to data from different sources or acquisition domains is one of the reasons for this.
arXiv Detail & Related papers (2020-10-14T16:34:21Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
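Plain activation maximization, the core idea the paper couples with a CycleGAN, optimizes an input by gradient ascent on a class logit; the CycleGAN component that keeps the result realistic is omitted from this sketch, and the input shape and optimizer settings are assumptions:

```python
import torch

def activation_maximization(model, target_class,
                            shape=(1, 1, 224, 224), steps=200, lr=0.1):
    """Gradient-ascent activation maximization on one class logit."""
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, target_class]  # ascend on the target logit
        loss.backward()
        opt.step()
    return x.detach()
```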
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- Uncertainty Quantification using Variational Inference for Biomedical Image Segmentation [0.0]
We use an encoder-decoder architecture based on variational inference techniques for segmenting brain tumour images.
We evaluate our work on the publicly available BRATS dataset using Dice Similarity Coefficient (DSC) and Intersection Over Union (IOU) as the evaluation metrics.
arXiv Detail & Related papers (2020-08-12T20:08:04Z)
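The two metrics named above are standard and easy to state; for binary masks they reduce to:

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-8):
    """Dice Similarity Coefficient and Intersection over Union
    for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (np.logical_or(pred, target).sum() + eps)
    return dice, iou
```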
- An Investigation of Interpretability Techniques for Deep Learning in Predictive Process Analytics [2.162419921663162]
This paper explores interpretability techniques for two of the most successful learning algorithms in medical decision-making literature: deep neural networks and random forests.
We learn models that try to predict a patient's cancer type, given their medical activity records.
We see certain distinct features used for predictions that provide useful insights about the type of cancer, along with features that do not generalize well.
arXiv Detail & Related papers (2020-02-21T09:14:34Z)