Going Beyond Saliency Maps: Training Deep Models to Interpret Deep Models
- URL: http://arxiv.org/abs/2102.08239v1
- Date: Tue, 16 Feb 2021 15:57:37 GMT
- Title: Going Beyond Saliency Maps: Training Deep Models to Interpret Deep Models
- Authors: Zixuan Liu and Ehsan Adeli and Kilian M. Pohl and Qingyu Zhao
- Abstract summary: Interpretability is a critical factor in applying complex deep learning models to advance the understanding of brain disorders.
We propose to train simulator networks that can warp a given image to inject or remove patterns of the disease.
We apply our approach to interpreting classifiers trained on a synthetic dataset and two neuroimaging datasets to visualize the effects of Alzheimer's disease and alcohol use disorder.
- Score: 16.218680291606628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretability is a critical factor in applying complex deep learning
models to advance the understanding of brain disorders in neuroimaging studies.
To interpret the decision process of a trained classifier, existing techniques
typically rely on saliency maps to quantify the voxel-wise or feature-level
importance for classification through partial derivatives. Despite providing
some level of localization, these maps are not human-understandable from the
neuroscience perspective as they do not inform the specific meaning of the
alteration linked to the brain disorder. Inspired by the image-to-image
translation scheme, we propose to train simulator networks that can warp a
given image to inject or remove patterns of the disease. These networks are
trained such that the classifier produces consistently increased or decreased
prediction logits for the simulated images. Moreover, we propose to couple all
the simulators into a unified model based on conditional convolution. We
applied our approach to interpreting classifiers trained on a synthetic dataset
and two neuroimaging datasets to visualize the effects of Alzheimer's disease
and alcohol use disorder. Compared to the saliency maps generated by
baseline approaches, our simulations and visualizations based on the Jacobian
determinants of the warping field reveal meaningful and understandable patterns
related to the diseases.
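To make the training scheme concrete, here is a minimal, hypothetical PyTorch sketch of the idea: a small simulator predicts a displacement field, the warped image is pushed through the frozen classifier so its disease logit consistently increases, and the Jacobian determinant of the warp is used for visualization. All names (`Simulator`, `warp`, `jacobian_determinant`, the toy classifier) are illustrative; this is not the authors' implementation, which operates on 3D MRIs and couples the inject/remove simulators through conditional convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy simulator: predicts a dense 2-channel displacement field (dx, dy)
# for a 2D grayscale image. (The paper works with 3D brain MRIs.)
class Simulator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )

    def forward(self, x):
        return torch.tanh(self.net(x)) * 0.1  # keep displacements small

def warp(img, disp):
    """Resample `img` at locations shifted by `disp` (bilinear warping)."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2)
    return F.grid_sample(img, grid + disp.permute(0, 2, 3, 1),
                         align_corners=True)

def jacobian_determinant(disp):
    """Per-pixel Jacobian determinant of the map x -> x + disp(x) (2D case).
    Values > 1 indicate local expansion, values < 1 local shrinkage."""
    du_dx = F.pad(disp[:, 0, :, 1:] - disp[:, 0, :, :-1], (0, 1, 0, 0))
    dv_dx = F.pad(disp[:, 1, :, 1:] - disp[:, 1, :, :-1], (0, 1, 0, 0))
    du_dy = F.pad(disp[:, 0, 1:, :] - disp[:, 0, :-1, :], (0, 0, 0, 1))
    dv_dy = F.pad(disp[:, 1, 1:, :] - disp[:, 1, :-1, :], (0, 0, 0, 1))
    return (1 + du_dx) * (1 + dv_dy) - du_dy * dv_dx

# The classifier being interpreted is pretrained and kept frozen; here it is
# only a placeholder network producing a single disease logit.
classifier = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
for p in classifier.parameters():
    p.requires_grad_(False)

simulator = Simulator()
opt = torch.optim.Adam(simulator.parameters(), lr=1e-4)

def train_step(img, direction=+1.0, reg=0.1):
    """One update of the 'inject' simulator (direction=+1 pushes the logit up;
    a 'remove' simulator would use direction=-1)."""
    disp = simulator(img)
    logit = classifier(warp(img, disp))
    loss = -direction * logit.mean() + reg * disp.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return jacobian_determinant(disp.detach())  # map used for visualization
```

The regularizer on the displacement field is a simple stand-in for whatever smoothness constraints the actual method uses; the point of the sketch is only the coupling between the warp and the classifier's logits.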
Related papers
- Harnessing Intra-group Variations Via a Population-Level Context for Pathology Detection [17.87825422578005]
This study introduces the notion of a population-level context for pathology detection and employs a graph theoretic approach to model and incorporate it into the latent code of an autoencoder.
PopuSense seeks to capture additional intra-group variations inherent in biomedical data that a local or global context of the convolutional model might miss or smooth out.
arXiv Detail & Related papers (2024-03-04T18:44:30Z)
- Deep Variational Lesion-Deficit Mapping [0.3914676152740142]
We introduce a comprehensive framework for lesion-deficit model comparison.
We show that our model outperforms established methods by a substantial margin across all simulation scenarios.
Our analysis justifies the widespread adoption of this approach.
arXiv Detail & Related papers (2023-05-27T13:49:35Z)
- Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution [51.333918985340425]
We develop a novel attention mesh convolution model to predict cortical gyro-sulcal segmentation maps on individual brains.
Experiments show that our model outperforms other state-of-the-art models.
arXiv Detail & Related papers (2022-05-21T14:08:53Z)
- Feature visualization for convolutional neural network models trained on neuroimaging data [0.0]
We show, for the first time, results obtained using feature visualization of convolutional neural networks (CNNs).
We have trained CNNs for different tasks including sex classification and artificial lesion classification based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task.
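Feature visualization in this sense usually means activation maximization: optimizing a synthetic input so that a chosen output unit of the trained network responds strongly. A generic, hypothetical sketch follows (not this paper's exact setup or regularizers):

```python
import torch

def activation_maximization(model, unit, shape=(1, 1, 64, 64),
                            steps=200, lr=0.05, weight_decay=1e-3):
    """Optimize a random input so that output `unit` of `model` is maximal.
    The L2 penalty is a stand-in for the stronger regularizers (jitter,
    blurring, etc.) commonly used to obtain interpretable images."""
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, unit] + weight_decay * x.pow(2).mean()
        loss.backward()
        opt.step()
    return x.detach()
```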
arXiv Detail & Related papers (2022-03-24T15:24:38Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
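For reference, the plain backpropagation saliency such methods build on can be written in a few lines; the sketch below is only the vanilla input-gradient baseline, not the CAMERAS method itself:

```python
import torch

def input_gradient_saliency(model, image, target_class):
    """|d logit_target / d pixel| for a single image of shape (C, H, W).
    Plain baseline only; higher-fidelity methods such as CAMERAS refine
    this backpropagation signal rather than use it directly."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    logits[0, target_class].backward()
    return image.grad.abs().amax(dim=0)  # collapse channels to an H x W map
```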
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene on, and show that it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
- Abstracting Deep Neural Networks into Concept Graphs for Concept Level Interpretability [0.39635467316436124]
We attempt to understand the behavior of trained models that perform image processing tasks in the medical domain by building a graphical representation of the concepts they learn.
We show the application of our proposed implementation on two biomedical problems - brain tumor segmentation and fundus image classification.
arXiv Detail & Related papers (2020-08-14T16:34:32Z)
- Interpretation of Brain Morphology in Association to Alzheimer's Disease Dementia Classification Using Graph Convolutional Networks on Triangulated Meshes [6.088308871328403]
We propose a mesh-based technique to aid in the classification of Alzheimer's disease dementia (ADD) using mesh representations of the cortex and subcortical structures.
We outperform other machine learning methods with a 96.35% testing accuracy for the ADD vs. healthy control problem.
arXiv Detail & Related papers (2020-08-14T01:10:39Z)