Improving the Interpretability of fMRI Decoding using Deep Neural
Networks and Adversarial Robustness
- URL: http://arxiv.org/abs/2004.11114v3
- Date: Thu, 17 Dec 2020 16:01:57 GMT
- Title: Improving the Interpretability of fMRI Decoding using Deep Neural
Networks and Adversarial Robustness
- Authors: Patrick McClure, Dustin Moraczewski, Ka Chun Lam, Adam Thomas,
Francisco Pereira
- Abstract summary: A saliency map is a common approach for producing interpretable visualizations of the relative importance of input features for a prediction.
In this paper, we review a variety of methods for producing gradient-based saliency maps, and present a new adversarial training method we developed to make DNNs robust to input noise.
- Score: 1.254120224317171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are being increasingly used to make predictions
from functional magnetic resonance imaging (fMRI) data. However, they are
widely seen as uninterpretable "black boxes", as it can be difficult to
discover what input information is used by the DNN in the process, something
important in both cognitive neuroscience and clinical applications. A saliency
map is a common approach for producing interpretable visualizations of the
relative importance of input features for a prediction. However, methods for
creating these maps often fail, either because DNNs are sensitive to input noise or
because the methods focus too much on the input and too little on the model. It is also
challenging to evaluate how well saliency maps correspond to the truly relevant
input information, as ground truth is not always available. In this paper, we
review a variety of methods for producing gradient-based saliency maps, and
present a new adversarial training method we developed to make DNNs robust to
input noise, with the goal of improving interpretability. We introduce two
quantitative evaluation procedures for saliency map methods in fMRI, applicable
whenever a DNN or linear model is being trained to decode some information from
imaging data. We evaluate the procedures using a synthetic dataset where the
complex activation structure is known, and on saliency maps produced for DNN
and linear models for task decoding in the Human Connectome Project (HCP)
dataset. Our key finding is that saliency maps produced with different methods
vary widely in interpretability, in both synthetic and HCP fMRI data.
Strikingly, even when DNN and linear models decode at comparable levels of
performance, DNN saliency maps score higher on interpretability than linear
model saliency maps (derived via weights or gradient). Finally, saliency maps
produced with our adversarial training method outperform those from other
methods.
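The abstract names two ingredients without giving code: gradient-based saliency maps and training that makes the DNN robust to input noise. The sketch below is a minimal PyTorch illustration of both under stated assumptions; the function names, the use of additive Gaussian noise, and the hyperparameters are placeholders, not the authors' implementation.

```python
import torch

def gradient_saliency(model, x, target_class):
    """Vanilla gradient saliency: absolute gradient of the target-class logit
    with respect to the input (illustrative sketch, not the paper's code)."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                       # expected shape: (1, n_classes)
    logits[0, target_class].backward()      # d(logit) / d(input)
    return x.grad.detach().abs().squeeze(0)

def noisy_training_step(model, optimizer, loss_fn, x, y, sigma=0.1):
    """One training step under additive Gaussian input noise -- a simple stand-in
    for the noise-robust adversarial training the abstract refers to."""
    optimizer.zero_grad()
    loss = loss_fn(model(x + sigma * torch.randn_like(x)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, x would be a flattened or volumetric fMRI input with a leading batch dimension, and the returned gradient magnitudes would be reshaped back into voxel space to visualize the saliency map.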
Related papers
- SCAAT: Improving Neural Network Interpretability via Saliency
Constrained Adaptive Adversarial Training [10.716021768803433]
A saliency map is a common form of explanation that illustrates feature attributions as a heatmap.
We propose a model-agnostic learning method, Saliency Constrained Adaptive Adversarial Training (SCAAT), to improve the quality of such DNN explanations.
arXiv Detail & Related papers (2023-11-09T04:48:38Z) - Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, Label Deconvolution (LD), which alleviates learning bias via a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - DynDepNet: Learning Time-Varying Dependency Structures from fMRI Data
via Dynamic Graph Structure Learning [58.94034282469377]
We propose DynDepNet, a novel method for learning the optimal time-varying dependency structure of fMRI data induced by downstream prediction tasks.
Experiments on real-world fMRI datasets, for the task of sex classification, demonstrate that DynDepNet achieves state-of-the-art results.
arXiv Detail & Related papers (2022-09-27T16:32:11Z) - Generalizing Neural Networks by Reflecting Deviating Data in Production [15.498447555957773]
We present a runtime approach that mitigates DNN mis-predictions caused by unexpected runtime inputs to the DNN.
We use a distribution analyzer based on the distance metric learned by a Siamese network to identify "unseen" semantically-preserving inputs.
Our approach transforms those unexpected inputs into inputs from the training set that are identified as having similar semantics.
arXiv Detail & Related papers (2021-10-06T13:05:45Z) - Topological Measurement of Deep Neural Networks Using Persistent
Homology [0.7919213739992464]
The inner representations of deep neural networks (DNNs) are difficult to decipher.
Persistent homology (PH) is employed to investigate the complexity of trained DNNs.
arXiv Detail & Related papers (2021-06-06T03:06:15Z) - AxonNet: A self-supervised Deep Neural Network for Intravoxel Structure
Estimation from DW-MRI [0.12183405753834559]
We show that deep neural networks (DNNs) have the potential to extract information from diffusion-weighted signals to reconstruct cerebral tracts.
We present two DNN models: one that estimates the axonal structure at the voxel level and another that calculates the structure of the central voxel.
arXiv Detail & Related papers (2021-03-19T20:11:03Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - MFPP: Morphological Fragmental Perturbation Pyramid for Black-Box Model
Explanations [7.051974163915314]
We propose a Morphological Fragmental Perturbation Pyramid (MFPP) method to address the explainable AI problem.
In the MFPP method, we divide the input image into multi-scale fragments and randomly mask out fragments as perturbations to generate a saliency map (see the minimal sketch after this list).
Compared with existing input-sampling perturbation methods, the pyramid-structured fragments prove more effective.
arXiv Detail & Related papers (2020-06-04T06:13:40Z) - Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z) - Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
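As referenced in the MFPP entry above, the following is a minimal sketch of the fragment-masking idea: score many randomly masked copies of the input at several fragment scales and credit the pixels that were visible in high-scoring copies. The function name, the fragment sizes, and the assumption of a 2-D input with a scalar score_fn are illustrative assumptions, not the MFPP authors' implementation.

```python
import numpy as np

def fragment_masking_saliency(score_fn, image, patch_sizes=(4, 8, 16),
                              n_masks=200, p_keep=0.5, seed=0):
    """Multi-scale random-masking saliency in the spirit of the MFPP entry above.
    Assumes `image` is a 2-D array and `score_fn` returns a scalar class score."""
    H, W = image.shape
    rng = np.random.default_rng(seed)
    saliency = np.zeros((H, W))
    coverage = np.zeros((H, W))
    for patch in patch_sizes:                      # the "pyramid" of fragment scales
        gh, gw = (H + patch - 1) // patch, (W + patch - 1) // patch
        for _ in range(n_masks):
            grid = rng.random((gh, gw)) < p_keep   # keep/drop decision per fragment
            mask = np.kron(grid, np.ones((patch, patch)))[:H, :W]
            saliency += score_fn(image * mask) * mask   # credit visible pixels by score
            coverage += mask
    return saliency / np.maximum(coverage, 1e-8)   # normalize by how often each pixel was visible
```

Here score_fn would wrap a trained classifier and return its probability or logit for the class being explained.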