Counterfactual Explanation of Brain Activity Classifiers using
Image-to-Image Transfer by Generative Adversarial Network
- URL: http://arxiv.org/abs/2110.14927v1
- Date: Thu, 28 Oct 2021 07:21:12 GMT
- Title: Counterfactual Explanation of Brain Activity Classifiers using
Image-to-Image Transfer by Generative Adversarial Network
- Authors: Teppei Matsui, Masato Taki, Trung Quang Pham, Junichi Chikazoe, Koji
Jimura
- Abstract summary: Deep neural networks (DNNs) can accurately decode task-related information from brain activations.
One promising approach for explaining such a black-box system is counterfactual explanation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep neural networks (DNNs) can accurately decode task-related information
from brain activations. However, because of their nonlinearity, the decisions
DNNs make are difficult to interpret. One promising approach for explaining
such a black-box system is counterfactual
explanation. In this framework, the behavior of a black-box system is explained
by comparing real data with realistic synthetic data generated specifically so
that the black-box system outputs a counterfactual outcome. Here we
introduce a novel generative DNN (counterfactual activation generator, CAG)
that can provide counterfactual explanations for DNN-based classifiers of brain
activations. Importantly, CAG can simultaneously handle image transformation
among multiple classes associated with different behavioral tasks. Using CAG,
we demonstrated counterfactual explanation of DNN-based classifiers that
learned to discriminate brain activations of seven behavioral tasks.
Furthermore, by iteratively applying CAG, we were able to enhance and extract
subtle spatial patterns of brain activity that affected the classifier's
decisions. Together, these results demonstrate that counterfactual explanation
based on image-to-image transformation is a promising approach for
understanding and extending the current applications of DNNs in fMRI analyses.
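The abstract is concrete enough to sketch in code. Below is a minimal, hypothetical illustration of the setup it describes: a StarGAN-style generator conditioned on a target task label, applied iteratively to enhance the pattern that sways the classifier. Every name, layer size, and the toy 64x64 input are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a CAG-like counterfactual generator. This is an
# illustration of the idea described in the abstract, not the paper's code.
import torch
import torch.nn as nn

N_TASKS = 7  # the classifiers in the paper discriminate seven behavioral tasks

class CounterfactualGenerator(nn.Module):
    """StarGAN-style generator: maps an activation map plus a target task
    label to a realistic synthetic map that the classifier should assign
    to the target class."""
    def __init__(self, channels=1, n_classes=N_TASKS):
        super().__init__()
        self.n_classes = n_classes
        # The target class enters as extra one-hot input channels.
        self.net = nn.Sequential(
            nn.Conv2d(channels + n_classes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, target):
        onehot = torch.eye(self.n_classes, device=x.device)[target]  # (B, K)
        label_maps = onehot[:, :, None, None].expand(-1, -1, *x.shape[2:])
        return self.net(torch.cat([x, label_maps], dim=1))

def iterate_cag(gen, x, target, n_iter=5):
    """Iterative application of the generator, mirroring the abstract's
    trick for enhancing the subtle pattern that drives the classifier."""
    for _ in range(n_iter):
        x = gen(x, target)
    return x

gen = CounterfactualGenerator()
x = torch.randn(1, 1, 64, 64)   # stand-in for a 2D brain-activation map
target = torch.tensor([3])      # hypothetical target task index
counterfactual = iterate_cag(gen, x, target)
# The explanation is read off the difference map: what had to change in the
# activation pattern for the classifier to report the target task.
difference = counterfactual - x
print(difference.shape)         # torch.Size([1, 1, 64, 64])
```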
Related papers
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its ability to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We study GCNs with covariance matrices as graphs, in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data-processing architecture of GCNs, and we show that VNNs exhibit transferability of performance across datasets whose covariance matrices converge to a limit object.
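As a hedged aside, the coVariance-filter construction behind VNNs (a polynomial graph filter whose shift operator is the sample covariance matrix) can be sketched in a few lines; the function name and filter taps below are illustrative assumptions, not the authors' code.

```python
# Sketch of a coVariance filter: H(C) x = sum_k w_k C^k x, where C is the
# sample covariance matrix playing the role of the graph. Illustrative only.
import numpy as np

def covariance_filter(X, weights):
    """Apply the filter to every sample (row) of X.

    X: (n_samples, m_features) data matrix.
    weights: filter taps [w_0, w_1, ..., w_K].
    """
    C = np.cov(X, rowvar=False)   # (m, m) covariance "graph"
    out = np.zeros_like(X)
    Ck_X = X.copy()               # holds X C^k, starting at k = 0
    for w in weights:
        out += w * Ck_X
        Ck_X = Ck_X @ C           # advance to X C^{k+1} (C is symmetric)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))    # toy stand-in for anatomical features
filtered = covariance_filter(X, weights=[0.5, 0.3, 0.1])
print(filtered.shape)             # (100, 16)
```

Stacking such filters with pointwise nonlinearities yields a VNN; the transferability claim above concerns datasets whose covariance matrices converge to a common limit.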
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- CI-GNN: A Granger Causality-Inspired Graph Neural Network for Interpretable Brain Network-Based Psychiatric Diagnosis [40.26902764049346]
We propose a Granger causality-inspired graph neural network (CI-GNN) to explain brain-network-based psychiatric diagnosis.
CI-GNN learns disentangled subgraph-level representations, α and β, that encode, respectively, the causal and non-causal aspects of the original graph.
We empirically evaluate the performance of CI-GNN against three baseline GNNs and four state-of-the-art GNN explainers on synthetic data and three large-scale brain disease datasets.
arXiv Detail & Related papers (2023-01-04T14:36:44Z)
- SimpleMind adds thinking to deep neural networks [3.888848425698769]
Deep neural networks (DNNs) detect patterns in data and have shown versatility and strong performance in many computer vision applications.
DNNs alone are susceptible to obvious mistakes that violate simple, common-sense concepts, and they are limited in their ability to use explicit knowledge to guide their search and decision-making.
This paper introduces SimpleMind, an open-source software framework for Cognitive AI focused on medical image understanding.
arXiv Detail & Related papers (2022-12-02T03:38:20Z)
- Visualizing Deep Neural Networks with Topographic Activation Maps [1.1470070927586014]
We introduce and compare methods to obtain a topographic layout of neurons in a Deep Neural Network layer.
We demonstrate how to use topographic activation maps to identify errors or encoded biases and to visualize training processes.
arXiv Detail & Related papers (2022-04-07T15:56:44Z)
- Explainability Tools Enabling Deep Learning in Future In-Situ Real-Time Planetary Explorations [58.720142291102135]
Deep learning (DL) has proven to be an effective machine learning and computer vision technique.
Most Deep Neural Network (DNN) architectures are so complex that they are considered a 'black box'.
In this paper, we used integrated gradients to describe the attributions of each neuron to the output classes.
The paper provides a set of explainability tools (ET) that open the black box of a DNN so that the individual contributions of neurons to category classification can be ranked and visualized.
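Integrated gradients is a standard attribution method, so a minimal sketch is safe to give here; the toy model and all names below are placeholders rather than the paper's tooling. Attributions are gradients of the class score accumulated along a straight path from a baseline to the input.

```python
# Minimal integrated-gradients sketch: attribute a class score to input
# features via a Riemann-sum approximation of the path integral of gradients.
import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    total = torch.zeros_like(x)
    for i in range(1, steps + 1):
        alpha = i / steps
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[0, target_class]   # scalar class score
        score.backward()
        total += point.grad
    # (x - x') times the average gradient along the path
    return (x - baseline) * total / steps

model = torch.nn.Sequential(torch.nn.Linear(8, 4))  # toy 4-class model
x = torch.randn(1, 8)
attr = integrated_gradients(model, x, torch.zeros_like(x), target_class=0)
print(attr)  # per-feature contributions; they sum to about F(x) - F(baseline)
```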
arXiv Detail & Related papers (2022-01-15T07:10:00Z)
- Concept Embeddings for Fuzzy Logic Verification of Deep Neural Networks in Perception Tasks [1.2246649738388387]
We present a simple, yet effective, approach to verify whether a trained convolutional neural network (CNN) respects specified symbolic background knowledge.
The knowledge may consist of any fuzzy predicate logic rules.
We show that this approach benefits from fuzziness and from calibrating the concept outputs.
arXiv Detail & Related papers (2022-01-03T10:35:47Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
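That one-line description pins down the architecture, so here is a rough sketch under my own naming and sizing assumptions; it uses plain MLP subnetworks, whereas the paper proposes specialized hidden units (ExU) that are omitted here.

```python
# Hedged sketch of a Neural Additive Model: one small network per input
# feature; the prediction is the sum of their scalar outputs plus a bias.
import torch
import torch.nn as nn

class NAM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        # Each f_i(x_i) is a scalar shape function that can be plotted
        # directly, which is what makes the model intelligible.
        terms = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.cat(terms, dim=1).sum(dim=1, keepdim=True) + self.bias

model = NAM(n_features=5)
print(model(torch.randn(4, 5)).shape)  # torch.Size([4, 1])
```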
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
- Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.