Visualizing Deep Neural Networks with Topographic Activation Maps
- URL: http://arxiv.org/abs/2204.03528v2
- Date: Wed, 14 Jun 2023 12:49:16 GMT
- Title: Visualizing Deep Neural Networks with Topographic Activation Maps
- Authors: Valerie Krug, Raihan Kabir Ratul, Christopher Olson, Sebastian Stober
- Abstract summary: We introduce and compare methods to obtain a topographic layout of neurons in a Deep Neural Network layer.
We demonstrate how to use topographic activation maps to identify errors or encoded biases and to visualize training processes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning with Deep Neural Networks (DNNs) has become a successful
tool in solving tasks across various fields of application. However, the
complexity of DNNs makes it difficult to understand how they solve their
learned task. To improve the explainability of DNNs, we adapt methods from
neuroscience that analyze complex and opaque systems. Here, we draw inspiration
from how neuroscience uses topographic maps to visualize brain activity. To
also visualize activations of neurons in DNNs as topographic maps, we research
techniques to lay out the neurons in a two-dimensional space such that neurons
of similar activity are in the vicinity of each other. In this work, we
introduce and compare methods to obtain a topographic layout of neurons in a
DNN layer. Moreover, we demonstrate how to use topographic activation maps to
identify errors or encoded biases and to visualize training processes. Our
novel visualization technique improves the transparency of DNN-based
decision-making systems and is interpretable without expert knowledge in
Machine Learning.
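The core idea of the abstract above can be sketched in a few lines: treat each neuron's activation pattern over a batch of inputs as a feature vector, then embed the neurons into 2D so that neurons with similar activity land near each other. A minimal sketch, assuming NumPy and scikit-learn's MDS as the embedding method (the paper compares several layout techniques; MDS is only a stand-in here, and all data is synthetic):

```python
import numpy as np
from sklearn.manifold import MDS

def topographic_layout(activations, random_state=0):
    """Place neurons in 2D so that neurons with similar activation
    profiles across a batch of inputs lie close together.

    activations: array of shape (n_inputs, n_neurons), one row per
    input example and one column per neuron.
    Returns: (n_neurons, 2) array of 2D neuron coordinates.
    """
    # Each neuron is described by its activation vector over the inputs.
    profiles = activations.T  # (n_neurons, n_inputs)
    # MDS embeds the neurons so that pairwise profile distances are
    # approximately preserved in the 2D layout.
    return MDS(n_components=2, random_state=random_state).fit_transform(profiles)

# Toy example: two groups of neurons with correlated activity.
rng = np.random.default_rng(0)
base_a = rng.normal(size=(100, 1))
base_b = rng.normal(size=(100, 1))
acts = np.hstack([base_a + 0.1 * rng.normal(size=(100, 3)),   # group A: 3 neurons
                  base_b + 0.1 * rng.normal(size=(100, 3))])  # group B: 3 neurons
coords = topographic_layout(acts)
print(coords.shape)  # (6, 2)
```

In a topographic activation map, these coordinates would then be colored by each neuron's activation for a given input or group of inputs, analogous to a brain activity map.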
Related papers
- Graph Neural Networks for Brain Graph Learning: A Survey [53.74244221027981]
Graph neural networks (GNNs) have demonstrated a significant advantage in mining graph-structured data.
Using GNNs to learn brain graph representations for brain disorder analysis has recently gained increasing attention.
In this paper, we aim to bridge this gap by reviewing brain graph learning works that utilize GNNs.
arXiv Detail & Related papers (2024-06-01T02:47:39Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
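The claim above, that Hebbian and anti-Hebbian learning can extract the principal subspace of neural activity, has a classic minimal form in Oja's rule, where a single linear neuron converges to the first principal component of its input. A sketch of that textbook rule on synthetic data (not the paper's lateral-connection method):

```python
import numpy as np

# Oja's rule: a Hebbian update with a decay term that keeps the weight
# vector normalized, so w converges to the leading principal component.
rng = np.random.default_rng(0)
# Synthetic data with one dominant direction of variance.
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.9], [0.1, 0.1]])

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                   # neuron output (Hebbian "post" activity)
    w += eta * y * (x - y * w)  # Hebb term minus normalizing decay

w /= np.linalg.norm(w)
# Compare with the top eigenvector of the data covariance matrix.
evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, np.argmax(evals)]
print(abs(w @ pc1))  # close to 1.0: w aligns with the first PC
```

Extracting a full principal subspace, as in the paper, additionally requires anti-Hebbian interactions between neurons so they do not all converge to the same component.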
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs, in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs and here, we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- Explainability Tools Enabling Deep Learning in Future In-Situ Real-Time Planetary Explorations [58.720142291102135]
Deep learning (DL) has proven to be an effective machine learning and computer vision technique.
Most Deep Neural Network (DNN) architectures are so complex that they are considered a 'black box'.
In this paper, we used integrated gradients to describe the attributions of each neuron to the output classes.
It provides a set of explainability tools (ET) that opens the black box of a DNN so that the individual contribution of neurons to category classification can be ranked and visualized.
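Integrated gradients, as used in the entry above, attribute an output to input features by integrating the gradient of the model output along a straight path from a baseline to the input. A minimal NumPy sketch for a toy model whose gradient is known analytically (real applications rely on an autodiff framework; the toy function here is an assumption for illustration):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=200):
    """Approximate IG_i = (x_i - b_i) * integral_0^1 df/dx_i(b + a(x-b)) da
    with a midpoint Riemann sum over `steps` points on the straight path."""
    alphas = (np.arange(steps) + 0.5) / steps           # midpoint rule
    path = baseline + alphas[:, None] * (x - baseline)  # (steps, n) points
    grads = np.array([grad_f(p) for p in path])         # gradient at each point
    return (x - baseline) * grads.mean(axis=0)

# Toy model: f(x) = x0^2 + 3*x1, so the exact gradient is [2*x0, 3].
f = lambda x: x[0] ** 2 + 3 * x[1]
grad_f = lambda x: np.array([2 * x[0], 3.0])

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_f, x, baseline)
print(attr)                             # approx. [1.0, 6.0]
print(attr.sum(), f(x) - f(baseline))   # completeness: both equal 7.0
```

The completeness property shown in the last line (attributions sum to the output difference between input and baseline) is what makes the per-neuron contributions rankable, as the entry describes.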
arXiv Detail & Related papers (2022-01-15T07:10:00Z)
- Joint Embedding of Structural and Functional Brain Networks with Graph Neural Networks for Mental Illness Diagnosis [17.48272758284748]
Graph Neural Networks (GNNs) have become a de facto model for analyzing graph-structured data.
We develop a novel multiview GNN for multimodal brain networks.
In particular, we regard each modality as a view for brain networks and employ contrastive learning for multimodal fusion.
arXiv Detail & Related papers (2021-07-07T13:49:57Z)
- Towards interpreting computer vision based on transformation invariant optimization [10.820985444099536]
In this work, visualized images that activate the neural network toward the target classes are generated by a back-propagation method.
We show several cases in which this method helps us gain insight into neural networks.
arXiv Detail & Related papers (2021-06-18T08:04:10Z)
- Graph Neural Networks in Network Neuroscience [1.6114012813668934]
The graph neural network (GNN) provides a clever way of learning deep graph structure.
GNN-based methods have been used in several applications related to brain graphs such as missing brain graph synthesis and disease classification.
We conclude by charting a path toward a better application of GNN models in network neuroscience field for neurological disorder diagnosis and population graph integration.
arXiv Detail & Related papers (2021-06-07T11:49:57Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.