Brain Decodes Deep Nets
- URL: http://arxiv.org/abs/2312.01280v2
- Date: Sat, 30 Mar 2024 03:30:18 GMT
- Title: Brain Decodes Deep Nets
- Authors: Huzheng Yang, James Gee, Jianbo Shi
- Abstract summary: We developed a tool for visualizing and analyzing large pre-trained vision models by mapping them onto the brain.
Our innovation arises from a surprising use of brain encoding: predicting brain fMRI measurements in response to images.
- Score: 9.302098067235507
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We developed a tool for visualizing and analyzing large pre-trained vision models by mapping them onto the brain, thus exposing their hidden internal structure. Our innovation arises from a surprising use of brain encoding: predicting brain fMRI measurements in response to images. We report two findings. First, explicit mapping between the brain and deep-network features across dimensions of space, layers, scales, and channels is crucial. This mapping method, FactorTopy, is plug-and-play for any deep network; with it, one can paint a picture of the network onto the brain (literally!). Second, our visualization shows how different training methods matter: they lead to remarkable differences in hierarchical organization and scaling behavior, growing with more data or network capacity. It also provides insight into fine-tuning: how pre-trained models change when adapting to small datasets. We found that brain-like, hierarchically organized networks suffer less from catastrophic forgetting after fine-tuning.
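The abstract describes the recipe at a high level; the sketch below illustrates one minimal version of brain encoding under stated assumptions: features are pulled from a few layers of a pre-trained vision backbone, a per-voxel ridge regression predicts fMRI responses, and each voxel is assigned the layer that predicts it best. This is not the paper's FactorTopy implementation, which factorizes the mapping jointly over space, layers, scales, and channels; the layer names, data shapes, and regularization here are illustrative placeholders.
```python
# Minimal brain-encoding sketch (illustrative; not the paper's FactorTopy code).
import torch
import torch.nn.functional as F
import torchvision

def layer_features(model, images, layers=("layer1", "layer2", "layer3", "layer4")):
    """Return {layer_name: (N, C) pooled activations} collected via forward hooks."""
    feats, hooks = {}, []
    for name in layers:
        module = dict(model.named_modules())[name]
        hooks.append(module.register_forward_hook(
            lambda m, inp, out, name=name:
                feats.__setitem__(name, F.adaptive_avg_pool2d(out, 1).flatten(1))
        ))
    with torch.no_grad():
        model(images)
    for h in hooks:
        h.remove()
    return feats

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: returns (D, V) weights for X (N, D), Y (N, V)."""
    D = X.shape[1]
    return torch.linalg.solve(X.T @ X + lam * torch.eye(D), X.T @ Y)

# Toy stand-ins for an image set and its fMRI recordings (random data for the sketch;
# in practice the backbone would be loaded with pretrained weights).
model = torchvision.models.resnet50().eval()
images = torch.randn(32, 3, 224, 224)   # N stimuli
fmri = torch.randn(32, 1000)            # N x V voxel responses

feats = layer_features(model, images)
scores = {}
for name, X in feats.items():
    W = ridge_fit(X, fmri)
    pred = X @ W
    # Per-voxel correlation between prediction and measurement (encoding score).
    scores[name] = F.cosine_similarity(pred - pred.mean(0), fmri - fmri.mean(0), dim=0)

# "Painting the network onto the brain": assign each voxel its best-predicting layer.
best_layer = torch.stack([scores[n] for n in feats]).argmax(dim=0)
```
In this simplification the layer assignment is a hard argmax per voxel; the paper's factorized mapping instead learns a soft, topology-aware selection, which is what makes the visualization smooth across the cortical surface.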
Related papers
- Saliency Suppressed, Semantics Surfaced: Visual Transformations in Neural Networks and the Brain [0.0]
We take inspiration from neuroscience to shed light on how neural networks encode information at low (visual saliency) and high (semantic similarity) levels of abstraction.
We find that ResNets are more sensitive to saliency information than ViTs, when trained with object classification objectives.
We show that semantic encoding is a key factor in aligning AI with human visual perception, while saliency suppression is a non-brain-like strategy.
arXiv Detail & Related papers (2024-04-29T15:05:42Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections between segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Connecting metrics for shape-texture knowledge in computer vision [1.7785095623975342]
Deep neural networks remain brittle and susceptible to many changes in the image that do not cause humans to misclassify images.
Part of this different behavior may be explained by the type of features humans and deep neural networks use in vision tasks.
arXiv Detail & Related papers (2023-01-25T14:37:42Z)
- Net2Brain: A Toolbox to compare artificial vision models with human brain responses [11.794563225903813]
We introduce Net2Brain, a graphical and command-line user interface toolbox.
It compares the representational spaces of artificial deep neural networks (DNNs) and human brain recordings.
We demonstrate the functionality and advantages of Net2Brain with an example showcasing how it can be used to test hypotheses of cognitive computational neuroscience.
arXiv Detail & Related papers (2022-08-20T13:10:28Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z)
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- Convolutional Neural Networks for cytoarchitectonic brain mapping at large scale [0.33727511459109777]
We present a new workflow for mapping cytoarchitectonic areas in large series of cell-body stained histological sections of human postmortem brains.
It is based on a Deep Convolutional Neural Network (CNN), which is trained on a pair of section images with annotations, with a large number of un-annotated sections in between.
The new workflow does not require preceding 3D-reconstruction of sections, and is robust against histological artefacts.
arXiv Detail & Related papers (2020-11-25T16:25:13Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Deep Representation Learning For Multimodal Brain Networks [9.567489601729328]
We propose a novel end-to-end deep graph representation learning framework, Deep Multimodal Brain Networks (DMBN), to fuse multimodal brain networks.
The higher-order network mappings from brain structural networks to functional networks are learned in the node domain.
The experimental results show the superiority of the proposed method over some other state-of-the-art deep brain network models.
arXiv Detail & Related papers (2020-07-19T20:32:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.