Retinotopy Inspired Brain Encoding Model and the All-for-One Training Recipe
- URL: http://arxiv.org/abs/2307.14021v1
- Date: Wed, 26 Jul 2023 08:06:40 GMT
- Title: Retinotopy Inspired Brain Encoding Model and the All-for-One Training Recipe
- Authors: Huzheng Yang, Jianbo Shi, James Gee
- Abstract summary: We pre-trained a brain encoding model using over one million data points from five public datasets spanning three imaging modalities.
We demonstrate the effectiveness of the pre-trained model as a drop-in replacement for commonly used vision backbone models.
- Score: 14.943061215875655
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Brain encoding models aim to predict voxel-wise brain responses to stimulus images, replicating brain signals captured by neuroimaging techniques. There is a large volume of publicly available data, but training a comprehensive brain encoding model is challenging. The main difficulties stem from a) diversity within an individual brain, which comprises functionally heterogeneous regions; b) diversity of brains across subjects, due to genetic and developmental differences; c) diversity of imaging modalities and processing pipelines. We turn this diversity to our advantage by introducing the All-for-One training recipe, which divides the challenging one-big-model problem into multiple small models; the small models aggregate knowledge while preserving the distinctions between functional regions. Independently of the training recipe, we use biological knowledge of the brain, specifically retinotopy, to introduce an inductive bias and learn a 3D brain-to-image mapping that ensures a) each neuron knows which image regions and semantic levels to gather information from, and b) no neuron is left behind in the model.
We pre-trained a brain encoding model using over one million data points from five public datasets spanning three imaging modalities. To the best of our knowledge, this is the most comprehensive brain encoding model to date. We demonstrate the effectiveness of the pre-trained model as a drop-in replacement for commonly used vision backbone models. Furthermore, we demonstrate the model's application to brain decoding. Code and the model checkpoint will be made available.
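As a rough illustration of the retinotopy-inspired mapping described above, here is a hypothetical PyTorch sketch (not the authors' code): each voxel owns a learnable 2D image coordinate and a soft weighting over feature-pyramid levels, so it "knows" where, and at which semantic level, to read the image. The class name RetinotopicReadout, the 64-channel projection width, and all shapes are illustrative assumptions; the paper's All-for-One recipe would additionally split voxels into per-region small models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetinotopicReadout(nn.Module):
    """Hypothetical voxel-wise readout: each voxel gathers image features
    from its own learned location and its own mix of pyramid levels."""

    def __init__(self, num_voxels, level_channels):
        super().__init__()
        # one learnable (x, y) image coordinate per voxel, squashed to [-1, 1]
        self.coords = nn.Parameter(torch.zeros(num_voxels, 2))
        # soft assignment of each voxel to pyramid levels (semantic depth)
        self.level_logits = nn.Parameter(torch.zeros(num_voxels, len(level_channels)))
        # project every pyramid level to a common width before mixing
        self.proj = nn.ModuleList([nn.Conv2d(c, 64, 1) for c in level_channels])
        # shared linear readout (per-region heads would replace this)
        self.head = nn.Linear(64, 1)

    def forward(self, pyramid):  # pyramid: list of (B, C_l, H_l, W_l) feature maps
        B = pyramid[0].shape[0]
        grid = torch.tanh(self.coords).view(1, -1, 1, 2).expand(B, -1, -1, -1)
        feats = []
        for level, conv in zip(pyramid, self.proj):
            # sample each voxel's feature at its retinotopic image location
            f = F.grid_sample(conv(level), grid, align_corners=False)
            feats.append(f.squeeze(-1).permute(0, 2, 1))   # (B, V, 64)
        feats = torch.stack(feats, dim=-1)                 # (B, V, 64, L)
        w = self.level_logits.softmax(dim=-1)              # (V, L) level weights
        mixed = (feats * w[None, :, None, :]).sum(-1)      # (B, V, 64)
        return self.head(mixed).squeeze(-1)                # (B, V) voxel responses
```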
Related papers
- Towards Neural Foundation Models for Vision: Aligning EEG, MEG, and fMRI Representations for Decoding, Encoding, and Modality Conversion [0.11249583407496218]
This paper presents a novel approach towards creating a foundation model for aligning neural data and visual stimuli across multimodal representations of brain activity by leveraging contrastive learning.
We used electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) data.
Our framework's capabilities are demonstrated through three key experiments: decoding visual information from neural data, encoding images into neural representations, and converting between neural modalities. (A minimal sketch of the contrastive alignment idea follows this entry.)
arXiv Detail & Related papers (2024-11-14T12:27:27Z)
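Assuming a CLIP-style objective, which the abstract's mention of contrastive learning suggests but does not spell out, a minimal alignment loss between neural and image embeddings could look like this; the symmetric formulation and temperature value are assumptions:

```python
import torch
import torch.nn.functional as F

def infonce_loss(neural_emb, image_emb, temperature=0.07):
    """Align neural recordings (EEG/MEG/fMRI embeddings) with embeddings
    of the images the subject was viewing. Both inputs: (batch, dim)."""
    n = F.normalize(neural_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = n @ v.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(len(logits), device=logits.device)
    # symmetric cross-entropy: match neural->image and image->neural
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```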
- A Differentiable Approach to Multi-scale Brain Modeling [3.5874544981360987]
We present a multi-scale differentiable brain modeling workflow utilizing BrainPy, a unique differentiable brain simulator.
At the single-neuron level, we implement differentiable neuron models and employ gradient methods to optimize their fit to electrophysiological data.
On the network level, we incorporate connectomic data to construct biologically constrained network models.
arXiv Detail & Related papers (2024-06-28T07:41:31Z)
- BrainSegFounder: Towards 3D Foundation Models for Neuroimage Segmentation [6.5388528484686885]
This study introduces a novel approach towards the creation of medical foundation models.
Our method involves a novel two-stage pretraining approach using vision transformers.
BrainFounder demonstrates a significant performance gain, surpassing the achievements of previous winning solutions.
arXiv Detail & Related papers (2024-06-14T19:49:45Z)
- BrainODE: Dynamic Brain Signal Analysis via Graph-Aided Neural Ordinary Differential Equations [67.79256149583108]
We propose a novel model called BrainODE to achieve continuous modeling of dynamic brain signals.
By learning latent initial values and neural ODE functions from irregular time series, BrainODE effectively reconstructs brain signals at any time point. (A minimal latent-ODE sketch follows this entry.)
arXiv Detail & Related papers (2024-04-30T10:53:30Z)
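A minimal latent-ODE sketch in the spirit of BrainODE (not its actual code): learn an initial latent state and a derivative network, integrate to any query time, and decode, so irregularly sampled signals can be reconstructed at arbitrary time points. Fixed-step Euler integration stands in for a proper ODE solver to keep the example self-contained:

```python
import torch
import torch.nn as nn

class LatentODE(nn.Module):
    def __init__(self, latent_dim=16, signal_dim=8, steps=20):
        super().__init__()
        self.z0 = nn.Parameter(torch.zeros(latent_dim))   # learned initial latent value
        self.ode_func = nn.Sequential(                    # dz/dt = f(z)
            nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))
        self.decoder = nn.Linear(latent_dim, signal_dim)  # latent -> brain signal
        self.steps = steps

    def forward(self, t_query):                           # t_query: (T,) query times
        out = []
        for t in t_query:
            z, dt = self.z0, t / self.steps
            for _ in range(self.steps):                   # fixed-step Euler integration
                z = z + dt * self.ode_func(z)
            out.append(self.decoder(z))
        return torch.stack(out)                           # (T, signal_dim) signals
```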
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding by employing only one model. (A minimal adapter-based sketch follows this entry.)
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
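One plausible (hypothetical) reading of the "only one model" claim is light per-subject adapters feeding a single shared decoder; the abstract above does not specify MindBridge's real design, so every name and dimension here is an assumption:

```python
import torch
import torch.nn as nn

class CrossSubjectDecoder(nn.Module):
    def __init__(self, voxels_per_subject, latent_dim=256, stim_dim=512):
        super().__init__()
        # per-subject adapters: the only subject-specific parameters,
        # mapping different voxel counts into one shared latent space
        self.adapters = nn.ModuleDict({
            sid: nn.Linear(v, latent_dim)
            for sid, v in voxels_per_subject.items()})
        # one shared decoder reused for every subject
        self.shared = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.GELU(), nn.Linear(1024, stim_dim))

    def forward(self, fmri, subject_id):
        return self.shared(self.adapters[subject_id](fmri))

# usage: subjects with different voxel counts share one decoder
model = CrossSubjectDecoder({"sub01": 15000, "sub02": 14200})
```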
- Brain Captioning: Decoding human brain activity into images and text [1.5486926490986461]
We present an innovative method for decoding brain activity into meaningful images and captions.
Our approach takes advantage of cutting-edge image captioning models and incorporates a unique image reconstruction pipeline.
We evaluate our methods using quantitative metrics for both generated captions and images.
arXiv Detail & Related papers (2023-05-19T09:57:19Z)
- Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features [9.783560855840602]
This paper presents a generic neural decoding method called BraVL that uses multimodal learning of brain-visual-linguistic features.
We focus on modeling the relationships between brain, visual and linguistic features via multimodal deep generative models.
In particular, our BraVL model can be trained under various semi-supervised scenarios to incorporate visual and textual features obtained from extra categories.
arXiv Detail & Related papers (2022-10-13T05:49:33Z)
- Multimodal foundation models are better simulators of the human brain [65.10501322822881]
We present a newly-designed multimodal foundation model pre-trained on 15 million image-text pairs.
We find that both the visual and linguistic encoders trained multimodally are more brain-like than unimodal ones.
arXiv Detail & Related papers (2022-08-17T12:36:26Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of the signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)