Deep Recurrent Encoder: A scalable end-to-end network to model brain
signals
- URL: http://arxiv.org/abs/2103.02339v1
- Date: Wed, 3 Mar 2021 11:39:17 GMT
- Title: Deep Recurrent Encoder: A scalable end-to-end network to model brain
signals
- Authors: Omar Chehab, Alexandre Defossez, Jean-Christophe Loiseau, Alexandre
Gramfort, Jean-Remi King
- Abstract summary: We propose an end-to-end deep learning architecture trained to predict the brain responses of multiple subjects at once.
We successfully test this approach on a large cohort of magnetoencephalography (MEG) recordings acquired during a one-hour reading task.
- Score: 122.1055193683784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding how the brain responds to sensory inputs is challenging: brain
recordings are partial, noisy, and high dimensional; they vary across sessions
and subjects and they capture highly nonlinear dynamics. These challenges have
led the community to develop a variety of preprocessing and analytical (almost
exclusively linear) methods, each designed to tackle one of these issues.
Instead, we propose to address these challenges through a specific end-to-end
deep learning architecture, trained to predict the brain responses of multiple
subjects at once. We successfully test this approach on a large cohort of
magnetoencephalography (MEG) recordings acquired during a one-hour reading
task. Our Deep Recurrent Encoding (DRE) architecture reliably predicts MEG
responses to words with a three-fold improvement over classic linear methods.
To overcome the notorious issue of interpretability of deep learning, we
describe a simple variable importance analysis. When applied to DRE, this
method recovers the expected evoked responses to word length and word
frequency. The quantitative improvement of the present deep learning approach
paves the way to better understand the nonlinear dynamics of brain activity
from large datasets.
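To make the abstract's two technical ingredients concrete, here is a minimal sketch, assuming a GRU-based encoder with learned subject embeddings and a permutation-style variable importance score. The layer sizes, the choice of GRU and embedding modules, and the synthetic word-level features (length, frequency) are illustrative assumptions, not the authors' released implementation; only the overall idea, one network predicting MEG responses for many subjects plus a simple feature-shuffling importance analysis, comes from the abstract.

```python
# Minimal sketch (not the authors' code): multi-subject recurrent encoder +
# permutation-style variable importance. All architectural choices and the
# synthetic data below are illustrative assumptions.
import torch
import torch.nn as nn


class RecurrentEncoder(nn.Module):
    """Predict an MEG response window from word-level features, for any subject."""

    def __init__(self, n_features: int, n_subjects: int, n_channels: int,
                 hidden: int = 128, subject_dim: int = 16):
        super().__init__()
        self.subject_emb = nn.Embedding(n_subjects, subject_dim)  # one shared model, per-subject code
        self.rnn = nn.GRU(n_features + subject_dim, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_channels)              # hidden state -> MEG sensors

    def forward(self, feats: torch.Tensor, subject: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, n_features); subject: (batch,) integer ids
        sub = self.subject_emb(subject).unsqueeze(1).expand(-1, feats.size(1), -1)
        hidden, _ = self.rnn(torch.cat([feats, sub], dim=-1))
        return self.readout(hidden)                               # (batch, time, n_channels)


def permutation_importance(model, feats, subject, target, feature_idx: int) -> float:
    """Increase in MSE when one input feature is shuffled across the batch."""
    criterion = nn.MSELoss()
    with torch.no_grad():
        base = criterion(model(feats, subject), target)
        shuffled = feats.clone()
        perm = torch.randperm(feats.size(0))
        shuffled[:, :, feature_idx] = shuffled[perm, :, feature_idx]
        degraded = criterion(model(shuffled, subject), target)
    return (degraded - base).item()


if __name__ == "__main__":
    # Synthetic stand-in data: 2 word-level features (e.g. length, frequency),
    # 8 subjects, 32 words, 40 time samples, 10 MEG channels.
    torch.manual_seed(0)
    feats = torch.randn(32, 40, 2)
    subject = torch.randint(0, 8, (32,))
    target = torch.randn(32, 40, 10)
    model = RecurrentEncoder(n_features=2, n_subjects=8, n_channels=10)
    for name, idx in [("word length", 0), ("word frequency", 1)]:
        print(name, permutation_importance(model, feats, subject, target, idx))
```

In this setup, a larger score means that shuffling the feature degrades the predicted MEG responses more, which is the sense in which a variable importance analysis can recover evoked effects of word length and word frequency.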
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- See Through Their Minds: Learning Transferable Neural Representation from Cross-Subject fMRI [32.40827290083577]
Deciphering visual content from functional Magnetic Resonance Imaging (fMRI) helps illuminate the human vision system.
Previous approaches primarily employ subject-specific models, sensitive to training sample size.
We propose shallow subject-specific adapters to map cross-subject fMRI data into unified representations.
During training, we leverage both visual and textual supervision for multi-modal brain decoding.
arXiv Detail & Related papers (2024-03-11T01:18:49Z)
- Aligning brain functions boosts the decoding of visual semantics in novel subjects [3.226564454654026]
We propose to boost brain decoding by aligning brain responses to videos and static images across subjects.
Our method improves out-of-subject decoding performance by up to 75%.
It also outperforms classical single-subject approaches when less than 100 minutes of data is available for the tested subject.
arXiv Detail & Related papers (2023-12-11T15:55:20Z)
- Learning low-dimensional dynamics from whole-brain data improves task capture [2.82277518679026]
We introduce a novel approach to learning low-dimensional approximations of neural dynamics by using a sequential variational autoencoder (SVAE).
Our method finds smooth dynamics that can predict cognitive processes with accuracy higher than classical methods.
We evaluate our approach on various task-fMRI datasets, including motor, working memory, and relational processing tasks.
arXiv Detail & Related papers (2023-05-18T18:43:13Z)
- Dynamically Modular and Sparse General Continual Learning [13.976220447055521]
We introduce dynamic modularity and sparsity (Dynamos) for rehearsal-based general continual learning.
We show that our method learns representations that are modular and specialized, while maintaining reusability by activating subsets of neurons with overlaps corresponding to the similarity of stimuli.
arXiv Detail & Related papers (2023-01-02T12:24:24Z)
- Decoding speech perception from non-invasive brain recordings [48.46819575538446]
We introduce a model trained with contrastive learning to decode self-supervised representations of perceived speech from non-invasive recordings.
Our model can identify, from 3 seconds of MEG signals, the corresponding speech segment with up to 41% accuracy out of more than 1,000 distinct possibilities (a minimal sketch of this contrastive retrieval setup follows the list below).
arXiv Detail & Related papers (2022-08-25T10:01:43Z)
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
arXiv Detail & Related papers (2021-10-12T15:30:21Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
- Deep learning approaches for neural decoding: from CNNs to LSTMs and spikes to fMRI [2.0178765779788495]
Decoding behavior, perception, or cognitive state directly from neural signals has applications in brain-computer interface research.
In the last decade, deep learning has become the state-of-the-art method in many machine learning tasks.
Deep learning has been shown to be a useful tool for improving the accuracy and flexibility of neural decoding across a wide range of tasks.
arXiv Detail & Related papers (2020-05-19T18:10:35Z)
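The speech-decoding entry above ("Decoding speech perception from non-invasive brain recordings") relies on contrastive retrieval between brain and speech embeddings. The following is a minimal sketch of that idea, assuming simple MLP encoders, an InfoNCE-style loss, and toy synthetic data in place of real MEG windows and self-supervised speech features; all module choices and sizes are illustrative, not the paper's implementation.

```python
# Minimal sketch (assumptions throughout): a brain encoder and a speech encoder
# map windows into a shared space, a contrastive loss pulls matching pairs
# together, and decoding retrieves the speech segment whose embedding is
# closest to the brain embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairEncoder(nn.Module):
    """Project flattened MEG windows and speech features into one embedding space."""

    def __init__(self, meg_dim: int, speech_dim: int, embed_dim: int = 64):
        super().__init__()
        self.brain = nn.Sequential(nn.Linear(meg_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))
        self.speech = nn.Sequential(nn.Linear(speech_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, meg, speech):
        zb = F.normalize(self.brain(meg), dim=-1)     # (batch, embed_dim)
        zs = F.normalize(self.speech(speech), dim=-1)
        return zb, zs


def info_nce(zb, zs, temperature: float = 0.1):
    """Contrastive loss: each MEG window should match its own speech segment."""
    logits = zb @ zs.t() / temperature                # (batch, batch) similarity matrix
    labels = torch.arange(zb.size(0))
    return F.cross_entropy(logits, labels)


def top1_retrieval_accuracy(zb, zs):
    """Fraction of MEG windows whose nearest speech embedding is the correct one."""
    pred = (zb @ zs.t()).argmax(dim=1)
    return (pred == torch.arange(zb.size(0))).float().mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    meg = torch.randn(128, 300)      # toy stand-in for flattened 3 s MEG windows
    speech = torch.randn(128, 512)   # toy stand-in for self-supervised speech features
    model = PairEncoder(meg_dim=300, speech_dim=512)
    zb, zs = model(meg, speech)
    print("loss:", info_nce(zb, zs).item())
    print("top-1 retrieval:", top1_retrieval_accuracy(zb, zs))
```

With a trained model, decoding amounts to ranking candidate speech segments by similarity to the brain embedding, which mirrors the segment-identification setup described in that entry.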