Learning shared neural manifolds from multi-subject FMRI data
- URL: http://arxiv.org/abs/2201.00622v1
- Date: Wed, 22 Dec 2021 23:08:39 GMT
- Title: Learning shared neural manifolds from multi-subject FMRI data
- Authors: Jessie Huang, Erica L. Busch, Tom Wallenstein, Michal Gerasimiuk,
Andrew Benz, Guillaume Lajoie, Guy Wolf, Nicholas B. Turk-Browne, Smita
Krishnaswamy
- Abstract summary: We propose a neural network called MRMD-AEmani that learns a common embedding from multiple subjects in an experiment.
We show that our learned common space represents an extensible manifold (where new points not seen during training can be mapped), and improves the classification accuracy of stimulus features at unseen timepoints.
We believe this framework can be used for many downstream applications such as guided brain-computer interface (BCI) training in the future.
- Score: 13.093635609349874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Functional magnetic resonance imaging (fMRI) is a notoriously noisy
measurement of brain activity because of the large variations between
individuals, signals marred by environmental differences during collection, and
spatiotemporal averaging required by the measurement resolution. In addition,
the data is extremely high dimensional, with the space of the activity
typically having much lower intrinsic dimension. In order to understand the
connection between stimuli of interest and brain activity, and analyze
differences and commonalities between subjects, it becomes important to learn a
meaningful embedding of the data that denoises, and reveals its intrinsic
structure. Specifically, we assume that while noise varies significantly
between individuals, true responses to stimuli will share common,
low-dimensional features between subjects which are jointly discoverable.
Similar approaches have been exploited previously but they have mainly used
linear methods such as PCA and shared response modeling (SRM). In contrast, we
propose a neural network called MRMD-AE (manifold-regularized multiple decoder,
autoencoder), that learns a common embedding from multiple subjects in an
experiment while retaining the ability to decode to individual raw fMRI
signals. We show that our learned common space represents an extensible
manifold (where new points not seen during training can be mapped), improves
the classification accuracy of stimulus features of unseen timepoints, as well
as improves cross-subject translation of fMRI signals. We believe this
framework can be used for many downstream applications such as guided
brain-computer interface (BCI) training in the future.
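To make the architectural pattern in the abstract concrete, here is a minimal, purely illustrative NumPy sketch on simulated data: one shared encoder maps every subject into a common embedding, and one decoder per subject reconstructs that subject's raw signal from it. The linear layers, dimensions, and plain gradient updates are simplifying assumptions for illustration; the actual MRMD-AE uses nonlinear networks and manifold regularization not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 subjects, 50 timepoints, 200 voxels each, shared 5-D latent.
n_subjects, T, V, d = 3, 50, 200, 5

# Simulated data only: a shared latent trajectory mixed differently per
# subject, plus noise. MRMD-AE learns such structure from real fMRI.
z_true = rng.normal(size=(T, d))
data = [z_true @ rng.normal(size=(d, V)) + 0.1 * rng.normal(size=(T, V))
        for _ in range(n_subjects)]

# Linear stand-in for the MRMD-AE idea: one shared encoder E mapping every
# subject into a common embedding, and one decoder D_i per subject that
# reconstructs that subject's raw signal from the shared space.
E = rng.normal(size=(V, d)) * 0.01
decoders = [rng.normal(size=(d, V)) * 0.01 for _ in range(n_subjects)]

lr = 2e-3
for step in range(1500):
    for i, X in enumerate(data):
        Z = X @ E                 # common embedding for subject i
        X_hat = Z @ decoders[i]   # subject-specific reconstruction
        err = X_hat - X
        # Gradient steps on the mean squared reconstruction error.
        decoders[i] -= lr * (Z.T @ err) / T
        E -= lr * (X.T @ (err @ decoders[i].T)) / T

mse = np.mean((data[0] @ E @ decoders[0] - data[0]) ** 2)
print(mse)  # below the all-zeros baseline np.mean(data[0] ** 2)
```

New timepoints can be embedded simply by applying the shared encoder, which is the "extensible manifold" property the abstract highlights.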
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z) - BrainMAE: A Region-aware Self-supervised Learning Framework for Brain Signals [11.030708270737964]
We propose Brain Masked Auto-Encoder (BrainMAE) for learning representations directly from fMRI time-series data.
BrainMAE consistently outperforms established baseline methods by significant margins in four distinct downstream tasks.
arXiv Detail & Related papers (2024-06-24T19:16:24Z) - MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce MindFormer, a novel semantic alignment method for multi-subject fMRI signals.
The model is specifically designed to generate fMRI-conditioned feature vectors that can condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - MBrain: A Multi-channel Self-Supervised Learning Framework for Brain
Signals [7.682832730967219]
We study the self-supervised learning framework for brain signals that can be applied to pre-train either SEEG or EEG data.
Inspired by this, we propose MBrain to learn implicit spatial and temporal correlations between different channels.
Our model outperforms several state-of-the-art time series SSL and unsupervised models, and has the ability to be deployed to clinical practice.
arXiv Detail & Related papers (2023-06-15T09:14:26Z) - Deep Representations for Time-varying Brain Datasets [4.129225533930966]
This paper builds an efficient graph neural network model that incorporates both region-mapped fMRI sequences and structural connectivities as inputs.
We find good representations of the latent brain dynamics through learning sample-level adaptive adjacency matrices.
These modules can be easily adapted to and are potentially useful for other applications outside the neuroscience domain.
arXiv Detail & Related papers (2022-05-23T21:57:31Z) - EEGminer: Discovering Interpretable Features of Brain Activity with
Learnable Filters [72.19032452642728]
We propose a novel differentiable EEG decoding pipeline consisting of learnable filters and a pre-determined feature extraction module.
We demonstrate the utility of our model towards emotion recognition from EEG signals on the SEED dataset and on a new EEG dataset of unprecedented size.
The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening.
arXiv Detail & Related papers (2021-10-19T14:22:04Z) - Deep Representational Similarity Learning for analyzing neural
signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z) - Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
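The generative model in this summary can be written down directly. The following NumPy sketch is a simulation of that assumed model (not the authors' estimation code): shared non-Gaussian sources are mixed by a different matrix per subject with additive noise, and a least-squares fit confirms each subject's data is explained by the common sources.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed generative model: each subject i observes x_i = A_i @ s + n_i,
# a subject-specific linear mixture of the SAME independent sources s.
k, T = 4, 1000                    # number of shared sources, timepoints
s = rng.laplace(size=(k, T))      # non-Gaussian independent sources

subjects = []
for _ in range(3):
    A_i = rng.normal(size=(k, k))            # subject-specific mixing
    n_i = 0.05 * rng.normal(size=(k, T))     # subject-specific noise
    subjects.append(A_i @ s + n_i)

# Sanity check of the shared-source assumption: regressing each subject's
# data onto the true sources leaves only the noise unexplained.
for x in subjects:
    A_hat, *_ = np.linalg.lstsq(s.T, x.T, rcond=None)
    resid = x - A_hat.T @ s
    print(np.mean(resid ** 2))    # close to the 0.05**2 noise variance
```

The estimation problem MultiView ICA actually solves is the reverse: recover s and the A_i from the observed x_i alone, using the non-Gaussianity of the shared sources.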
arXiv Detail & Related papers (2020-06-11T17:29:53Z) - Mapping individual differences in cortical architecture using multi-view
representation learning [0.0]
We introduce a novel machine learning method that combines the activation- and connectivity-based information measured through task-fMRI and resting-state fMRI, respectively.
It relies on a multi-view deep autoencoder designed to fuse the two fMRI modalities into a joint representation space, within which a predictive model is trained to predict a scalar score characterizing the patient.
arXiv Detail & Related papers (2020-04-01T09:01:25Z) - Towards a predictive spatio-temporal representation of brain data [0.2580765958706854]
We show that fMRI datasets are constituted by complex and highly heterogeneous timeseries.
We compare various modelling techniques from deep learning and geometric deep learning to pave the way for future research.
We hope that our methodological advances can ultimately be clinically and computationally relevant by leading to a more nuanced understanding of the brain dynamics in health and disease.
arXiv Detail & Related papers (2020-02-29T18:49:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.