Leveraging sinusoidal representation networks to predict fMRI signals
from EEG
- URL: http://arxiv.org/abs/2311.04234v2
- Date: Thu, 25 Jan 2024 03:00:54 GMT
- Title: Leveraging sinusoidal representation networks to predict fMRI signals
from EEG
- Authors: Yamin Li, Ange Lou, Ziyuan Xu, Shiyu Wang, Catie Chang
- Abstract summary: We propose a novel architecture that can predict fMRI signals directly from multi-channel EEG without explicit feature engineering.
Our model achieves this by implementing a Sinusoidal Representation Network (SIREN) to learn frequency information in brain dynamics.
We evaluate our model using a simultaneous EEG-fMRI dataset with 8 subjects and investigate its potential for predicting subcortical fMRI signals.
- Score: 3.3121941932506473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In modern neuroscience, functional magnetic resonance imaging (fMRI) has been
a crucial and irreplaceable tool that provides a non-invasive window into the
dynamics of whole-brain activity. Nevertheless, fMRI is limited by hemodynamic
blurring as well as high cost, immobility, and incompatibility with metal
implants. Electroencephalography (EEG) is complementary to fMRI and can
directly record the cortical electrical activity at high temporal resolution,
but has more limited spatial resolution and is unable to recover information
about deep subcortical brain structures. The ability to obtain fMRI information
from EEG would enable cost-effective imaging across a wider set of brain
regions. Further, beyond augmenting the capabilities of EEG, cross-modality
models would facilitate the interpretation of fMRI signals. However, as both
EEG and fMRI are high-dimensional and prone to artifacts, it is currently
challenging to model fMRI from EEG. To address this challenge, we propose a
novel architecture that can predict fMRI signals directly from multi-channel
EEG without explicit feature engineering. Our model achieves this by
implementing a Sinusoidal Representation Network (SIREN) to learn frequency
information in brain dynamics from EEG, which serves as the input to a
subsequent encoder-decoder to effectively reconstruct the fMRI signal from a
specific brain region. We evaluate our model using a simultaneous EEG-fMRI
dataset with 8 subjects and investigate its potential for predicting
subcortical fMRI signals. The present results reveal that our model outperforms
a recent state-of-the-art model and indicate the potential of leveraging
periodic activation functions in deep neural networks to model functional
neuroimaging data.
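To make the described architecture more concrete, the sketch below shows a sine-activated (SIREN-style, after Sitzmann et al., 2020) front end feeding a small encoder-decoder regressor that maps a multi-channel EEG window to a single fMRI ROI value. This is a minimal illustration only: the channel count, window length, layer widths, omega_0, and class names are assumptions for the example and do not reflect the authors' published configuration.

```python
# Minimal sketch: SIREN-style front end + encoder-decoder for EEG -> fMRI ROI regression.
# All shapes and hyperparameters are illustrative assumptions, not the paper's settings.
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sin activation, with SIREN initialization."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class EEGtoROISignal(nn.Module):
    """SIREN front end -> encoder-decoder regressor for one fMRI ROI value (hypothetical)."""
    def __init__(self, n_channels=34, n_samples=250, hidden=256):
        super().__init__()
        in_dim = n_channels * n_samples          # flattened EEG window
        self.siren = nn.Sequential(
            SineLayer(in_dim, hidden, is_first=True),
            SineLayer(hidden, hidden),
        )
        self.encoder = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, eeg):                      # eeg: (batch, channels, samples)
        h = self.siren(eeg.flatten(1))           # periodic features of the EEG window
        return self.decoder(self.encoder(h))     # predicted ROI BOLD amplitude

# Toy usage: a batch of EEG windows -> one predicted fMRI ROI sample each.
model = EEGtoROISignal()
eeg = torch.randn(8, 34, 250)
print(model(eeg).shape)                          # torch.Size([8, 1])
```

The SIREN initialization keeps pre-activations well scaled so the sine nonlinearity can represent a broad band of frequencies, which is the property the abstract appeals to for capturing oscillatory EEG dynamics.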
Related papers
- NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping [9.423808859117122]
We introduce NeuroBOLT, i.e., Neuro-to-BOLD Transformer, to translate raw EEG data to fMRI activity signals across the brain.
Our experiments demonstrate that NeuroBOLT effectively reconstructs unseen resting-state fMRI signals from primary sensory areas, high-level cognitive areas, and deep subcortical brain regions.
arXiv Detail & Related papers (2024-10-07T02:47:55Z)
- Guess What I Think: Streamlined EEG-to-Image Generation with Latent Diffusion Models [4.933734706786783]
EEG is a low-cost, non-invasive, and portable neuroimaging technique.
EEG presents inherent challenges due to its low spatial resolution and susceptibility to noise and artifacts.
We propose a framework based on the ControlNet adapter for conditioning a latent diffusion model through EEG signals.
arXiv Detail & Related papers (2024-09-17T19:07:13Z)
- CATD: Unified Representation Learning for EEG-to-fMRI Cross-Modal Generation [6.682531937245544]
This paper proposes the Condition-Aligned Temporal Diffusion (CATD) framework for end-to-end cross-modal synthesis of neuroimaging.
The proposed framework establishes a new paradigm for cross-modal synthesis of neuroimaging.
It shows promise in medical applications such as improving Parkinson's disease prediction and identifying abnormal brain regions.
arXiv Detail & Related papers (2024-07-16T11:31:38Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce MindFormer, a novel method for semantic alignment of multi-subject fMRI signals.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used to condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Joint fMRI Decoding and Encoding with Latent Embedding Alignment [77.66508125297754]
We introduce a unified framework that addresses both fMRI decoding and encoding.
Our model concurrently recovers visual stimuli from fMRI signals and predicts brain activity from images within a unified framework.
arXiv Detail & Related papers (2023-03-26T14:14:58Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable, domain-grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Data and Physics Driven Learning Models for Fast MRI -- Fundamentals and Methodologies from CNN, GAN to Attention and Transformers [72.047680167969]
This article aims to introduce the deep learning based data driven techniques for fast MRI including convolutional neural network and generative adversarial network based methods.
We will detail the research in coupling physics and data driven models for MRI acceleration.
Finally, through a few clinical applications, we will explain the importance of data harmonisation and explainable models for such fast MRI techniques in multicentre and multi-scanner studies.
arXiv Detail & Related papers (2022-04-01T22:48:08Z)
- EEG to fMRI Synthesis: Is Deep Learning a candidate? [0.913755431537592]
This work provides the first comprehensive overview of how to use state-of-the-art principles from Neural Processing to synthesize fMRI data from electroencephalographic (EEG) data.
A comparison of state-of-the-art synthesis approaches, including Autoencoders, Generative Adversarial Networks, and Pairwise Learning, is undertaken.
Results highlight the feasibility of EEG to fMRI brain image mappings, pinpointing the role of current advances in Machine Learning and showing the relevance of upcoming contributions to further improve performance.
arXiv Detail & Related papers (2020-09-29T16:29:20Z)