EEG to fMRI Synthesis: Is Deep Learning a candidate?
- URL: http://arxiv.org/abs/2009.14133v1
- Date: Tue, 29 Sep 2020 16:29:20 GMT
- Title: EEG to fMRI Synthesis: Is Deep Learning a candidate?
- Authors: David Calhas, Rui Henriques
- Abstract summary: This work provides the first comprehensive view on how to use state-of-the-art principles from Neural Processing to synthesize fMRI data from electroencephalographic (EEG) data.
A comparison of state-of-the-art synthesis approaches, including Autoencoders, Generative Adversarial Networks and Pairwise Learning, is undertaken.
Results highlight the feasibility of EEG to fMRI brain image mappings, pinpointing the role of current advances in Machine Learning and showing the relevance of upcoming contributions to further improve performance.
- Score: 0.913755431537592
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Advances in signal, image and video generation underlie major breakthroughs on
generative medical imaging tasks, including Brain Image Synthesis. Still, the
extent to which functional Magnetic Resonance Imaging (fMRI) can be mapped
from the brain electrophysiology remains largely unexplored. This work provides
the first comprehensive view on how to use state-of-the-art principles from
Neural Processing to synthesize fMRI data from electroencephalographic (EEG)
data. Given the distinct spatiotemporal nature of haemodynamic and
electrophysiological signals, this problem is formulated as the task of
learning a mapping function between multivariate time series with highly
dissimilar structures. A comparison of state-of-the-art synthesis approaches,
including Autoencoders, Generative Adversarial Networks and Pairwise Learning,
is undertaken. Results highlight the feasibility of EEG to fMRI brain image
mappings, pinpointing the role of current advances in Machine Learning and
showing the relevance of upcoming contributions to further improve performance.
EEG to fMRI synthesis offers a way to enhance and augment brain image data, and
guarantee access to more affordable, portable and long-lasting protocols of
brain activity monitoring. The code used in this manuscript is available in
Github and the datasets are open source.
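To make the problem setup concrete, the following is a minimal sketch, in PyTorch, of how a paired EEG-to-fMRI mapping can be framed as encoder-decoder regression between multivariate signals. All tensor shapes, layer sizes and the plain reconstruction loss are illustrative assumptions, not the architecture from the paper; the authors' released code on Github is the reference implementation.
```python
# Illustrative sketch only -- not the authors' released implementation.
# Assumed (hypothetical) shapes: 64-channel EEG windows of 256 samples,
# mapped to a coarse 16x16x8 fMRI volume.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Temporal convolutions that compress a multichannel EEG window."""
    def __init__(self, n_channels=64, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, eeg):            # eeg: (batch, 64, 256)
        h = self.net(eeg).squeeze(-1)  # (batch, 128)
        return self.fc(h)              # (batch, latent_dim)

class FMRIDecoder(nn.Module):
    """Maps the latent code to a coarse fMRI volume."""
    def __init__(self, latent_dim=128, vol_shape=(16, 16, 8)):
        super().__init__()
        self.vol_shape = vol_shape
        n_voxels = vol_shape[0] * vol_shape[1] * vol_shape[2]
        self.fc = nn.Linear(latent_dim, n_voxels)

    def forward(self, z):
        return self.fc(z).view(-1, *self.vol_shape)

encoder, decoder = EEGEncoder(), FMRIDecoder()
eeg = torch.randn(4, 64, 256)             # simultaneous EEG windows (fake data)
fmri_target = torch.randn(4, 16, 16, 8)   # paired fMRI volumes (fake data)
pred = decoder(encoder(eeg))
loss = nn.functional.mse_loss(pred, fmri_target)  # reconstruction objective
loss.backward()
```
In the GAN and Pairwise Learning settings compared by the paper, a similar encoder-decoder pair would typically keep this structure but replace or complement the plain reconstruction objective with an adversarial or pairwise loss.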
Related papers
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce MindFormer, a novel method for semantic alignment of multi-subject fMRI signals.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used to condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- Synthetic Brain Images: Bridging the Gap in Brain Mapping With Generative Adversarial Model [0.0]
This work investigates the use of Deep Convolutional Generative Adversarial Networks (DCGAN) for producing high-fidelity and realistic MRI image slices.
While the discriminator network distinguishes between generated and real slices, the generator network learns to synthesise realistic MRI image slices.
The generator refines its capacity to generate slices that closely mimic real MRI data through an adversarial training approach.
arXiv Detail & Related papers (2024-04-11T05:06:51Z)
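For readers unfamiliar with the adversarial setup described in the entry above, here is a minimal DCGAN-style generator/discriminator pair in PyTorch for single-channel 64x64 slices. The layer sizes and image resolution are assumptions for illustration, not the configuration used in the cited work.
```python
# Illustrative DCGAN-style sketch for 2D slice synthesis; layer sizes are
# assumptions, not the cited paper's configuration.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Upsamples a noise vector into a 64x64 single-channel slice."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),      # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),       # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                            # 64x64
        )

    def forward(self, z):              # z: (batch, z_dim, 1, 1)
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a 64x64 slice as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),     # 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),   # 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),  # 8x8
            nn.Conv2d(256, 1, 8, 1, 0),                       # 1x1 logit
        )

    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator(), Discriminator()
fake = G(torch.randn(2, 100, 1, 1))   # (2, 1, 64, 64)
logits = D(fake)                      # adversarial training alternates G and D updates
```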
- Leveraging sinusoidal representation networks to predict fMRI signals from EEG [3.3121941932506473]
We propose a novel architecture that can predict fMRI signals directly from multi-channel EEG without explicit feature engineering.
Our model achieves this by implementing a Sinusoidal Representation Network (SIREN) to learn frequency information in brain dynamics.
We evaluate our model using a simultaneous EEG-fMRI dataset with 8 subjects and investigate its potential for predicting subcortical fMRI signals.
arXiv Detail & Related papers (2023-11-06T03:16:18Z)
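Since the entry above centres on sinusoidal representation networks, the following shows a minimal SIREN-style building block in PyTorch: a linear layer with a sine activation and frequency scale omega_0, using the initialization recommended in the original SIREN paper. The way it is wired to flattened EEG windows and ROI targets below is a hypothetical example, not the cited model.
```python
# Minimal SIREN-style building block; the EEG/fMRI wiring is illustrative only.
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer with sine activation, as in SIREN (Sitzmann et al., 2020)."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():  # SIREN's recommended weight initialization
            bound = (1.0 / in_features) if is_first else math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# Hypothetical regressor from a flattened 64x256 EEG window to a few fMRI ROI values.
model = nn.Sequential(
    SineLayer(64 * 256, 256, is_first=True),
    SineLayer(256, 256),
    nn.Linear(256, 10),                # e.g. 10 subcortical ROI signals
)
eeg_window = torch.randn(8, 64 * 256)  # fake data
roi_pred = model(eeg_window)           # (8, 10)
```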
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Generative Adversarial Networks for Brain Images Synthesis: A Review [2.609784101826762]
In medical imaging, image synthesis is the process of estimating one image (sequence, modality) from another image (sequence, modality).
The generative adversarial network (GAN) is one of the most popular generative deep learning methods for this task.
We summarize recent developments of GANs for cross-modality brain image synthesis, including CT to PET, CT to MRI, MRI to PET, and vice versa.
arXiv Detail & Related papers (2023-05-16T17:28:06Z)
- Joint fMRI Decoding and Encoding with Latent Embedding Alignment [77.66508125297754]
We introduce a unified framework that addresses both fMRI decoding and encoding.
Our model concurrently recovers visual stimuli from fMRI signals and predicts brain activity from images within a unified framework.
arXiv Detail & Related papers (2023-03-26T14:14:58Z)
- BrainCLIP: Bridging Brain and Visual-Linguistic Representation Via CLIP for Generic Natural Visual Stimulus Decoding [51.911473457195555]
BrainCLIP is a task-agnostic fMRI-based brain decoding model.
It bridges the modality gap between brain activity, image, and text.
BrainCLIP can reconstruct visual stimuli with high semantic fidelity.
arXiv Detail & Related papers (2023-02-25T03:28:54Z)
- DynDepNet: Learning Time-Varying Dependency Structures from fMRI Data via Dynamic Graph Structure Learning [58.94034282469377]
We propose DynDepNet, a novel method for learning the optimal time-varying dependency structure of fMRI data induced by downstream prediction tasks.
Experiments on real-world fMRI datasets, for the task of sex classification, demonstrate that DynDepNet achieves state-of-the-art results.
arXiv Detail & Related papers (2022-09-27T16:32:11Z)
- Functional Magnetic Resonance Imaging data augmentation through conditional ICA [44.483210864902304]
We introduce Conditional Independent Components Analysis (Conditional ICA): a fast functional Magnetic Resonance Imaging (fMRI) data augmentation technique.
We show that Conditional ICA is successful at synthesizing data indistinguishable from observations, and that it yields gains in classification accuracy in brain decoding problems.
arXiv Detail & Related papers (2021-07-11T22:36:14Z)
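As a rough, simplified illustration of ICA-based augmentation in the spirit of the last entry (explicitly not the Conditional ICA algorithm from the cited paper), one can decompose a set of brain maps with scikit-learn's FastICA, fit a per-class Gaussian over the component loadings, and sample new loadings to reconstruct synthetic maps:
```python
# Rough illustration of ICA-based augmentation; NOT the Conditional ICA
# algorithm from the cited paper, only a simplified stand-in on fake data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5000))   # 200 brain maps x 5000 voxels (fake data)
y = rng.integers(0, 2, size=200)       # binary class labels

ica = FastICA(n_components=20, random_state=0)
S = ica.fit_transform(X)               # per-map component loadings, shape (200, 20)

def sample_class(label, n_new=50):
    """Sample new loadings from a Gaussian fit to one class, then reconstruct maps."""
    loadings = S[y == label]
    mean, cov = loadings.mean(axis=0), np.cov(loadings, rowvar=False)
    new_loadings = rng.multivariate_normal(mean, cov, size=n_new)
    return ica.inverse_transform(new_loadings)   # synthetic maps, (n_new, 5000)

augmented = sample_class(1)
print(augmented.shape)                 # (50, 5000)
```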
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.