fMRI from EEG is only Deep Learning away: the use of interpretable DL to
unravel EEG-fMRI relationships
- URL: http://arxiv.org/abs/2211.02024v1
- Date: Sun, 23 Oct 2022 15:11:37 GMT
- Title: fMRI from EEG is only Deep Learning away: the use of interpretable DL to
unravel EEG-fMRI relationships
- Authors: Alexander Kovalev, Ilia Mikheev, Alexei Ossadtchi
- Abstract summary: We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Access to the activity of subcortical structures offers a unique
opportunity for building intention-dependent brain-computer interfaces, opens
abundant options for exploring a broad range of cognitive phenomena in
affective neuroscience, including complex decision-making processes and the
perennial free-will dilemma, and facilitates the diagnostics of a range of
neurological diseases. So far this has been possible only with bulky, expensive,
and immobile fMRI equipment. Here we present an interpretable, domain-grounded
solution to recover the activity of several subcortical regions from
multichannel EEG data and demonstrate up to 60% correlation between the actual
subcortical blood-oxygenation-level-dependent (sBOLD) signal and its EEG-derived
twin. Then, using a novel and theoretically justified weight-interpretation
methodology, we recover individual spatial and time-frequency patterns of scalp
EEG predictive of the hemodynamic signal in the subcortical nuclei. The
described results not only pave the road towards wearable subcortical activity
scanners but also showcase an automatic knowledge-discovery process facilitated
by deep learning technology in combination with an interpretable,
domain-constrained architecture and an appropriate downstream task.
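The abstract's headline figure is a correlation between the measured sBOLD trace and its EEG-derived prediction. As a minimal sketch of that evaluation metric only (the paper's model and data are not reproduced here; the signal shapes and sampling rate below are illustrative assumptions), Pearson correlation between the two time series can be computed as:

```python
import numpy as np

def bold_correlation(sbold_true, sbold_pred):
    """Pearson correlation between an actual sBOLD trace and its
    EEG-derived prediction (two 1-D time series of equal length)."""
    sbold_true = np.asarray(sbold_true, dtype=float)
    sbold_pred = np.asarray(sbold_pred, dtype=float)
    return float(np.corrcoef(sbold_true, sbold_pred)[0, 1])

# Synthetic stand-in: a slow hemodynamic-like oscillation plus noise.
t = np.linspace(0, 60, 600)              # 60 s, illustrative sampling rate
truth = np.sin(2 * np.pi * 0.05 * t)     # slow BOLD-like fluctuation
rng = np.random.default_rng(0)
pred = truth + 1.2 * rng.standard_normal(t.size)  # imperfect EEG-derived twin
r = bold_correlation(truth, pred)
print(round(r, 2))
```

A value of r = 0.6 on held-out data would correspond to the "up to 60% correlation" the abstract reports.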
Related papers
- Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition [23.505616142198487]
We develop a Pre-trained model based Multimodal Mood Reader for cross-subject emotion recognition.
The model learns universal latent representations of EEG signals through pre-training on large scale dataset.
Extensive experiments on public datasets demonstrate Mood Reader's superior performance in cross-subject emotion recognition tasks.
arXiv Detail & Related papers (2024-05-28T14:31:11Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Leveraging sinusoidal representation networks to predict fMRI signals from EEG [3.3121941932506473]
We propose a novel architecture that can predict fMRI signals directly from multi-channel EEG without explicit feature engineering.
Our model achieves this by implementing a Sinusoidal Representation Network (SIREN) to learn frequency information in brain dynamics.
We evaluate our model using a simultaneous EEG-fMRI dataset with 8 subjects and investigate its potential for predicting subcortical fMRI signals.
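The core building block named in this entry is the SIREN layer, which applies a sine activation scaled by a frequency factor ω₀ so the network can fit oscillatory brain dynamics. A minimal sketch of one such layer follows; the channel count, hidden width, and ω₀ = 30 are assumptions (30 is the value suggested in the original SIREN work, not necessarily what this EEG-to-fMRI model uses):

```python
import numpy as np

def siren_layer(x, weight, bias, omega0=30.0):
    """One SIREN layer: y = sin(omega0 * (x @ W^T + b)).
    omega0 scales the pre-activations so the sine spans many periods,
    letting the layer represent high-frequency structure."""
    return np.sin(omega0 * (x @ weight.T + bias))

rng = np.random.default_rng(42)
n_channels, hidden = 34, 16          # e.g. a 34-channel EEG montage (assumed)
# First-layer initialization suggested for SIREN: uniform in [-1/fan_in, 1/fan_in].
w = rng.uniform(-1 / n_channels, 1 / n_channels, size=(hidden, n_channels))
b = np.zeros(hidden)
eeg_window = rng.standard_normal((100, n_channels))   # 100 time samples
h = siren_layer(eeg_window, w, b)
print(h.shape)
```

Stacking several such layers and regressing the final activations onto the fMRI time series would give a rough analogue of the prediction pipeline described above.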
arXiv Detail & Related papers (2023-11-06T03:16:18Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
arXiv Detail & Related papers (2023-09-21T08:53:51Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- Hierarchical Graph Convolutional Network Built by Multiscale Atlases for Brain Disorder Diagnosis Using Functional Connectivity [48.75665245214903]
We propose a novel framework to perform multiscale FCN analysis for brain disorder diagnosis.
We first use a set of well-defined multiscale atlases to compute multiscale FCNs.
Then, we utilize biologically meaningful brain hierarchical relationships among the regions in multiscale atlases to perform nodal pooling.
arXiv Detail & Related papers (2022-09-22T04:17:57Z)
- Mapping individual differences in cortical architecture using multi-view representation learning [0.0]
We introduce a novel machine learning method which allows combining the activation-and connectivity-based information respectively measured through task-fMRI and resting-state fMRI.
The method uses a multi-view deep autoencoder designed to fuse the two fMRI modalities into a joint representation space, within which a predictive model is trained to estimate a scalar score that characterizes the patient.
arXiv Detail & Related papers (2020-04-01T09:01:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.