Inferring Brain Dynamics via Multimodal Joint Graph Representation EEG-fMRI
- URL: http://arxiv.org/abs/2201.08747v1
- Date: Fri, 21 Jan 2022 15:39:48 GMT
- Title: Inferring Brain Dynamics via Multimodal Joint Graph Representation EEG-fMRI
- Authors: Jalal Mirakhorli
- Abstract summary: We show that multi-modal methods can provide new insights into the analysis of brain components that are not possible when each modality is acquired separately.
The joint representation of different modalities is a robust model for analyzing simultaneously acquired electroencephalography and functional magnetic resonance imaging (EEG-fMRI).
We outline the correlations of several modalities across time shifts from a single source using graph-based deep learning methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that multi-modal methods can provide new
insights into the analysis of brain components that are not possible when each
modality is acquired separately. The joint representation of different
modalities is a robust model for analyzing simultaneously acquired
electroencephalography and functional magnetic resonance imaging (EEG-fMRI).
Advances in precision instruments have given us the ability to observe the
spatiotemporal neural dynamics of the human brain through non-invasive
neuroimaging techniques such as EEG and fMRI. Nonlinear fusion of the two
streams can extract effective brain components across temporal and spatial
dimensions. Graph-based analyses, which have many structural similarities to
the brain, can overcome the complexities of brain-mapping analysis. Throughout,
we outline the correlations of several modalities across time shifts from a
single source with graph-based and deep learning methods. Determining these
overlaps can provide a new perspective for diagnosing functional changes in
neuroplasticity studies.
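The abstract describes fusing simultaneously acquired EEG and fMRI through a graph whose structure mirrors brain organization. A minimal sketch of one way such a joint graph representation could be built is shown below: fMRI correlations define the adjacency between regions, EEG band power (assumed to already be source-localised onto the same regions) supplies node features, and a single graph-convolution step mixes the two streams. The region count, band definitions, and the `gcn_layer` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_tr, n_bands = 90, 200, 5

# Simulated inputs: fMRI BOLD time series per region and EEG power per
# region/band. In practice EEG channels must first be mapped to the atlas.
fmri = rng.standard_normal((n_regions, n_tr))
eeg_band_power = rng.random((n_regions, n_bands))

# 1) Graph structure from fMRI: thresholded functional connectivity.
adj = np.corrcoef(fmri)
np.fill_diagonal(adj, 0.0)
adj = np.where(np.abs(adj) > 0.3, np.abs(adj), 0.0)

# 2) Node features from EEG: per-region spectral power, z-scored per band.
feats = (eeg_band_power - eeg_band_power.mean(0)) / (eeg_band_power.std(0) + 1e-8)

# 3) One GCN-style propagation step: relu(D^{-1/2} (A + I) D^{-1/2} X W).
def gcn_layer(a, x, w):
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ x @ w, 0.0)

w = rng.standard_normal((n_bands, 16)) * 0.1   # learnable in a real model
joint_embedding = gcn_layer(adj, feats, w)     # (n_regions, 16)
print(joint_embedding.shape)
```

In a trained model the weight matrix would be learned end to end and several such layers stacked, but the example shows the basic idea of letting fMRI-derived connectivity propagate EEG-derived features.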
Related papers
- Latent Representation Learning for Multimodal Brain Activity Translation [14.511112110420271]
We present the Spatiotemporal Alignment of Multimodal Brain Activity (SAMBA) framework, which bridges the spatial and temporal resolution gaps across modalities.
SAMBA introduces a novel attention-based wavelet decomposition for spectral filtering of electrophysiological recordings.
We show that the training of SAMBA, aside from achieving translation, also learns a rich representation of brain information processing.
arXiv Detail & Related papers (2024-09-27T05:50:29Z) - Integrated Brain Connectivity Analysis with fMRI, DTI, and sMRI Powered by Interpretable Graph Neural Networks [17.063133885403154]
We integrate functional magnetic resonance imaging, diffusion tensor imaging, and structural MRI into a cohesive framework.
Our approach incorporates a masking strategy to differentially weight neural connections, thereby facilitating a holistic amalgamation of multimodal imaging data.
The model is applied to the Human Connectome Project's Development study to elucidate the associations between multimodal imaging and cognitive functions throughout youth.
arXiv Detail & Related papers (2024-08-26T13:16:42Z) - CATD: Unified Representation Learning for EEG-to-fMRI Cross-Modal Generation [6.682531937245544]
This paper proposes the Condition-Aligned Temporal Diffusion (CATD) framework for end-to-end cross-modal synthesis of neuroimaging.
The proposed framework establishes a new paradigm for cross-modal synthesis of neuroimaging.
It shows promise in medical applications such as improving Parkinson's disease prediction and identifying abnormal brain regions.
arXiv Detail & Related papers (2024-07-16T11:31:38Z) - MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method of multi-subject fMRI signals using so-called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used for conditioning a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - BrainODE: Dynamic Brain Signal Analysis via Graph-Aided Neural Ordinary Differential Equations [67.79256149583108]
We propose a novel model called BrainODE to achieve continuous modeling of dynamic brain signals.
By learning latent initial values and neural ODE functions from irregular time series, BrainODE effectively reconstructs brain signals at any time point.
arXiv Detail & Related papers (2024-04-30T10:53:30Z) - Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity [60.983327742457995]
Reconstructing the viewed images from human brain activity bridges human and computer vision through the Brain-Computer Interface.
We devise Psychometry, an omnifit model for reconstructing images from functional Magnetic Resonance Imaging (fMRI) obtained from different subjects.
arXiv Detail & Related papers (2024-03-29T07:16:34Z) - fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z) - Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z) - Ranking of Communities in Multiplex Spatiotemporal Models of Brain Dynamics [0.0]
We propose an interpretation of neural HMMs as multiplex brain state graph models that we term Hidden Markov Graph Models (HMGMs).
This interpretation allows for dynamic brain activity to be analysed using the full repertoire of network analysis techniques.
We produce a new tool for determining important communities of brain regions using a random walk-based procedure.
arXiv Detail & Related papers (2022-03-17T12:14:09Z)
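The last related paper ranks communities of brain regions with a random-walk procedure on a brain-state graph. As an illustration only (the multiplex HMGM construction itself is not reproduced here), the sketch below scores communities by the stationary probability mass a simple random walk assigns to their member nodes; the toy graph, community labels, and scoring rule are assumptions rather than the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 30

# Toy weighted, undirected brain-state graph and a community assignment
# (hypothetical stand-ins for the paper's graph and detected communities).
weights = rng.random((n_nodes, n_nodes))
weights = (weights + weights.T) / 2
np.fill_diagonal(weights, 0.0)
communities = rng.integers(0, 4, size=n_nodes)

# Row-stochastic transition matrix of a random walk on the graph.
transition = weights / weights.sum(axis=1, keepdims=True)

# Stationary distribution via power iteration.
pi = np.full(n_nodes, 1.0 / n_nodes)
for _ in range(500):
    pi = pi @ transition
pi /= pi.sum()

# Rank communities by the total stationary probability of their members.
scores = {c: pi[communities == c].sum() for c in np.unique(communities)}
for c, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"community {c}: importance {s:.3f}")
```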