Cross-Modal Computational Model of Brain-Heart Interactions via HRV and EEG Feature
- URL: http://arxiv.org/abs/2601.06792v1
- Date: Sun, 11 Jan 2026 07:20:30 GMT
- Title: Cross-Modal Computational Model of Brain-Heart Interactions via HRV and EEG Feature
- Authors: Malavika Pradeep, Akshay Sasi, Nusaibah Farrukh, Rahul Venugopal, Elizabeth Sherly,
- Abstract summary: ECG signals can be acquired with wearable devices such as headbands. This study investigates whether ECG-derived features can serve as surrogate indicators of cognitive load.
- Score: 0.1631115063641726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The electroencephalogram (EEG) has been the gold standard for quantifying mental workload; however, its complexity and limited portability can be constraining. Electrocardiogram (ECG) signals, which can be acquired with wearable devices such as headbands, offer a promising alternative for cognitive state monitoring. This study investigates whether ECG-derived features can reliably indicate mental workload and serve as surrogates for EEG-based cognitive indicators. Using a publicly available multimodal dataset (OpenNeuro) of EEG and ECG recorded during working-memory and listening tasks, we extract HRV features and Catch22 descriptors from ECG, and spectral band power with Catch22 features from EEG. A cross-modal regression framework based on XGBoost is trained to map ECG-derived HRV representations to EEG-derived cognitive features. To address data sparsity and to model brain-heart interactions, we integrate the PSV-SDG to produce EEG-conditioned synthetic HRV time series. This combination of multimodal learning, signal processing, and synthetic data generation addresses the challenge of inferring cognitive load solely from ECG-derived features. Including synthetic HRV enhances robustness, particularly in sparse-data settings. These outcomes form a basis for lightweight, interpretable machine learning models deployed through wearable biosensors in non-laboratory environments. Overall, this work is a first step toward low-cost, explainable, real-time cognitive monitoring systems for mental health, education, and human-computer interaction, with a focus on ageing and clinical populations.
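The abstract does not enumerate which time-domain HRV features are used; as a hedged illustration, two standard ones (SDNN and RMSSD) can be computed from a beat-to-beat RR-interval series in a few lines. The RR values below are made up for demonstration:

```python
import math
from statistics import stdev

def sdnn(rr_ms):
    """SDNN: sample standard deviation of RR intervals (ms), a global HRV measure."""
    return stdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR differences (ms),
    reflecting short-term, vagally mediated variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]  # made-up RR intervals in milliseconds
print(f"SDNN={sdnn(rr):.2f} ms, RMSSD={rmssd(rr):.2f} ms")
# → SDNN=7.91 ms, RMSSD=14.36 ms
```

Features like these, alongside Catch22 descriptors, would form the ECG-side input to the XGBoost cross-modal regressor described above.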
Related papers
- Unveiling the Heart-Brain Connection: An Analysis of ECG in Cognitive Performance [0.1631115063641726]
ECG signals can reliably reflect cognitive load and serve as proxies for EEG-based indicators.
We propose a cross-modal XGBoost framework to project ECG features onto EEG-representative cognitive spaces.
Our findings support ECG as an interpretable, real-time, wearable solution for everyday cognitive monitoring.
arXiv Detail & Related papers (2026-01-04T08:06:19Z) - Leveraging Foundational Models and Simple Fusion for Multi-modal Physiological Signal Analysis [0.0]
We adapt the CBraMod encoder for large-scale self-supervised ECG pretraining.
We utilize a pre-trained CBraMod encoder for EEG and pre-train a symmetric ECG encoder.
Our approach achieves near state-of-the-art performance, demonstrating that carefully designed physiological encoders, even with straightforward fusion, substantially improve downstream performance.
arXiv Detail & Related papers (2025-12-17T09:49:06Z) - Simulator and Experience Enhanced Diffusion Model for Comprehensive ECG Generation [52.19347532840774]
We propose SE-Diff, a novel physiological simulator and experience enhanced diffusion model for ECG generation.
SE-Diff integrates a lightweight ordinary differential equation (ODE)-based ECG simulator into the diffusion process via a beat decoder.
Extensive experiments on real-world ECG datasets demonstrate that SE-Diff improves both signal fidelity and text-ECG semantic alignment.
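SE-Diff's simulator is not specified beyond being ODE-based; a classic dynamical ECG model of this kind is the three-variable system of McSharry et al. (2003), sketched here with simple Euler integration. The P/Q/R/S/T parameters are the published defaults; the step size, duration, and fixed 60 bpm rate are illustrative choices, not SE-Diff's actual configuration:

```python
import math

# PQRST event angles, amplitudes, and widths from McSharry et al. (2003)
THETA = [-math.pi / 3, -math.pi / 12, 0.0, math.pi / 12, math.pi / 2]
A = [1.2, -5.0, 30.0, -7.5, 0.75]
B = [0.25, 0.1, 0.1, 0.1, 0.4]

def simulate_ecg(n_steps=2000, dt=0.002, omega=2.0 * math.pi):
    """Euler-integrate the coupled (x, y, z) ODEs; z(t) is the synthetic ECG.
    omega = 2*pi rad/s fixes the heart rate at 60 bpm."""
    x, y, z = 1.0, 0.0, 0.0
    trace = []
    for _ in range(n_steps):
        alpha = 1.0 - math.hypot(x, y)   # pulls (x, y) onto the unit circle
        theta = math.atan2(y, x)
        dz = -z                          # baseline relaxation toward 0
        for th, a, b in zip(THETA, A, B):
            # angular distance to the wave centre, wrapped to [-pi, pi)
            dth = (theta - th + math.pi) % (2.0 * math.pi) - math.pi
            dz -= a * dth * math.exp(-dth * dth / (2.0 * b * b))
        x, y = x + dt * (alpha * x - omega * y), y + dt * (alpha * y + omega * x)
        z += dt * dz
        trace.append(z)
    return trace

ecg = simulate_ecg()  # 4 seconds at 500 Hz, i.e. four simulated heartbeats
```

Embedding such a simulator inside a diffusion process, as SE-Diff does via its beat decoder, gives the generator a physiology-grounded prior rather than learning beat morphology purely from data.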
arXiv Detail & Related papers (2025-11-13T02:57:10Z) - High-Fidelity Synthetic ECG Generation via Mel-Spectrogram Informed Diffusion Training [3.864395218585964]
Development of machine learning for cardiac care is hampered by privacy restrictions on sharing real patient electrocardiogram (ECG) data.
In this work, we address two major shortcomings of current generative ECG methods.
We build on a conditional diffusion-based Structured State Space Model (SSSD-ECG) with two principled innovations.
arXiv Detail & Related papers (2025-10-07T01:14:53Z) - EEG-MedRAG: Enhancing EEG-based Clinical Decision-Making via Hierarchical Hypergraph Retrieval-Augmented Generation [45.031633614714]
EEG-MedRAG is a three-layer hypergraph-based retrieval-augmented generation framework.
It unifies EEG domain knowledge, individual patient cases, and a large-scale repository into a traversable n-ary relational hypergraph.
We introduce the first cross-disease, cross-role EEG clinical QA benchmark, spanning seven disorders and five authentic clinical perspectives.
arXiv Detail & Related papers (2025-08-19T11:12:58Z) - BrainOmni: A Brain Foundation Model for Unified EEG and MEG Signals [46.121056431476156]
This paper proposes BrainOmni, the first brain foundation model that generalises across heterogeneous EEG and MEG recordings.
Existing approaches typically rely on separate, modality- and dataset-specific models, which limits performance and cross-domain scalability.
A total of 1,997 hours of EEG and 656 hours of MEG data are curated and standardised from publicly available sources for pretraining.
arXiv Detail & Related papers (2025-05-18T14:07:14Z) - EEG-GMACN: Interpretable EEG Graph Mutual Attention Convolutional Network [2.6684288899870543]
Graph Signal Processing has emerged as a promising method for EEG spatial-temporal analysis.
Existing GSP studies lack interpretability of electrode importance and the credibility of prediction confidence.
This work proposes an EEG Graph Mutual Attention Convolutional Network (EEG-GMACN) to output interpretable electrode graph weights.
arXiv Detail & Related papers (2024-12-15T13:37:20Z) - CognitionCapturer: Decoding Visual Stimuli From Human EEG Signal With Multimodal Information [61.1904164368732]
We propose CognitionCapturer, a unified framework that fully leverages multimodal data to represent EEG signals.
Specifically, CognitionCapturer trains Modality Experts for each modality to extract cross-modal information from the EEG modality.
The framework does not require any fine-tuning of the generative models and can be extended to incorporate more modalities.
arXiv Detail & Related papers (2024-12-13T16:27:54Z) - A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
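The KDC2 abstract does not state its contrastive objective; cross-view frameworks of this kind are commonly instantiated with an InfoNCE-style loss, where each scalp-view embedding is pulled toward its paired neural-view embedding and pushed away from the others. A minimal sketch over a plain similarity matrix (the temperature value is illustrative, not from the paper):

```python
import math

def info_nce(sim_matrix, temperature=0.1):
    """InfoNCE loss over a views-by-views similarity matrix.
    Row i's positive pair is column i; all other columns are negatives."""
    losses = []
    for i, row in enumerate(sim_matrix):
        logits = [s / temperature for s in row]
        m = max(logits)  # subtract the max for numerical stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        losses.append(log_denom - logits[i])  # -log softmax at the positive
    return sum(losses) / len(losses)

# Diagonal-dominant similarities (each view closest to its own pair)
loss = info_nce([[1.0, 0.0], [0.0, 1.0]], temperature=1.0)
```

Lower loss here corresponds to embeddings where paired views are more similar to each other than to any mismatched pair.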
arXiv Detail & Related papers (2023-09-21T08:53:51Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - fMRI from EEG is only Deep Learning away: the use of interpretable DL to
unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z) - Leveraging Statistical Shape Priors in GAN-based ECG Synthesis [3.3482093430607267]
We propose a novel approach for ECG signal generation using Generative Adversarial Networks (GANs) and statistical ECG data modeling.
Our approach leverages prior knowledge about ECG dynamics to synthesize realistic signals that capture their complex temporal structure.
Our results demonstrate that our approach, which models temporal and amplitude variations of ECG signals as 2-D shapes, generates more realistic signals compared to state-of-the-art GAN based generation baselines.
arXiv Detail & Related papers (2022-10-22T18:06:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.