Graph-Based Learning of Spectro-Topographical EEG Representations with Gradient Alignment for Brain-Computer Interfaces
- URL: http://arxiv.org/abs/2512.07820v1
- Date: Mon, 08 Dec 2025 18:54:11 GMT
- Title: Graph-Based Learning of Spectro-Topographical EEG Representations with Gradient Alignment for Brain-Computer Interfaces
- Authors: Prithila Angkan, Amin Jalali, Paul Hungler, Ali Etemad
- Abstract summary: We present GEEGA, a novel graph-based approach to learning EEG representations with gradient alignment. Our model leverages graph convolutional networks to fuse embeddings from frequency-based topographical maps and time-frequency spectrograms. We validate the efficacy of our method through extensive experiments on three publicly available EEG datasets.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present GEEGA, a novel graph-based approach to learning EEG representations with gradient alignment that leverages multi-domain information for brain-computer interfaces. Our model leverages graph convolutional networks to fuse embeddings from frequency-based topographical maps and time-frequency spectrograms, capturing inter-domain relationships. GEEGA addresses the challenge of achieving high inter-class separability, which arises from the temporally dynamic and subject-sensitive nature of EEG signals, by incorporating the center loss and the pairwise difference loss. Additionally, GEEGA incorporates a gradient alignment strategy to resolve conflicts between gradients from different domains and the fused embeddings, ensuring that discrepancies, where gradients point in conflicting directions, are aligned toward a unified optimization direction. We validate the efficacy of our method through extensive experiments on three publicly available EEG datasets: BCI-2a, CL-Drive and CLARE. Comprehensive ablation studies further highlight the impact of various components of our model.
Related papers
- Geometry- and Relation-Aware Diffusion for EEG Super-Resolution [33.53397341962788]
TopoDiff is a geometry- and relation-aware diffusion model for EEG spatial super-resolution. Inspired by how human experts interpret spatial EEG patterns, TopoDiff incorporates topology-aware image embeddings. This design yields a spatially grounded EEG super-resolution framework with consistent performance improvements.
arXiv Detail & Related papers (2026-02-02T15:44:20Z) - Multi-Domain EEG Representation Learning with Orthogonal Mapping and Attention-based Fusion for Cognitive Load Classification [31.418253191692756]
We propose a new representation learning solution for the classification of cognitive load based on Electroencephalogram (EEG). Our method integrates both time and frequency domains by first passing the raw EEG signals through a convolutional encoder. Our results demonstrate the superiority of our multi-domain approach over traditional single-domain techniques.
arXiv Detail & Related papers (2025-11-16T00:00:31Z) - Spatial-Functional awareness Transformer-based graph archetype contrastive learning for Decoding Visual Neural Representations from EEG [3.661246946935037]
We propose a Spatial-Functional Awareness Transformer-based Graph Archetype Contrastive Learning (SFTG) framework to enhance EEG-based visual decoding. Specifically, we introduce the EEG Graph Transformer (EGT), a novel graph-based neural architecture that simultaneously encodes spatial brain connectivity and temporal neural dynamics. To mitigate high intra-subject variability, we propose Graph Archetype Contrastive Learning (GAC), which learns subject-specific EEG graph archetypes to improve feature consistency and class separability.
arXiv Detail & Related papers (2025-09-29T13:27:55Z) - CRIA: A Cross-View Interaction and Instance-Adapted Pre-training Framework for Generalizable EEG Representations [52.251569042852815]
CRIA is an adaptive framework that utilizes variable-length and variable-channel coding to achieve a unified representation of EEG data across different datasets. The model employs a cross-attention mechanism to fuse temporal, spectral, and spatial features effectively. Experimental results on the Temple University EEG corpus and the CHB-MIT dataset show that CRIA outperforms existing methods under the same pre-training conditions.
arXiv Detail & Related papers (2025-06-19T06:31:08Z) - EEG2GAIT: A Hierarchical Graph Convolutional Network for EEG-based Gait Decoding [8.529597745689195]
Decoding gait dynamics from EEG signals presents significant challenges due to the complex spatial dependencies of motor processes. We propose EEG2GAIT, a novel hierarchical graph-based model that captures multi-level spatial embeddings of EEG channels. We also contribute a new Gait-EEG dataset, consisting of synchronized EEG and lower-limb joint angle data collected from 50 participants over two lab visits.
arXiv Detail & Related papers (2025-04-02T07:48:21Z) - Graph Structure Refinement with Energy-based Contrastive Learning [56.957793274727514]
We introduce an unsupervised method based on joint generative and discriminative training to learn graph structure and representation. We propose an Energy-based Contrastive Learning (ECL) guided Graph Structure Refinement (GSR) framework, denoted as ECL-GSR. ECL-GSR achieves faster training with fewer samples and less memory than the leading baseline, highlighting its simplicity and efficiency in downstream tasks.
arXiv Detail & Related papers (2024-12-20T04:05:09Z) - A Dynamic Domain Adaptation Deep Learning Network for EEG-based Motor Imagery Classification [1.7465786776629872]
We propose a Dynamic Domain Adaptation Based Deep Learning Network (DADL-Net).
First, the EEG data is mapped to a three-dimensional geometric space and its temporal-spatial features are learned through a 3D convolution module.
Accuracy rates of 70.42% and 73.91% were achieved on the OpenBMI and BCIC IV 2a datasets, respectively.
arXiv Detail & Related papers (2023-09-21T01:34:00Z) - Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients.
It combines a graph self-correction (GSC) mechanism to generate informative embedded graphs with a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z) - Dynamic Graph Modeling of Simultaneous EEG and Eye-tracking Data for Reading Task Identification [79.41619843969347]
We present a new approach, which we call AdaGTCN, for identifying human reader intent from Electroencephalogram (EEG) and Eye movement (EM) data.
Our method, Adaptive Graph Temporal Convolution Network (AdaGTCN), uses an Adaptive Graph Learning Layer and Deep Neighborhood Graph Convolution Layer.
We compare our approach with several baselines to report an improvement of 6.29% on the ZuCo 2.0 dataset, along with extensive ablation experiments.
arXiv Detail & Related papers (2021-02-21T18:19:49Z) - Multi-Level Graph Convolutional Network with Automatic Graph Learning for Hyperspectral Image Classification [63.56018768401328]
We propose a Multi-level Graph Convolutional Network (GCN) with Automatic Graph Learning method (MGCN-AGL) for HSI classification.
By employing an attention mechanism to characterize the importance of spatially neighboring regions, the most relevant information can be adaptively incorporated to make decisions.
Our MGCN-AGL encodes the long range dependencies among image regions based on the expressive representations that have been produced at local level.
arXiv Detail & Related papers (2020-09-19T09:26:20Z) - Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.