RepSPD: Enhancing SPD Manifold Representation in EEGs via Dynamic Graphs
- URL: http://arxiv.org/abs/2602.22981v1
- Date: Thu, 26 Feb 2026 13:22:19 GMT
- Title: RepSPD: Enhancing SPD Manifold Representation in EEGs via Dynamic Graphs
- Authors: Haohui Jia, Zheng Chen, Lingwei Zhu, Xu Cao, Yasuko Matsubara, Takashi Matsubara, Yasushi Sakurai,
- Abstract summary: Decoding brain activity from electroencephalography (EEG) is crucial for neuroscience and clinical applications. We propose RepSPD, a novel geometric deep learning (GDL)-based model. We introduce a global bidirectional alignment strategy to reshape tangent-space embeddings, mitigating geometric distortions caused by curvature and thereby enhancing geometric consistency.
- Score: 33.87495510816597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decoding brain activity from electroencephalography (EEG) is crucial for neuroscience and clinical applications. Among recent advances in deep learning for EEG, geometric learning stands out because its theoretical grounding in symmetric positive definite (SPD) matrices allows structural connectivity to be analyzed in a physics-grounded manner. However, current SPD-based methods focus predominantly on statistical aggregation of EEG signals, neglecting frequency-specific synchronization and the local topological structure of brain regions. Given this, we propose RepSPD, a novel geometric deep learning (GDL)-based model. RepSPD implements a cross-attention mechanism on the Riemannian manifold to modulate the geometric attributes of SPD matrices with graph-derived functional connectivity features. On top of this, we introduce a global bidirectional alignment strategy that reshapes tangent-space embeddings, mitigating geometric distortions caused by curvature and thereby enhancing geometric consistency. Extensive experiments demonstrate that our proposed framework significantly outperforms existing EEG representation methods, exhibiting superior robustness and generalization capabilities.
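The covariance-to-tangent-space pipeline that underlies SPD-based EEG models like those discussed here can be sketched in a few lines. This is a generic NumPy illustration with synthetic data and an assumed identity reference point, not the RepSPD implementation:

```python
import numpy as np

def spd_cov(x, eps=1e-6):
    """Regularized channel covariance of one EEG epoch (channels x samples).

    The small ridge term eps * I keeps the matrix strictly positive definite."""
    x = x - x.mean(axis=1, keepdims=True)
    c = x @ x.T / (x.shape[1] - 1)
    return c + eps * np.eye(c.shape[0])

def _eig_fn(c, fn):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, v = np.linalg.eigh(c)
    return (v * fn(w)) @ v.T

def tangent_embed(c, ref):
    """Affine-invariant tangent-space projection of SPD matrix c at reference ref:
    logm(ref^{-1/2} @ c @ ref^{-1/2})."""
    r = _eig_fn(ref, lambda w: w ** -0.5)
    return _eig_fn(r @ c @ r, np.log)

rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 256))   # 8 channels, 256 samples (synthetic)
C = spd_cov(epoch)
T = tangent_embed(C, np.eye(8))         # tangent vector at the identity
```

A quick sanity check on such code: projecting an SPD matrix at itself must yield the zero tangent vector, since `ref^{-1/2} C ref^{-1/2}` reduces to the identity.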
Related papers
- ManifoldFormer: Geometric Deep Learning for Neural Dynamics on Riemannian Manifolds [11.275535457399625]
Existing EEG foundation models mainly treat neural signals as generic time series in Euclidean space. ManifoldFormer addresses this limitation through a novel geometric deep learning framework that explicitly learns neural manifold representations.
arXiv Detail & Related papers (2025-11-20T22:19:53Z)
- Geometry-aware Active Learning of Spatiotemporal Dynamic Systems [4.251030047034566]
This paper proposes a geometry-aware active learning framework for modeling dynamic systems. We develop an adaptive active learning strategy to strategically identify spatial locations for data collection and further maximize the prediction accuracy.
arXiv Detail & Related papers (2025-04-26T19:56:38Z)
- SPD Learning for Covariance-Based Neuroimaging Analysis: Perspectives, Methods, and Challenges [41.955864444491965]
Neuroimaging provides a critical framework for characterizing brain activity by quantifying connectivity patterns and functional architecture across modalities. Modern machine learning has significantly advanced our understanding of neural processing mechanisms through these datasets. This review focuses on machine learning approaches for covariance-based neuroimaging data, where symmetric positive definite (SPD) matrices, typically under full-rank conditions, encode inter-channel relationships.
arXiv Detail & Related papers (2025-04-26T10:05:04Z)
- Scalable Geometric Learning with Correlation-Based Functional Brain Networks [0.0]
The correlation matrix is a central representation of functional brain networks in neuroimaging. Traditional analyses often treat pairwise interactions independently in a Euclidean setting. This paper presents a novel geometric framework that embeds correlation matrices into a Euclidean space.
arXiv Detail & Related papers (2025-03-31T01:35:50Z)
- EEG-ReMinD: Enhancing Neurodegenerative EEG Decoding through Self-Supervised State Reconstruction-Primed Riemannian Dynamics [24.57253767771542]
We propose a novel two-stage approach to EEG decoding called EEG-ReMinD. EEG-ReMinD mitigates reliance on supervised learning and integrates inherent geometric features. It efficiently handles EEG data corruptions and reduces the dependency on labels.
arXiv Detail & Related papers (2025-01-14T14:19:40Z)
- Bridging Geometric States via Geometric Diffusion Bridge [79.60212414973002]
We introduce the Geometric Diffusion Bridge (GDB), a novel generative modeling framework that accurately bridges initial and target geometric states.
GDB employs an equivariant diffusion bridge, derived from a modified version of Doob's $h$-transform, for connecting geometric states.
We show that GDB surpasses existing state-of-the-art approaches, opening up a new pathway for accurately bridging geometric states.
arXiv Detail & Related papers (2024-10-31T17:59:53Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
arXiv Detail & Related papers (2023-09-21T08:53:51Z)
- Adaptive Log-Euclidean Metrics for SPD Matrix Learning [73.12655932115881]
We propose Adaptive Log-Euclidean Metrics (ALEMs), which extend the widely used Log-Euclidean Metric (LEM).
The experimental and theoretical results demonstrate the merit of the proposed metrics in improving the performance of SPD neural networks.
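For reference, the standard Log-Euclidean Metric that ALEMs extend has a simple closed form: the Frobenius distance between matrix logarithms. A minimal NumPy sketch of that baseline metric (not the adaptive metrics proposed in the paper):

```python
import numpy as np

def logm_spd(m):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.T

def lem_distance(a, b):
    """Log-Euclidean distance: d(A, B) = ||logm(A) - logm(B)||_F."""
    return np.linalg.norm(logm_spd(a) - logm_spd(b), ord="fro")
```

Because the log map flattens the SPD cone into a vector space, LEM computations reduce to ordinary Euclidean operations on the log-domain matrices, which is what makes the metric cheap and popular.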
arXiv Detail & Related papers (2023-03-26T18:31:52Z)
- MAtt: A Manifold Attention Network for EEG Decoding [0.966840768820136]
We propose a novel geometric deep learning (GDL)-based model for EEG decoding, featuring a manifold attention network (MAtt).
The evaluation of MAtt on both time-synchronous and time-asynchronous EEG datasets suggests its superiority over other leading DL methods for general EEG decoding.
arXiv Detail & Related papers (2022-10-05T02:26:31Z)
- Hyperbolic Neural Networks++ [66.16106727715061]
We generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model.
Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as greater stability and better performance than their Euclidean counterparts.
arXiv Detail & Related papers (2020-06-15T08:23:20Z)
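The geodesic distance on the Poincaré ball that such hyperbolic models build on has a closed form. A minimal NumPy sketch for illustration (generic formula, not the paper's components):

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between two points strictly inside the unit ball:
    d(u, v) = arccosh(1 + 2*|u - v|^2 / ((1 - |u|^2) * (1 - |v|^2)))."""
    diff = np.dot(u - v, u - v)
    denom = (1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v))
    return np.arccosh(1.0 + 2.0 * diff / denom)
```

Distances grow without bound as points approach the boundary of the ball, which is why hyperbolic embeddings represent tree-like hierarchies so compactly.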
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.