EEG2GAIT: A Hierarchical Graph Convolutional Network for EEG-based Gait Decoding
- URL: http://arxiv.org/abs/2504.03757v1
- Date: Wed, 02 Apr 2025 07:48:21 GMT
- Title: EEG2GAIT: A Hierarchical Graph Convolutional Network for EEG-based Gait Decoding
- Authors: Xi Fu, Rui Liu, Aung Aung Phyo Wai, Hannah Pulferer, Neethu Robinson, Gernot R Müller-Putz, Cuntai Guan
- Abstract summary: Decoding gait dynamics from EEG signals presents significant challenges due to the complex spatial dependencies of motor processes. We propose EEG2GAIT, a novel hierarchical graph-based model that captures multi-level spatial embeddings of EEG channels. We also contribute a new Gait-EEG dataset, consisting of synchronized EEG and lower-limb joint angle data collected from 50 participants over two lab visits.
- Score: 8.529597745689195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decoding gait dynamics from EEG signals presents significant challenges due to the complex spatial dependencies of motor processes, the need for accurate temporal and spectral feature extraction, and the scarcity of high-quality gait EEG datasets. To address these issues, we propose EEG2GAIT, a novel hierarchical graph-based model that captures multi-level spatial embeddings of EEG channels using a Hierarchical Graph Convolutional Network (GCN) Pyramid. To further improve decoding accuracy, we introduce a Hybrid Temporal-Spectral Reward (HTSR) loss function, which combines time-domain, frequency-domain, and reward-based loss components. Moreover, we contribute a new Gait-EEG Dataset (GED), consisting of synchronized EEG and lower-limb joint angle data collected from 50 participants over two lab visits. Validation experiments on both the GED and the publicly available Mobile Brain-body imaging (MoBI) dataset demonstrate that EEG2GAIT outperforms state-of-the-art methods and achieves the best joint angle prediction. Ablation studies validate the contributions of the hierarchical GCN modules and HTSR Loss, while saliency maps reveal the significance of motor-related brain regions in decoding tasks. These findings underscore EEG2GAIT's potential for advancing brain-computer interface applications, particularly in lower-limb rehabilitation and assistive technologies.
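The abstract describes the HTSR loss as a combination of time-domain, frequency-domain, and reward-based components. As a rough, illustrative sketch of how the first two components might be combined (the reward-based term and the exact weighting are paper-specific and omitted here; the function name and `alpha` weight are assumptions, not the authors' implementation):

```python
import numpy as np

def hybrid_time_spectral_loss(pred, target, alpha=0.5):
    """Illustrative hybrid time/frequency loss sketch (not the paper's HTSR).

    pred, target: (time, joints) arrays of predicted/true joint angles.
    alpha: assumed weight on the spectral term.
    """
    time_loss = np.mean((pred - target) ** 2)          # time-domain MSE
    spec_pred = np.abs(np.fft.rfft(pred, axis=0))      # magnitude spectrum of prediction
    spec_true = np.abs(np.fft.rfft(target, axis=0))    # magnitude spectrum of target
    spec_loss = np.mean((spec_pred - spec_true) ** 2)  # frequency-domain MSE
    return time_loss + alpha * spec_loss

# toy usage: a perfect prediction incurs zero loss
rng = np.random.default_rng(0)
y = rng.standard_normal((128, 6))
print(hybrid_time_spectral_loss(y, y))  # -> 0.0
```

Penalizing spectral error alongside pointwise error is a common way to keep decoded trajectories from smoothing away gait-cycle periodicity that a pure time-domain MSE tolerates.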
Related papers
- GEM: Empowering MLLM for Grounded ECG Understanding with Time Series and Images [43.65650710265957]
We introduce GEM, the first MLLM unifying ECG time series, 12-lead ECG images and text for grounded and clinician-aligned ECG interpretation. GEM enables feature-grounded analysis, evidence-driven reasoning, and a clinician-like diagnostic process through three core innovations. We propose the Grounded ECG task, a clinically motivated benchmark designed to assess the MLLM's capability in grounded ECG understanding.
arXiv Detail & Related papers (2025-03-08T05:48:53Z)
- Spatio-Temporal Progressive Attention Model for EEG Classification in Rapid Serial Visual Presentation Task [38.949309627200904]
We propose a novel spatio-temporal progressive attention model (STPAM) to improve EEG classification in rapid serial visual presentation. The results show that STPAM achieves better performance than all the compared methods.
arXiv Detail & Related papers (2025-02-02T09:28:38Z)
- Graph Structure Refinement with Energy-based Contrastive Learning [56.957793274727514]
We introduce an unsupervised method based on a joint of generative training and discriminative training to learn graph structure and representation.
We propose an Energy-based Contrastive Learning (ECL) guided Graph Structure Refinement (GSR) framework, denoted as ECL-GSR.
ECL-GSR achieves faster training with fewer samples and less memory than the leading baseline, highlighting its simplicity and efficiency in downstream tasks.
arXiv Detail & Related papers (2024-12-20T04:05:09Z)
- CognitionCapturer: Decoding Visual Stimuli From Human EEG Signal With Multimodal Information [61.1904164368732]
We propose CognitionCapturer, a unified framework that fully leverages multimodal data to represent EEG signals. Specifically, CognitionCapturer trains Modality Experts for each modality to extract cross-modal information from the EEG modality. The framework does not require any fine-tuning of the generative models and can be extended to incorporate more modalities.
arXiv Detail & Related papers (2024-12-13T16:27:54Z)
- hvEEGNet: exploiting hierarchical VAEs on EEG data for neuroscience applications [3.031375888004876]
Two main issues challenge the existing DL-based modeling methods for EEG.
High variability between subjects and low signal-to-noise ratio make it difficult to ensure a good quality in the EEG data.
We propose two variational autoencoder models, namely vEEGNet-ver3 and hvEEGNet, to target the problem of high-fidelity EEG reconstruction.
arXiv Detail & Related papers (2023-11-20T15:36:31Z)
- vEEGNet: learning latent representations to reconstruct EEG raw data via variational autoencoders [3.031375888004876]
We propose vEEGNet, a DL architecture with two modules, i.e., an unsupervised module based on variational autoencoders to extract a latent representation of the data, and a supervised module based on a feed-forward neural network to classify different movements.
We show state-of-the-art classification performance, and the ability to reconstruct both low-frequency and middle-range components of the raw EEG.
arXiv Detail & Related papers (2023-11-16T19:24:40Z)
- Graph Convolutional Network with Connectivity Uncertainty for EEG-based Emotion Recognition [20.655367200006076]
This study introduces the distribution-based uncertainty method to represent spatial dependencies and temporal-spectral relativeness in EEG signals.
The graph mixup technique is employed to enhance latent connected edges and mitigate noisy label issues.
We evaluate our approach on two widely used datasets, namely SEED and SEEDIV, for emotion recognition tasks.
arXiv Detail & Related papers (2023-10-22T03:47:11Z)
- A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
arXiv Detail & Related papers (2023-09-21T08:53:51Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- MAtt: A Manifold Attention Network for EEG Decoding [0.966840768820136]
We propose a novel geometric deep learning (GDL)-based model for EEG decoding, featuring a manifold attention network (MAtt).
The evaluation of MAtt on both time-synchronous and -asynchronous EEG datasets suggests its superiority over other leading DL methods for general EEG decoding.
arXiv Detail & Related papers (2022-10-05T02:26:31Z)
- Dynamic Graph Modeling of Simultaneous EEG and Eye-tracking Data for Reading Task Identification [79.41619843969347]
We present a new approach, which we call AdaGTCN, for identifying human reader intent from electroencephalogram (EEG) and eye movement (EM) data.
Our method, Adaptive Graph Temporal Convolution Network (AdaGTCN), uses an Adaptive Graph Learning Layer and Deep Neighborhood Graph Convolution Layer.
We compare our approach with several baselines to report an improvement of 6.29% on the ZuCo 2.0 dataset, along with extensive ablation experiments.
arXiv Detail & Related papers (2021-02-21T18:19:49Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification, as it takes the raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- GCNs-Net: A Graph Convolutional Neural Network Approach for Decoding Time-resolved EEG Motor Imagery Signals [8.19994663278877]
A novel deep learning framework based on the graph convolutional neural networks (GCNs) is presented to enhance the decoding performance of raw EEG signals.
The introduced approach has been shown to converge for both personalized and group-wise predictions.
arXiv Detail & Related papers (2020-06-16T04:57:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.