FreqDGT: Frequency-Adaptive Dynamic Graph Networks with Transformer for Cross-subject EEG Emotion Recognition
- URL: http://arxiv.org/abs/2506.22807v2
- Date: Tue, 01 Jul 2025 02:04:06 GMT
- Title: FreqDGT: Frequency-Adaptive Dynamic Graph Networks with Transformer for Cross-subject EEG Emotion Recognition
- Authors: Yueyang Li, Shengyu Gong, Weiming Zeng, Nizhuan Wang, Wai Ting Siok
- Abstract summary: Cross-subject generalization remains a challenge due to individual variability in cognitive traits and emotional responses. We propose FreqDGT, a frequency-adaptive dynamic graph transformer that addresses these limitations through an integrated framework. FreqDGT significantly improves cross-subject emotion recognition accuracy, confirming the effectiveness of integrating frequency-adaptive, spatial-dynamic, and temporal-hierarchical modeling.
- Score: 1.9198890060313585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electroencephalography (EEG) serves as a reliable and objective signal for emotion recognition in affective brain-computer interfaces, offering unique advantages through its high temporal resolution and ability to capture authentic emotional states that cannot be consciously controlled. However, cross-subject generalization remains a fundamental challenge due to individual variability in cognitive traits and emotional responses. We propose FreqDGT, a frequency-adaptive dynamic graph transformer that systematically addresses these limitations through an integrated framework. FreqDGT introduces frequency-adaptive processing (FAP) to dynamically weight emotion-relevant frequency bands based on neuroscientific evidence, employs adaptive dynamic graph learning (ADGL) to learn input-specific brain connectivity patterns, and implements a multi-scale temporal disentanglement network (MTDN) that combines hierarchical temporal transformers with adversarial feature disentanglement, capturing temporal dynamics while ensuring cross-subject robustness. Comprehensive experiments demonstrate that FreqDGT significantly improves cross-subject emotion recognition accuracy, confirming the effectiveness of integrating frequency-adaptive, spatial-dynamic, and temporal-hierarchical modeling while ensuring robustness to individual differences. The code is available at https://github.com/NZWANG/FreqDGT.
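Of the three modules, FAP is the simplest to sketch. Below is a minimal, illustrative take on the frequency-adaptive idea: band-wise EEG features are re-weighted by a learned, input-conditioned attention over frequency bands before spatial modeling. The module name, band list, and dimensions are assumptions for illustration, not FreqDGT's actual implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

# Illustrative sketch of frequency-adaptive processing (FAP): band-wise EEG
# features are re-weighted by an input-conditioned attention over frequency
# bands, so emotion-relevant bands can dominate downstream graph learning.
# Band list and dimensions are assumptions, not FreqDGT's exact configuration.
BANDS = ["delta", "theta", "alpha", "beta", "gamma"]

class FrequencyAdaptiveWeighting(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # Scores each band from its channel-pooled features.
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 2),
            nn.ReLU(),
            nn.Linear(feat_dim // 2, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_bands, n_channels, feat_dim) band-wise features
        pooled = x.mean(dim=2)                    # (batch, n_bands, feat_dim)
        logits = self.scorer(pooled).squeeze(-1)  # (batch, n_bands)
        weights = torch.softmax(logits, dim=-1)   # per-input band attention
        return x * weights[:, :, None, None]      # re-weighted band features

# 8 trials, 5 bands, 62 electrodes, 32-dim features per electrode
out = FrequencyAdaptiveWeighting(feat_dim=32)(torch.randn(8, len(BANDS), 62, 32))
print(out.shape)  # torch.Size([8, 5, 62, 32])
```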
Related papers
- Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation.
Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics.
We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics.
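To make the contrast with first-order (Markovian) neurons concrete, here is a toy fractional-order leaky integrate-and-fire update based on a Grünwald-Letnikov discretization; all constants are illustrative, and this is a sketch of the general idea rather than fspikeDE's implementation.

```python
import numpy as np

# Toy fractional-order LIF neuron via a Grunwald-Letnikov discretization:
# unlike a first-order (Markovian) update, the next voltage depends on the
# entire voltage history through slowly decaying binomial weights.
def gl_coeffs(alpha: float, n: int) -> np.ndarray:
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)  # (-1)^k * binom(alpha, k)
    return c

def fractional_lif(inputs, alpha=0.8, dt=1.0, tau=10.0, v_th=1.0):
    c = gl_coeffs(alpha, len(inputs) + 1)
    v_hist, spike_times = [0.0], []
    for t, i_t in enumerate(inputs):
        # Long-memory term: every past voltage contributes.
        memory = sum(c[k] * v_hist[-k] for k in range(1, len(v_hist) + 1))
        v = dt**alpha * (-v_hist[-1] / tau + i_t) - memory
        if v >= v_th:            # fire and reset
            spike_times.append(t)
            v = 0.0
        v_hist.append(v)
    return spike_times

print(fractional_lif(np.full(200, 0.15)))  # spike times under constant drive
```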
arXiv Detail & Related papers (2025-07-22T18:20:56Z)
- Neuromorphic Wireless Split Computing with Resonate-and-Fire Neurons [69.73249913506042]
This paper investigates a wireless split computing architecture that employs resonate-and-fire (RF) neurons to process time-domain signals directly.
By resonating at tunable frequencies, RF neurons extract time-localized spectral features while maintaining low spiking activity.
Experimental results show that the proposed RF-SNN architecture achieves comparable accuracy to conventional LIF-SNNs and ANNs.
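A toy resonate-and-fire neuron illustrates the mechanism: a damped complex oscillator that fires mainly when driven near its resonant frequency. The dynamics below are the textbook formulation with illustrative constants, not the paper's exact model.

```python
import numpy as np

# Toy resonate-and-fire (RF) neuron: a damped complex oscillator
# z' = (b + i*w) z + I(t) that spikes when the oscillatory component crosses
# threshold, so it fires mainly for inputs near its resonant frequency.
# Integrated with an exact exponential step; constants are illustrative.
def resonate_and_fire(signal, freq_hz, fs=1000.0, damping=-20.0, v_th=0.015):
    dt = 1.0 / fs
    lam = complex(damping, 2.0 * np.pi * freq_hz)
    decay = np.exp(lam * dt)
    z, spikes = 0j, []
    for t, x in enumerate(signal):
        z = decay * z + dt * x        # exponential-Euler integration step
        if z.imag > v_th:             # threshold the oscillatory component
            spikes.append(t)
            z = 0j                    # reset after firing
    return spikes

fs = 1000.0
t = np.arange(0, 0.5, 1.0 / fs)
tone = np.sin(2.0 * np.pi * 10.0 * t)                      # 10 Hz drive
print(len(resonate_and_fire(tone, freq_hz=10.0, fs=fs)))   # resonant: spikes
print(len(resonate_and_fire(tone, freq_hz=40.0, fs=fs)))   # detuned: few/none
```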
arXiv Detail & Related papers (2025-06-24T21:14:59Z)
- PhysioSync: Temporal and Cross-Modal Contrastive Learning Inspired by Physiological Synchronization for EEG-Based Emotion Recognition [26.384133051131133]
We propose PhysioSync, a novel pre-training framework leveraging temporal and cross-modal contrastive learning.
After pre-training, cross-resolution and cross-modal features are hierarchically fused and fine-tuned to enhance emotion recognition.
Experiments on the DEAP and DREAMER datasets demonstrate PhysioSync's advanced performance under uni-modal and cross-modal conditions.
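The contrastive objective in pre-training frameworks of this kind is typically an InfoNCE/NT-Xent loss over positive pairs; a generic sketch follows (PhysioSync's exact objective and pairing strategy may differ).

```python
import torch
import torch.nn.functional as F

# Generic InfoNCE/NT-Xent loss, the workhorse behind contrastive pre-training.
# Row i of `anchors` and row i of `positives` form a positive pair, e.g. EEG
# and peripheral-signal embeddings of the same trial; all other rows in the
# batch serve as negatives.
def info_nce(anchors: torch.Tensor, positives: torch.Tensor, tau: float = 0.1):
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / tau                  # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))       # matching rows are the positives
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
print(float(loss))
```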
arXiv Detail & Related papers (2025-04-24T00:48:03Z)
- Dynamic Graph Neural ODE Network for Multi-modal Emotion Recognition in Conversation [14.158939954453933]
We propose a Dynamic Graph Neural Ordinary Differential Equation Network (DGODE) for multimodal emotion recognition in conversation (MERC).
The proposed DGODE combines the dynamic changes of emotions to capture the temporal dependency of speakers' emotions.
Experiments on two publicly available multimodal emotion recognition datasets demonstrate that the proposed DGODE model has superior performance compared to various baselines.
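As a rough picture of a graph neural ODE, node features can be evolved continuously under a graph diffusion field and integrated numerically; the sketch below uses fixed-step Euler with an illustrative adjacency and is not DGODE's architecture.

```python
import numpy as np

# Toy graph neural ODE: node features evolve continuously under a graph
# diffusion field dx/dt = A_norm @ x @ W, integrated with fixed-step Euler.
# The adjacency, weights, and step count are illustrative.
def graph_ode(x, adj, w, t_end=1.0, steps=50):
    deg = adj.sum(axis=1, keepdims=True)
    a_norm = adj / np.maximum(deg, 1.0)       # row-normalized adjacency
    dt = t_end / steps
    for _ in range(steps):
        x = x + dt * np.tanh(a_norm @ x @ w)  # Euler step of node dynamics
    return x

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.4).astype(float)
np.fill_diagonal(adj, 1.0)                    # self-loops keep nodes grounded
x0 = rng.normal(size=(6, 8))
w = rng.normal(scale=0.3, size=(8, 8))
print(graph_ode(x0, adj, w).shape)            # (6, 8) evolved node features
```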
arXiv Detail & Related papers (2024-12-04T01:07:59Z)
- Hybrid Quantum Deep Learning Model for Emotion Detection using raw EEG Signal Analysis [0.0]
This work presents a hybrid quantum deep learning technique for emotion recognition.
Conventional EEG-based emotion recognition techniques are limited by noise and high-dimensional data complexity.
The model will be extended for real-time applications and multi-class categorization in future work.
arXiv Detail & Related papers (2024-11-19T17:44:04Z)
- MVGT: A Multi-view Graph Transformer Based on Spatial Relations for EEG Emotion Recognition [4.184462746475896]
We introduce a multi-view graph transformer (MVGT) based on spatial relations that integrates information across three domains.
Evaluation on publicly available datasets demonstrates that MVGT surpasses state-of-the-art methods in performance.
arXiv Detail & Related papers (2024-07-03T14:13:00Z)
- EEG-Deformer: A Dense Convolutional Transformer for Brain-computer Interfaces [17.524441950422627]
We introduce EEG-Deformer, which incorporates two main novel components into a CNN-Transformer.
EEG-Deformer learns from neurophysiologically meaningful brain regions for the corresponding cognitive tasks.
arXiv Detail & Related papers (2024-04-25T18:00:46Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
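Late fusion here means each modality is encoded independently and the embeddings are combined only at the classifier; a minimal sketch follows, with illustrative dimensions standing in for the speaker-recognition and BERT encoders.

```python
import torch
import torch.nn as nn

# Minimal sketch of late fusion: per-modality encoders (e.g. a transfer-learned
# speaker-recognition model for speech and a BERT-based text model) each
# produce an utterance embedding, and only the concatenated embeddings feed a
# shared classifier head. Dimensions and layer sizes are illustrative.
class LateFusionClassifier(nn.Module):
    def __init__(self, speech_dim=192, text_dim=768, n_classes=4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(speech_dim + text_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, n_classes),
        )

    def forward(self, speech_emb, text_emb):
        # Fusion happens only at the embedding level ("late"), so each
        # upstream model can be pre-trained and fine-tuned independently.
        return self.head(torch.cat([speech_emb, text_emb], dim=-1))

logits = LateFusionClassifier()(torch.randn(8, 192), torch.randn(8, 768))
print(logits.shape)  # torch.Size([8, 4])
```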
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation [56.264157127549446]
Speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction.
One of the main challenges in SER is data scarcity.
We propose a transfer learning strategy combined with spectrogram augmentation.
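Spectrogram augmentation for SER is commonly implemented SpecAugment-style, masking random frequency bands and time spans; a minimal sketch with illustrative mask sizes follows (the paper's exact augmentation policy may differ).

```python
import numpy as np

# Minimal SpecAugment-style spectrogram augmentation: zero out a random
# frequency band and a random time span so the recognizer cannot over-fit to
# narrow spectro-temporal cues. Mask sizes are illustrative.
def augment_spectrogram(spec, rng, max_f=8, max_t=20):
    spec = spec.copy()
    f0 = int(rng.integers(0, spec.shape[0] - max_f))
    t0 = int(rng.integers(0, spec.shape[1] - max_t))
    spec[f0:f0 + int(rng.integers(1, max_f + 1)), :] = 0.0  # frequency mask
    spec[:, t0:t0 + int(rng.integers(1, max_t + 1))] = 0.0  # time mask
    return spec

rng = np.random.default_rng(0)
mel = np.ones((64, 200))                     # stand-in log-mel spectrogram
print(augment_spectrogram(mel, rng).mean())  # < 1.0: some bins were masked
```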
arXiv Detail & Related papers (2021-08-05T10:39:39Z)
- F-FADE: Frequency Factorization for Anomaly Detection in Edge Streams [53.70940420595329]
We propose F-FADE, a new approach for detection of anomalies in edge streams.
It uses a novel frequency-factorization technique to efficiently model the time-evolving distributions of frequencies of interactions between node-pairs.
In an online streaming setting, F-FADE can handle a broad variety of anomalies with temporal and structural changes while requiring only constant memory.
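The core factorization idea can be sketched in a few lines: learn node embeddings whose inner products approximate log interaction frequencies, then score edges by their deviation from the model. This static toy stands in for F-FADE's online, time-evolving procedure.

```python
import numpy as np

# Toy sketch of frequency factorization: learn node embeddings whose inner
# products approximate the log interaction frequency of each node pair, then
# score an incoming edge by how far its observed rate departs from the model.
# This is a static, simplified stand-in for F-FADE's online procedure.
rng = np.random.default_rng(0)
n_nodes, dim, lr = 20, 4, 0.05
P = rng.normal(scale=0.1, size=(n_nodes, dim))

# "Normal" traffic: each node interacts with its ring neighbor ~50 times/hour.
history = {(i, (i + 1) % n_nodes): 50.0 for i in range(n_nodes)}

for _ in range(2000):  # SGD on squared error against log-frequencies
    for (u, v), freq in history.items():
        err = P[u] @ P[v] - np.log(freq)
        P[u], P[v] = P[u] - lr * err * P[v], P[v] - lr * err * P[u]

def anomaly_score(u, v, observed_freq):
    # Large deviation from the factorized frequency suggests an anomalous edge.
    return abs(np.log(observed_freq) - P[u] @ P[v])

print(anomaly_score(0, 1, 50.0))    # seen pair at its usual rate: low score
print(anomaly_score(3, 11, 500.0))  # unseen pair with a burst: high score
```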
arXiv Detail & Related papers (2020-11-09T19:55:40Z)
- Video-based Remote Physiological Measurement via Cross-verified Feature Disentangling [121.50704279659253]
We propose a cross-verified feature disentangling strategy to disentangle the physiological features from non-physiological representations.
We then use the distilled physiological features for robust multi-task physiological measurements.
The disentangled features are finally used for the joint prediction of multiple physiological signals, such as average HR values and rPPG signals.
arXiv Detail & Related papers (2020-07-16T09:39:17Z)