PL-DCP: A Pairwise Learning framework with Domain and Class Prototypes for EEG emotion recognition under unseen target conditions
- URL: http://arxiv.org/abs/2412.00082v2
- Date: Thu, 07 Aug 2025 01:19:48 GMT
- Title: PL-DCP: A Pairwise Learning framework with Domain and Class Prototypes for EEG emotion recognition under unseen target conditions
- Authors: Guangli Li, Canbiao Wu, Zhehao Zhou, Tuo Sun, Ping Tan, Li Zhang, Zhen Liang,
- Abstract summary: We propose a Pairwise Learning framework with Domain and Class Prototypes (PL-DCP) for EEG emotion recognition under unseen target conditions. The PL-DCP model achieves slightly better performance than deep transfer learning methods that require both source and target data. This work provides an effective and robust potential solution for emotion recognition.
- Score: 27.769583873372518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electroencephalogram (EEG) signals serve as a powerful tool in affective Brain-Computer Interfaces (aBCIs) and play a crucial role in affective computing. In recent years, the introduction of deep learning techniques has significantly advanced the development of aBCIs. However, current emotion recognition methods based on deep transfer learning suffer from a dual dependence on both source and target domain data, as well as from label noise, which seriously degrades model performance and generalization. To overcome these limitations, we propose a Pairwise Learning framework with Domain and Class Prototypes for EEG emotion recognition under unseen target conditions (PL-DCP), integrating the concepts of feature disentanglement and prototype inference. The feature disentanglement module extracts and decouples emotional EEG features into domain features and class features, from which a dual prototype representation is computed: the domain prototype captures individual variations across subjects, while the class prototype captures the cross-individual commonality of emotion categories. In addition, a pairwise learning strategy effectively reduces the noise introduced by incorrect labels. We evaluate the PL-DCP framework systematically on the public SEED, SEED-IV, and SEED-V datasets, obtaining accuracies of 82.88%, 65.15%, and 61.29%, respectively. Compared with other state-of-the-art (SOTA) methods, PL-DCP still performs slightly better than deep transfer learning methods that require both source and target data, even though the target domain is completely unseen during training. This work provides an effective and robust potential solution for emotion recognition. The source code is available at https://github.com/WuCB-BCI/PL_DCP.
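The dual-prototype and pairwise-learning ideas described in the abstract can be sketched as a toy NumPy example. This is an illustrative sketch only, not the authors' implementation: the function names, the toy feature vectors, and the contrastive form of the pairwise loss are all assumptions for demonstration.

```python
import numpy as np

def prototypes(features, labels):
    """Mean feature vector per label value (emotion class or subject ID)."""
    ids = np.unique(labels)
    return ids, np.stack([features[labels == i].mean(axis=0) for i in ids])

def predict(x, protos, ids):
    """Inference by nearest prototype in Euclidean distance."""
    return ids[np.argmin(np.linalg.norm(protos - x, axis=1))]

def pairwise_loss(feats, labels, margin=1.0):
    """Pairwise strategy: supervise on pair agreement (same/different label)
    rather than individual labels, which softens the effect of a few
    mislabeled samples. Contrastive form is a hypothetical stand-in."""
    loss, n = 0.0, 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            d = np.linalg.norm(feats[i] - feats[j])
            if labels[i] == labels[j]:
                loss += d ** 2                      # pull same-class pairs together
            else:
                loss += max(0.0, margin - d) ** 2   # push different-class pairs apart
            n += 1
    return loss / n

# Toy "disentangled" class features: 6 samples, 3 emotions, 2 subjects.
class_feat = np.array([[1.0, 0.0, 0.0, 0.0], [0.9, 0.1, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0], [0.0, 0.9, 0.1, 0.0],
                       [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.9, 0.1]])
emo_labels = np.array([0, 0, 1, 1, 2, 2])
subj_ids = np.array([0, 1, 0, 1, 0, 1])

# Class prototypes capture cross-subject emotion commonality;
# domain prototypes capture per-subject variation.
classes, class_proto = prototypes(class_feat, emo_labels)
subjects, dom_proto = prototypes(class_feat, subj_ids)

print(predict(class_feat[0], class_proto, classes))   # nearest-prototype label
print(round(pairwise_loss(class_feat, emo_labels), 4))
```

Because the pairwise loss only asks whether two samples share a label, a single flipped label corrupts a minority of pairs rather than every gradient step that touches the sample, which is one intuition for its robustness to label noise.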
Related papers
- Towards Practical Emotion Recognition: An Unsupervised Source-Free Approach for EEG Domain Adaptation [0.5755004576310334]
We propose a novel SF-UDA approach for EEG-based emotion classification across domains.
We introduce Dual-Loss Adaptive Regularization (DLAR) to minimize prediction discrepancies and align predictions with expected pseudo-labels.
Our approach significantly outperforms state-of-the-art methods, achieving 65.84% accuracy when trained on DEAP and tested on SEED.
arXiv Detail & Related papers (2025-03-26T14:29:20Z) - M3D: Manifold-based Domain Adaptation with Dynamic Distribution for Non-Deep Transfer Learning in Cross-subject and Cross-session EEG-based Emotion Recognition [11.252832459891566]
We propose Manifold-based Domain Adaptation with Dynamic Distribution (M3D), a lightweight, non-deep transfer learning framework.
M3D consists of four key modules: manifold feature transformation, dynamic distribution alignment, classifier learning, and ensemble learning.
Experimental results show that M3D outperforms traditional non-deep learning methods with a 4.47% average improvement.
arXiv Detail & Related papers (2024-04-24T03:08:25Z) - Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
arXiv Detail & Related papers (2024-04-15T08:21:17Z) - Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation [79.22678026708134]
In this paper, we propose an inherently interpretable method named Transferable Conceptual Prototype Learning (TCPL).
To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process.
Comprehensive experiments show that the proposed method not only provides effective and intuitive explanations but also outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-12T06:36:41Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - Face Presentation Attack Detection by Excavating Causal Clues and Adapting Embedding Statistics [9.612556145185431]
Face presentation attack detection (PAD) uses domain adaptation (DA) and domain generalization (DG) techniques to address performance degradation on unknown domains.
Most DG-based PAD solutions rely on a priori knowledge, i.e., known domain labels.
This paper proposes to model face PAD as a compound DG task from a causal perspective, linking it to model optimization.
arXiv Detail & Related papers (2023-08-28T13:11:05Z) - Semi-Supervised Dual-Stream Self-Attentive Adversarial Graph Contrastive Learning for Cross-Subject EEG-based Emotion Recognition [19.578050094283313]
The DS-AGC framework is proposed to tackle the challenge of limited labeled data in cross-subject EEG-based emotion recognition.
The proposed model outperforms existing methods under different incomplete label conditions.
arXiv Detail & Related papers (2023-08-13T23:54:40Z) - EEG-based Emotion Style Transfer Network for Cross-dataset Emotion Recognition [45.26847258736848]
We propose an EEG-based Emotion Style Transfer Network (E2STN) to obtain EEG representations that contain the content information of source domain and the style information of target domain.
The E2STN can achieve the state-of-the-art performance on cross-dataset EEG emotion recognition tasks.
arXiv Detail & Related papers (2023-08-09T16:54:40Z) - A Lightweight Domain Adversarial Neural Network Based on Knowledge Distillation for EEG-based Cross-subject Emotion Recognition [8.9104681425275]
Individual differences in electroencephalogram (EEG) signals can cause domain shift, which significantly degrades the performance of cross-subject strategies.
In this work, we propose a knowledge distillation (KD) based lightweight DANN to enhance cross-subject EEG-based emotion recognition.
arXiv Detail & Related papers (2023-05-12T13:05:12Z) - EEGMatch: Learning with Incomplete Labels for Semi-Supervised EEG-based Cross-Subject Emotion Recognition [7.1695247553867345]
We propose a novel semi-supervised learning framework (EEGMatch) to leverage both labeled and unlabeled EEG data.
Extensive experiments are conducted on two benchmark databases (SEED and SEED-IV).
arXiv Detail & Related papers (2023-03-27T12:02:33Z) - Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - BMD: A General Class-balanced Multicentric Dynamic Prototype Strategy for Source-free Domain Adaptation [74.93176783541332]
Source-free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to the unlabeled target domain without accessing the well-labeled source data.
To make up for the absence of source data, most existing methods introduce feature-prototype-based pseudo-labeling strategies.
We propose a general class-Balanced Multicentric Dynamic prototype strategy for the SFDA task.
arXiv Detail & Related papers (2022-04-06T13:23:02Z) - The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose by a clear margin.
arXiv Detail & Related papers (2022-03-10T23:35:00Z) - Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z) - Source-free Domain Adaptation via Avatar Prototype Generation and Adaptation [34.45208248728318]
We study a practical domain adaptation task in which we cannot access source domain data due to data privacy issues.
The lack of source data and target domain labels makes model adaptation very challenging.
We propose a Contrastive Prototype Generation and Adaptation (CPGA) method to exploit hidden knowledge in the source model.
arXiv Detail & Related papers (2021-06-18T08:30:54Z) - Emotional Semantics-Preserved and Feature-Aligned CycleGAN for Visual Emotion Adaptation [85.20533077846606]
Unsupervised domain adaptation (UDA) studies the problem of transferring models trained on one labeled source domain to another unlabeled target domain.
In this paper, we focus on UDA in visual emotion analysis for both emotion distribution learning and dominant emotion classification.
We propose a novel end-to-end cycle-consistent adversarial model, termed CycleEmotionGAN++.
arXiv Detail & Related papers (2020-11-25T01:31:01Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation that leverage adversarial learning to unify the source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z) - Few-Shot Relation Learning with Attention for EEG-based Motor Imagery Classification [11.873435088539459]
Brain-Computer Interfaces (BCI) based on Electroencephalography (EEG) signals have received a lot of attention.
Motor imagery (MI) data can be used to aid rehabilitation as well as in autonomous driving scenarios.
Classification of MI signals is vital for EEG-based BCI systems.
arXiv Detail & Related papers (2020-03-03T02:34:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.