A Lightweight Domain Adversarial Neural Network Based on Knowledge
Distillation for EEG-based Cross-subject Emotion Recognition
- URL: http://arxiv.org/abs/2305.07446v1
- Date: Fri, 12 May 2023 13:05:12 GMT
- Title: A Lightweight Domain Adversarial Neural Network Based on Knowledge
Distillation for EEG-based Cross-subject Emotion Recognition
- Authors: Zhe Wang, Yongxiong Wang, Jiapeng Zhang, Yiheng Tang, Zhiqun Pan
- Abstract summary: Individual differences in Electroencephalogram (EEG) signals can cause a domain shift that significantly degrades the performance of cross-subject strategies.
In this work, we propose a knowledge distillation (KD) based lightweight DANN to enhance cross-subject EEG-based emotion recognition.
- Score: 8.9104681425275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Individual differences in Electroencephalogram (EEG) signals can cause a
domain shift that significantly degrades the performance of cross-subject
strategies. Domain adversarial neural networks (DANN), in which the
classification loss and domain loss jointly update the parameters of the
feature extractor, are adopted to deal with the domain shift. However, the
limited quantity of EEG data and strong individual differences are challenges
for a DANN with a cumbersome feature extractor. In this work, we propose a
knowledge distillation (KD) based lightweight DANN to enhance cross-subject
EEG-based emotion recognition. Specifically, a teacher model with strong
context-learning ability is utilized to learn the complex temporal dynamics
and spatial correlations of EEG, and a robust lightweight student model is
guided by the teacher to learn the more difficult domain-invariant features.
In the feature-based KD framework, a transformer-based hierarchical
temporal-spatial learning model serves as the teacher. The student model,
composed of Bi-LSTM units, is a lightweight version of the teacher. Hence, the
student can be supervised to mimic the teacher's robust feature
representations by leveraging complementary latent temporal and spatial
features. In the DANN-based cross-subject emotion recognition, we combine the
obtained student model and a lightweight temporal-spatial feature interaction
module as the feature extractor, and the aggregated features are fed to the
emotion classifier and domain classifier for domain-invariant feature
learning. To verify the effectiveness of the proposed method, we conduct
subject-independent experiments on the public DEAP dataset with arousal and
valence classification. The outstanding performance and t-SNE visualization of
latent features verify the advantage and effectiveness of the proposed method.
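The abstract describes a training objective with three ingredients: a feature-matching KD term, soft-label distillation from the teacher, and an adversarial DANN objective that pits the emotion classifier against the domain classifier. The sketch below shows these loss components in plain Python; the function names, the temperature `T`, and the trade-off weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_label_kd_loss(teacher_logits, student_logits, T=4.0):
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 (Hinton-style distillation).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def feature_kd_loss(teacher_feats, student_feats):
    # Feature-based KD: mean squared error between the teacher's and
    # student's latent feature vectors.
    n = len(teacher_feats)
    return sum((t - s) ** 2 for t, s in zip(teacher_feats, student_feats)) / n

def extractor_objective(cls_loss, domain_loss, lam=0.1):
    # With a gradient reversal layer, the feature extractor effectively
    # minimizes the emotion-classification loss while maximizing domain
    # confusion, i.e. it sees L_cls - lam * L_dom.
    return cls_loss - lam * domain_loss
```

For example, distilling a student whose logits already match the teacher's gives a zero soft-label loss, while any mismatch yields a positive penalty that shrinks as the student converges.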
Related papers
- TAS: Distilling Arbitrary Teacher and Student via a Hybrid Assistant [52.0297393822012]
We introduce an assistant model as a bridge to facilitate smooth feature knowledge transfer between heterogeneous teachers and students.
Within our proposed design principle, the assistant model combines the advantages of cross-architecture inductive biases and module functions.
Our proposed method is evaluated on homogeneous model pairs and arbitrary heterogeneous combinations of CNNs, ViTs, spatial KDs.
arXiv Detail & Related papers (2024-10-16T08:02:49Z)
- PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE is a self-supervised learning framework that enhances global feature representation of point cloud mask autoencoders.
We show that PseudoNeg-MAE achieves state-of-the-art performance on the ModelNet40 and ScanObjectNN datasets.
arXiv Detail & Related papers (2024-09-24T07:57:21Z)
- Integrated Dynamic Phenological Feature for Remote Sensing Image Land Cover Change Detection [5.109855690325439]
We introduce the InPhea model, which integrates phenological features into a remote sensing image CD framework.
A constrainer with four constraint modules and a multi-stage contrastive learning approach is employed to aid in the model's understanding of phenological characteristics.
Experiments on the HRSCD, SECD, and PSCD-Wuhan datasets reveal that InPhea outperforms other models.
arXiv Detail & Related papers (2024-08-08T01:07:28Z)
- Self-supervised Gait-based Emotion Representation Learning from Selective Strongly Augmented Skeleton Sequences [4.740624855896404]
We propose a contrastive learning framework utilizing selective strong augmentation for self-supervised gait-based emotion representation.
Our approach is validated on the Emotion-Gait (E-Gait) and Emilya datasets and outperforms the state-of-the-art methods under different evaluation protocols.
arXiv Detail & Related papers (2024-05-08T09:13:10Z)
- Generative Model-based Feature Knowledge Distillation for Action Recognition [11.31068233536815]
Our paper introduces an innovative knowledge distillation framework, with the generative model for training a lightweight student model.
The efficacy of our approach is demonstrated through comprehensive experiments on diverse popular datasets.
arXiv Detail & Related papers (2023-12-14T03:55:29Z)
- Weakly Supervised Semantic Segmentation via Alternative Self-Dual Teaching [82.71578668091914]
This paper establishes a compact learning framework that embeds the classification and mask-refinement components into a unified deep model.
We propose a novel alternative self-dual teaching (ASDT) mechanism to encourage high-quality knowledge interaction.
arXiv Detail & Related papers (2021-12-17T11:56:56Z)
- Revisiting Knowledge Distillation: An Inheritance and Exploration Framework [153.73692961660964]
Knowledge Distillation (KD) is a popular technique to transfer knowledge from a teacher model to a student model.
We propose a novel inheritance and exploration knowledge distillation framework (IE-KD)
Our IE-KD framework is generic and can be easily combined with existing distillation or mutual learning methods for training deep neural networks.
arXiv Detail & Related papers (2021-07-01T02:20:56Z)
- Subject Independent Emotion Recognition using EEG Signals Employing Attention Driven Neural Networks [2.76240219662896]
A novel deep learning framework capable of doing subject-independent emotion recognition is presented.
A convolutional neural network (CNN) with attention framework is presented for performing the task.
The proposed approach has been validated using publicly available datasets.
arXiv Detail & Related papers (2021-06-07T09:41:15Z)
- Cross-individual Recognition of Emotions by a Dynamic Entropy based on Pattern Learning with EEG features [2.863100352151122]
We propose a deep-learning framework, denoted dynamic entropy-based pattern learning (DEPL), to abstract informative indicators of the neurophysiological features shared among multiple individuals.
DEPL enhances the representations generated by a deep convolutional neural network by modelling the interdependencies between the cortical locations of dynamic entropy-based features.
arXiv Detail & Related papers (2020-09-26T07:22:07Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z) - A Dependency Syntactic Knowledge Augmented Interactive Architecture for
End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.