Partial Label Learning for Emotion Recognition from EEG
- URL: http://arxiv.org/abs/2302.13170v1
- Date: Sat, 25 Feb 2023 21:36:39 GMT
- Title: Partial Label Learning for Emotion Recognition from EEG
- Authors: Guangyi Zhang and Ali Etemad
- Abstract summary: We adapt and re-implement six state-of-the-art Partial Label Learning (PLL) approaches for emotion recognition from EEG.
We evaluate the performance of all methods in classical and real-world experiments.
- Score: 23.40229188549055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully supervised learning has recently achieved promising performance in
various electroencephalography (EEG) learning tasks by training on large
datasets with ground truth labels. However, labeling EEG data for affective
experiments is challenging, as it can be difficult for participants to
accurately distinguish between similar emotions, resulting in ambiguous
labeling (reporting multiple emotions for one EEG instance). This ambiguity can
degrade model performance, as the ground truth is hidden within multiple
candidate labels. To address this issue, Partial Label Learning (PLL)
has been proposed to identify the ground truth from candidate labels during the
training phase, and has shown good performance in the computer vision domain.
However, PLL methods have not yet been adopted for EEG representation learning
or implemented for emotion recognition tasks. In this paper, we adapt and
re-implement six state-of-the-art PLL approaches for emotion recognition from
EEG on a large emotion dataset (SEED-V, containing five emotion classes). We
evaluate the performance of all methods in classical and real-world
experiments. The results show that PLL methods can achieve strong results in
affective computing from EEG and achieve comparable performance to fully
supervised learning. We also investigate the effect of label disambiguation, a
key step in many PLL methods. The results show that in most cases, label
disambiguation would benefit the model when the candidate labels are generated
based on their similarities to the ground truth rather than obeying a uniform
distribution. This finding suggests the potential of using label
disambiguation-based PLL methods for real-world affective tasks. We make the
source code of this paper publicly available at:
https://github.com/guangyizhangbci/PLL-Emotion-EEG.
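To make the setup above concrete, here is a minimal NumPy sketch of the two candidate-label generation schemes the abstract contrasts (uniform versus similarity-based) and of one identification-style disambiguation step that renormalises the model's predictions over each candidate set, in the spirit of PRODEN-like methods. The class count follows SEED-V, but the flip probability, similarity matrix, and toy predictions are illustrative assumptions, not the paper's settings.
```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 5  # SEED-V contains five emotion classes

def uniform_candidates(true_label, flip_prob=0.3):
    """Add each incorrect label to the candidate set with a fixed probability."""
    candidates = {true_label}
    for c in range(NUM_CLASSES):
        if c != true_label and rng.random() < flip_prob:
            candidates.add(c)
    return candidates

def similarity_candidates(true_label, sim, scale=0.8):
    """Add incorrect labels with probability proportional to their similarity
    to the ground truth, mimicking participants who confuse similar emotions."""
    candidates = {true_label}
    for c in range(NUM_CLASSES):
        if c != true_label and rng.random() < scale * sim[true_label, c]:
            candidates.add(c)
    return candidates

def disambiguation_step(probs, candidate_mask):
    """Identification-style update: renormalise the model's predicted
    probabilities over each candidate set to get new soft training targets."""
    masked = probs * candidate_mask  # zero out non-candidate classes
    return masked / masked.sum(axis=1, keepdims=True)

# hypothetical pairwise emotion similarity (symmetric, zero diagonal)
SIM = np.array([[0.0, 0.8, 0.2, 0.1, 0.1],
                [0.8, 0.0, 0.2, 0.1, 0.1],
                [0.2, 0.2, 0.0, 0.6, 0.3],
                [0.1, 0.1, 0.6, 0.0, 0.4],
                [0.1, 0.1, 0.3, 0.4, 0.0]])

print(sorted(uniform_candidates(0)))          # candidates drawn uniformly
print(sorted(similarity_candidates(0, SIM)))  # candidates biased to similar classes

# one disambiguation step on toy model predictions for three instances
probs = rng.dirichlet(np.ones(NUM_CLASSES), size=3)
mask = np.zeros((3, NUM_CLASSES))
for i, y in enumerate([0, 2, 4]):
    mask[i, list(uniform_candidates(y))] = 1.0
print(disambiguation_step(probs, mask).round(3))
```
The similarity-based scheme is what the abstract finds favourable for label disambiguation: candidates that co-occur because they genuinely resemble the ground truth carry signal that disambiguation can exploit.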
Related papers
- Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
arXiv Detail & Related papers (2024-04-15T08:21:17Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
When the candidate set may not even contain the ground truth, the problem becomes Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL) that leverages unreliability-robust contrastive learning to help the model fortify against unreliable partial labels effectively.
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, the existing works usually do not fully investigate the effectiveness of the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier (see the sketch after this entry).
arXiv Detail & Related papers (2023-05-17T02:13:23Z)
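A minimal PyTorch sketch of the complementary idea above: labels outside the candidate set are certainly wrong, so the loss pushes their predicted probabilities toward zero. This is a generic complementary-label loss, not the paper's exact objective, and the mask and batch are toy assumptions.
```python
import torch

def complementary_loss(logits, candidate_mask):
    """Generic complementary-label loss: classes outside the candidate set are
    certainly wrong, so push their predicted probabilities toward zero."""
    probs = torch.softmax(logits, dim=1)
    non_candidates = 1.0 - candidate_mask
    per_class = -torch.log1p(-probs.clamp(max=1 - 1e-6))  # -log(1 - p_c)
    per_instance = (per_class * non_candidates).sum(1)
    return (per_instance / non_candidates.sum(1).clamp(min=1)).mean()

# toy usage: 4 instances, 5 classes; 1s mark candidate labels
logits = torch.randn(4, 5, requires_grad=True)
mask = torch.tensor([[1, 1, 0, 0, 0],
                     [1, 0, 1, 0, 0],
                     [0, 1, 1, 1, 0],
                     [1, 0, 0, 0, 1]], dtype=torch.float32)
complementary_loss(logits, mask).backward()
```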
- Towards Effective Visual Representations for Partial-Label Learning [49.91355691337053]
Under partial-label learning (PLL), for each training instance, only a set of ambiguous labels containing the unknown true label is accessible.
Without access to true labels, positive points are predicted using pseudo-labels that are inherently noisy, and negative points often require large batches or momentum encoders.
In this paper, we rethink the state-of-the-art contrastive method PiCO, which demonstrates significant scope for improvement in representation learning (see the sketch after this entry).
arXiv Detail & Related papers (2023-05-10T12:01:11Z)
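A toy PyTorch sketch of the contrastive ingredient discussed above: treat two instances as a positive pair when they share the same (possibly noisy) pseudo-label, in the spirit of PiCO-like methods. It omits PiCO's prototypes and momentum encoder; the batch size, embedding size, and pseudo-labels are illustrative assumptions.
```python
import torch
import torch.nn.functional as F

def pll_contrastive_loss(emb, pseudo_labels, temperature=0.1):
    """Supervised-contrastive-style loss where positives are the pairs that
    share the same (possibly noisy) pseudo-label."""
    emb = F.normalize(emb, dim=1)                    # cosine similarity space
    sim = emb @ emb.T / temperature
    eye = torch.eye(emb.size(0), dtype=torch.bool)
    pos = (pseudo_labels[:, None] == pseudo_labels[None, :]) & ~eye
    sim = sim.masked_fill(eye, float('-inf'))        # drop self-similarity
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -((log_prob * pos.float()).sum(1) / pos.sum(1).clamp(min=1)).mean()

emb = torch.randn(8, 16)               # a batch of 8 EEG embeddings
pseudo = torch.randint(0, 5, (8,))     # pseudo-labels over 5 emotions
loss = pll_contrastive_loss(emb, pseudo)
```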
- EEGMatch: Learning with Incomplete Labels for Semi-Supervised EEG-based Cross-Subject Emotion Recognition [7.1695247553867345]
We propose a novel semi-supervised learning framework (EEGMatch) to leverage both labeled and unlabeled EEG data.
Extensive experiments are conducted on two benchmark databases (SEED and SEED-IV).
arXiv Detail & Related papers (2023-03-27T12:02:33Z)
- Unifying the Discrete and Continuous Emotion labels for Speech Emotion Recognition [28.881092401807894]
In paralinguistic analysis for emotion detection from speech, emotions have been identified with discrete or dimensional (continuous-valued) labels.
We propose a model to jointly predict continuous and discrete emotional attributes.
arXiv Detail & Related papers (2022-10-29T16:12:31Z)
- Multimodal Emotion Recognition with Modality-Pairwise Unsupervised Contrastive Loss [80.79641247882012]
We focus on unsupervised feature learning for Multimodal Emotion Recognition (MER).
We consider discrete emotions and use text, audio, and vision as modalities.
Our method, based on a contrastive loss between pairwise modalities, is the first such attempt in the MER literature.
arXiv Detail & Related papers (2022-07-23T10:11:24Z)
- PARSE: Pairwise Alignment of Representations in Semi-Supervised EEG Learning for Emotion Recognition [23.40229188549055]
We propose PARSE, a novel semi-supervised architecture for learning strong EEG representations for emotion recognition.
To reduce the potential distribution mismatch between the large amounts of unlabeled data and the limited amount of labeled data, PARSE uses pairwise representation alignment (see the sketch after this entry).
arXiv Detail & Related papers (2022-02-11T01:10:17Z)
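A toy sketch of representation alignment in the spirit of the entry above: penalise the gap between the statistics of labeled and unlabeled representation batches. PARSE's actual pairwise alignment mechanism differs; the moment-matching loss and dimensions here are illustrative assumptions.
```python
import torch

def alignment_loss(z_labeled, z_unlabeled):
    """Toy pairwise alignment: penalise the gap between the first and second
    moments of labeled and unlabeled representation batches."""
    mean_gap = (z_labeled.mean(0) - z_unlabeled.mean(0)).pow(2).sum()
    var_gap = (z_labeled.var(0) - z_unlabeled.var(0)).pow(2).sum()
    return mean_gap + var_gap

z_l = torch.randn(16, 32)         # representations of a labeled EEG batch
z_u = torch.randn(64, 32)         # representations of an unlabeled EEG batch
loss = alignment_loss(z_l, z_u)   # added to the supervised loss during training
```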
- Holistic Semi-Supervised Approaches for EEG Representation Learning [14.67085109524245]
We adapt three holistic semi-supervised approaches, namely MixMatch, FixMatch, and AdaMatch, as well as five classical semi-supervised methods for EEG learning.
Experiments with different amounts of limited labeled samples show that the holistic approaches achieve strong results even when only one labeled sample is used per class (see the sketch after this entry).
arXiv Detail & Related papers (2021-09-24T03:58:13Z)
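As a flavour of the holistic approaches named above, here is a minimal FixMatch-style step in PyTorch: pseudo-label a weakly augmented EEG segment, keep it only if the prediction is confident, and train the strongly augmented view on that pseudo-label. The model, augmentations, feature size, and threshold are illustrative assumptions, not the paper's configuration.
```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_x, strong_x, threshold=0.95):
    """FixMatch-style step: pseudo-label the weakly augmented view, keep only
    confident predictions, then train the strongly augmented view on them."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_x), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf >= threshold
    if not keep.any():
        return weak_x.new_zeros(())  # nothing confident enough in this batch
    return F.cross_entropy(model(strong_x[keep]), pseudo[keep])

# toy usage with a linear model over flattened EEG features (illustrative only)
model = torch.nn.Linear(310, 5)        # e.g. 62 channels x 5 bands -> 5 emotions
weak = torch.randn(32, 310)            # weakly augmented segments (e.g. light noise)
strong = weak + 0.5 * torch.randn_like(weak)  # stronger perturbation
loss = fixmatch_unlabeled_loss(model, weak, strong)
```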
- A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotion-discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This is implemented by measuring the outputs of multiple brain-region-level discriminators and a single sample-level discriminator (see the sketch after this entry).
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
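A toy version of the discriminator-based attention described above, assuming each of 62 hypothetical brain-region discriminators outputs a probability that its region comes from the source domain: regions whose origin the discriminator cannot tell apart (p near 0.5) are treated as more transferable and weighted more. TANN's actual architecture is more involved.
```python
import torch

def transferability_weights(disc_probs):
    """Toy discriminator-based attention: a region whose source/target origin
    the discriminator cannot tell apart (p near 0.5) is treated as more
    transferable and receives a larger weight."""
    transferability = 1.0 - (2.0 * disc_probs - 1.0).abs()  # 1 at p=0.5, 0 at p=0 or 1
    return transferability / transferability.sum().clamp(min=1e-8)

p = torch.rand(62)                    # outputs of 62 hypothetical region discriminators
weights = transferability_weights(p)  # attention over brain regions, sums to 1
```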