Self-supervised Contrastive Learning for EEG-based Sleep Staging
- URL: http://arxiv.org/abs/2109.07839v1
- Date: Thu, 16 Sep 2021 10:05:33 GMT
- Title: Self-supervised Contrastive Learning for EEG-based Sleep Staging
- Authors: Xue Jiang, Jianhui Zhao, Bo Du, Zhiyong Yuan
- Abstract summary: We propose a self-supervised contrastive learning method of EEG signals for sleep stage classification.
Specifically, the network's performance depends on the choice of transformations and the amount of unlabeled data used in the training process.
- Score: 29.897104001988748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: EEG signals are usually easy to obtain but expensive to label. Although
supervised learning has been widely used in EEG signal analysis, its
generalization performance is limited by the amount of annotated data.
Self-supervised learning (SSL), a popular learning paradigm in computer
vision (CV) and natural language processing (NLP), can employ unlabeled data to
make up for the data shortage of supervised learning. In this paper, we propose
a self-supervised contrastive learning method on EEG signals for sleep stage
classification. During training, the network solves a pretext task: matching
the correct transformation pairs generated from EEG signals. In this way, the
network improves its representation ability by learning general features of
EEG signals, and it also becomes more robust to diverse data, that is, it
learns to extract invariant features from varying inputs. Specifically, the
network's performance depends on the choice of transformations and on the
amount of unlabeled data used during self-supervised training. Empirical
evaluations on the Sleep-EDF dataset demonstrate the competitive performance
of our method on sleep staging (88.16% accuracy and 81.96% F1 score) and
verify the effectiveness of the SSL strategy for EEG signal analysis in
limited-labeled-data regimes. All code is publicly available online.
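As a concrete illustration of the pretext task described above, here is a minimal sketch of contrastive pre-training on unlabeled EEG epochs. The transformations, the toy encoder, and all hyperparameters below are illustrative assumptions on our part, not the authors' published implementation (see their released code for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative transformations; the paper's actual set may differ.
def add_gaussian_noise(x, sigma=0.05):
    return x + sigma * torch.randn_like(x)

def random_time_mask(x, max_len=200):
    x = x.clone()
    start = torch.randint(0, x.shape[-1] - max_len, (1,)).item()
    x[..., start:start + max_len] = 0.0
    return x

class EEGEncoder(nn.Module):
    """Tiny 1-D CNN encoder for single-channel EEG epochs (assumed)."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=25, stride=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        return self.net(x)

def nt_xent(z1, z2, tau=0.5):
    """Standard NT-Xent loss: matched views are positives, rest negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    n = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))            # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

# One training step on a batch of unlabeled epochs (batch, 1, time).
encoder = EEGEncoder()
x = torch.randn(16, 1, 3000)                     # e.g. 30 s at 100 Hz
loss = nt_xent(encoder(add_gaussian_noise(x)), encoder(random_time_mask(x)))
loss.backward()
```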
Related papers
- Context-Aware Predictive Coding: A Representation Learning Framework for WiFi Sensing [0.0]
WiFi sensing is an emerging technology that utilizes wireless signals for various sensing applications.
In this paper, we introduce a novel SSL framework called Context-Aware Predictive Coding (CAPC).
CAPC effectively learns from unlabelled data and adapts to diverse environments.
Our evaluations demonstrate that CAPC not only outperforms other SSL methods and supervised approaches, but also achieves superior generalization capabilities.
arXiv Detail & Related papers (2024-09-16T17:59:49Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions using field interpolation, enabling source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN)
CMMN consists of filtering the signals so as to adapt their power spectral density (PSD) to a Wasserstein barycenter estimated on training data.
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains, independent of the neural network architecture (a minimal sketch of the idea follows this entry).
arXiv Detail & Related papers (2023-05-30T08:24:01Z)
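To make the CMMN idea above concrete, below is a minimal NumPy/SciPy sketch. It uses the fact that, for centered stationary Gaussian signals, the Wasserstein barycenter of PSDs is the squared mean of their square roots; each signal is then filtered toward that barycenter. The function names and the frequency-domain filtering shortcut are our own assumptions, not the paper's reference code.

```python
import numpy as np
from scipy.signal import welch

def barycenter_psd(signals, fs, nperseg=256):
    """Wasserstein barycenter of PSDs: square of the mean of sqrt-PSDs
    (valid for centered stationary Gaussian signals)."""
    sqrt_psds = []
    for s in signals:
        f, p = welch(s, fs=fs, nperseg=nperseg)
        sqrt_psds.append(np.sqrt(p))
    return f, np.mean(sqrt_psds, axis=0) ** 2

def cmmn_filter(signal, fs, f, bar_psd, nperseg=256):
    """Map one signal toward the barycenter: filter with magnitude
    response sqrt(barycenter PSD / own PSD)."""
    _, p = welch(signal, fs=fs, nperseg=nperseg)
    gain = np.sqrt(bar_psd / np.maximum(p, 1e-12))
    # Zero-phase filtering in the frequency domain (illustrative shortcut).
    h = np.interp(np.fft.rfftfreq(len(signal), 1 / fs), f, gain)
    return np.fft.irfft(np.fft.rfft(signal) * h, n=len(signal))

# Usage: estimate the barycenter on training signals, then adapt each one.
fs = 100.0
train = [np.random.randn(3000) for _ in range(8)]   # stand-ins for EEG epochs
f, bar = barycenter_psd(train, fs)
adapted = cmmn_filter(train[0], fs, f, bar)
```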
- Self-Supervised PPG Representation Learning Shows High Inter-Subject Variability [3.8036939971290007]
We propose a Self-Supervised Learning (SSL) method with a pretext task of signal reconstruction to learn an informative, generalized PPG representation (a minimal sketch of such a pretext task follows this entry).
Results show that in a very limited labeled-data setting (10 samples per class or less), using SSL is beneficial.
SSL may pave the way for the broader use of machine learning models on PPG data in label-scarce regimes.
arXiv Detail & Related papers (2022-12-07T19:02:45Z)
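As an illustration of a signal-reconstruction pretext task like the one named above, here is a minimal sketch: random segments of the input are masked and a small autoencoder is trained to fill them back in. The architecture, masking scheme, and sizes are assumptions for illustration, not the paper's actual model.

```python
import torch
import torch.nn as nn

class PPGAutoencoder(nn.Module):
    """Small 1-D conv autoencoder; reconstructing masked PPG segments is
    the pretext task (architecture is an illustrative assumption)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def mask_segments(x, n_masks=3, width=64):
    """Zero out random windows; the network must fill them back in."""
    x = x.clone()
    for _ in range(n_masks):
        start = torch.randint(0, x.shape[-1] - width, (1,)).item()
        x[..., start:start + width] = 0.0
    return x

model = PPGAutoencoder()
x = torch.randn(8, 1, 1024)                 # a batch of raw PPG windows
loss = nn.functional.mse_loss(model(mask_segments(x)), x)
loss.backward()
```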
- Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision? [51.00034621304361]
We study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL).
We study the effectiveness of contrastive learning algorithms under decentralized learning settings.
arXiv Detail & Related papers (2022-10-20T01:32:41Z)
- Self-supervised EEG Representation Learning for Automatic Sleep Staging [26.560516415840965]
We propose a self-supervised model, named Contrast with the World Representation (ContraWR), for EEG signal representation learning.
ContraWR is evaluated on three real-world EEG datasets that include both at-home and in-lab recording settings.
ContraWR beats supervised learning when fewer training labels are available.
arXiv Detail & Related papers (2021-10-27T04:17:27Z)
- Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation [56.264157127549446]
Speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction.
One of the main challenges in SER is data scarcity.
We propose a transfer learning strategy combined with spectrogram augmentation (a SpecAugment-style sketch follows this entry).
arXiv Detail & Related papers (2021-08-05T10:39:39Z)
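The entry above names spectrogram augmentation without detail; a common recipe is SpecAugment-style masking, sketched below under the assumption of a (frequency, time) magnitude spectrogram. The exact augmentation used in the paper may differ.

```python
import numpy as np

def spec_augment(spec, n_freq_masks=2, n_time_masks=2, max_width=8, rng=None):
    """SpecAugment-style masking: zero random frequency bands and time
    spans of a (freq, time) spectrogram. Parameters are illustrative."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    n_freq, n_time = spec.shape
    for _ in range(n_freq_masks):
        w = rng.integers(1, max_width + 1)
        f0 = rng.integers(0, n_freq - w)
        spec[f0:f0 + w, :] = 0.0                 # mask a frequency band
    for _ in range(n_time_masks):
        w = rng.integers(1, max_width + 1)
        t0 = rng.integers(0, n_time - w)
        spec[:, t0:t0 + w] = 0.0                 # mask a time span
    return spec

augmented = spec_augment(np.abs(np.random.randn(128, 300)))  # (freq, time)
```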
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
- Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence [8.110949636804772]
Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models.
We propose a self-supervised approach, termed scalogram-signal correspondence learning, based on wavelet transforms to learn useful representations from unlabeled sensor inputs (a minimal sketch of such a pretext task follows this entry).
We extensively assess the quality of learned features with our multi-view strategy on diverse public datasets, achieving strong performance in all domains.
arXiv Detail & Related papers (2020-07-25T21:59:17Z)
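To illustrate scalogram-signal correspondence learning as described above, here is a minimal sketch: a wavelet scalogram is computed for each raw segment, and a two-branch network is trained to decide whether a (signal, scalogram) pair corresponds. The encoders, sizes, and wavelet choice are illustrative assumptions, not the paper's actual design.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def scalogram(signal, scales=np.arange(1, 65)):
    """Continuous wavelet transform magnitude (Morlet) as the 2-D 'view'."""
    coefs, _ = pywt.cwt(signal, scales, "morl")
    return np.abs(coefs).astype(np.float32)          # (scales, time)

class CorrespondenceNet(nn.Module):
    """Pretext task: decide whether a scalogram belongs to a raw segment."""
    def __init__(self, dim=64):
        super().__init__()
        self.sig_enc = nn.Sequential(                # 1-D branch: raw signal
            nn.Conv1d(1, 16, 9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, dim))
        self.scal_enc = nn.Sequential(               # 2-D branch: scalogram
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))
        self.head = nn.Linear(2 * dim, 1)            # corresponds: yes / no

    def forward(self, sig, scal):
        z = torch.cat([self.sig_enc(sig), self.scal_enc(scal)], dim=1)
        return self.head(z).squeeze(1)

# Positive pair: a segment with its own scalogram (label 1); negatives
# would pair a segment with another segment's scalogram (label 0).
seg = np.random.randn(512).astype(np.float32)
sig = torch.from_numpy(seg)[None, None]              # (1, 1, 512)
scal = torch.from_numpy(scalogram(seg))[None, None]  # (1, 1, 64, 512)
net = CorrespondenceNet()
loss = nn.functional.binary_cross_entropy_with_logits(
    net(sig, scal), torch.ones(1))
```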
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To make training on the resulting enlarged dataset tractable, we apply a dataset distillation strategy that compresses it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.