Self-supervised Learning for Label-Efficient Sleep Stage Classification:
A Comprehensive Evaluation
- URL: http://arxiv.org/abs/2210.06286v1
- Date: Mon, 10 Oct 2022 09:01:17 GMT
- Title: Self-supervised Learning for Label-Efficient Sleep Stage Classification:
A Comprehensive Evaluation
- Authors: Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong
Kwoh, and Xiaoli Li
- Abstract summary: The self-supervised learning (SSL) paradigm has shone as one of the most successful techniques for overcoming the scarcity of labeled data.
In this paper, we evaluate the efficacy of SSL in boosting the performance of existing SSC models in the few-labels regime.
We find that fine-tuning the pretrained SSC models with only 5% of the labeled data achieves performance competitive with supervised training on the full labels.
- Score: 13.895332825128076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The past few years have witnessed a remarkable advance in deep learning for
EEG-based sleep stage classification (SSC). However, the success of these
models relies on access to a massive amount of labeled training data, which
limits their applicability in real-world scenarios. In such scenarios, sleep
labs can generate a massive amount of data, but labeling these data is
expensive and time-consuming. Recently, the self-supervised learning (SSL)
paradigm has shone as one of the most successful techniques for overcoming the
scarcity of labeled data. In this paper, we evaluate the efficacy of SSL in
boosting the performance of existing SSC models in the few-labels regime. We
conduct a thorough study on three SSC datasets and find that fine-tuning the
pretrained SSC models with only 5% of the labeled data achieves performance
competitive with supervised training on the full labels. Moreover,
self-supervised pretraining helps SSC models become more robust to data
imbalance and domain shift. The code is publicly available at
\url{https://github.com/emadeldeen24/eval_ssl_ssc}.
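To make the evaluated recipe concrete, below is a minimal, illustrative PyTorch sketch of the two-stage setup described in the abstract: self-supervised pretraining on unlabeled EEG epochs, followed by fine-tuning with roughly 5% of the labels. This is not the authors' implementation (that is available at the GitHub link above); the encoder architecture, the SimCLR-style NT-Xent objective, the augmentation, and all hyperparameters are assumptions chosen purely for illustration.

```python
# Illustrative sketch only -- not the paper's official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EEGEncoder(nn.Module):
    """Toy 1D-CNN encoder for 30-second single-channel EEG epochs (assumed architecture)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=25, stride=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def augment(x):
    # Simple scaling + jitter augmentation (an assumed choice, not the paper's).
    return x * (1 + 0.1 * torch.randn(x.size(0), 1, 1)) + 0.01 * torch.randn_like(x)

def nt_xent(z1, z2, temperature=0.5):
    # Standard NT-Xent contrastive loss between two augmented views (SimCLR-style).
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # ignore self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])         # positive is the other view
    return F.cross_entropy(sim, targets)

# Stage 1: self-supervised pretraining on unlabeled epochs (random tensors stand in for real EEG).
encoder = EEGEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
unlabeled = torch.randn(256, 1, 3000)  # 256 unlabeled 30-s epochs at 100 Hz
for _ in range(5):
    loss = nt_xent(encoder(augment(unlabeled)), encoder(augment(unlabeled)))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: fine-tune a classifier head (plus the encoder) with only ~5% of the labels.
labeled_x = torch.randn(1000, 1, 3000)
labeled_y = torch.randint(0, 5, (1000,))  # 5 sleep stages: W, N1, N2, N3, REM
few = torch.randperm(1000)[:50]           # 5% of the labeled epochs
clf = nn.Sequential(encoder, nn.Linear(128, 5))
opt = torch.optim.Adam(clf.parameters(), lr=1e-4)
for _ in range(5):
    loss = F.cross_entropy(clf(labeled_x[few]), labeled_y[few])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the pretrained encoder weights would be saved after stage 1 and reloaded before fine-tuning; the same in-memory encoder is reused above only to keep the sketch short, and the single contrastive objective shown here is just one possible instantiation of SSL pretraining.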
Related papers
- A Survey of the Self Supervised Learning Mechanisms for Vision Transformers [5.152455218955949]
The application of self-supervised learning (SSL) in vision tasks has gained significant attention.
We develop a comprehensive taxonomy that systematically classifies SSL techniques.
We discuss the motivations behind SSL, review popular pre-training tasks, and highlight the challenges and advancements in this field.
arXiv Detail & Related papers (2024-08-30T07:38:28Z)
- A Self-Supervised Learning Pipeline for Demographically Fair Facial Attribute Classification [3.5092955099876266]
This paper proposes a fully self-supervised pipeline for demographically fair facial attribute classification.
We leverage completely unlabeled data, pseudo-labeled via pre-trained encoders, together with diverse data curation techniques and meta-learning-based weighted contrastive learning.
arXiv Detail & Related papers (2024-07-14T07:11:57Z)
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- On Pretraining Data Diversity for Self-Supervised Learning [57.91495006862553]
We explore the impact of training with more diverse datasets on the performance of self-supervised learning (SSL) under a fixed computational budget.
Our findings consistently demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal.
arXiv Detail & Related papers (2024-03-20T17:59:58Z)
- Evaluating Fairness in Self-supervised and Supervised Models for Sequential Data [10.626503137418636]
Self-supervised learning (SSL) has become the de facto training paradigm of large models.
This study explores the impact of pre-training and fine-tuning strategies on fairness.
arXiv Detail & Related papers (2024-01-03T09:31:43Z)
- Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits comparable performance to weakly and some fully supervised methods.
arXiv Detail & Related papers (2023-12-13T09:32:50Z)
- Semi-Supervised Learning in the Few-Shot Zero-Shot Scenario [14.916971861796384]
Semi-Supervised Learning (SSL) is a framework that utilizes both labeled and unlabeled data to enhance model performance.
We propose a general approach to augment existing SSL methods, enabling them to handle situations where certain classes are missing.
Our experimental results reveal significant improvements in accuracy when compared to state-of-the-art SSL, open-set SSL, and open-world SSL methods.
arXiv Detail & Related papers (2023-08-27T14:25:07Z)
- DATA: Domain-Aware and Task-Aware Pre-training [94.62676913928831]
We present DATA, a simple yet effective NAS approach specialized for self-supervised learning (SSL).
Our method achieves promising results across a wide range of computation costs on downstream tasks, including image classification, object detection and semantic segmentation.
arXiv Detail & Related papers (2022-03-17T02:38:49Z)
- Robust Deep Semi-Supervised Learning: A Brief Introduction [63.09703308309176]
Semi-supervised learning (SSL) aims to improve learning performance by leveraging unlabeled data when labels are insufficient.
SSL with deep models has proven to be successful on standard benchmark tasks.
However, such models are still vulnerable to various robustness threats in real-world applications.
arXiv Detail & Related papers (2022-02-12T04:16:41Z)
- Self-Tuning for Data-Efficient Deep Learning [75.34320911480008]
Self-Tuning is a novel approach to enable data-efficient deep learning.
It unifies the exploration of labeled and unlabeled data and the transfer of a pre-trained model.
It outperforms its SSL and TL counterparts on five tasks by sharp margins.
arXiv Detail & Related papers (2021-02-25T14:56:19Z)