A novel dual-stream time-frequency contrastive pretext tasks framework for sleep stage classification
- URL: http://arxiv.org/abs/2312.09623v1
- Date: Fri, 15 Dec 2023 09:05:06 GMT
- Title: A novel dual-stream time-frequency contrastive pretext tasks framework for sleep stage classification
- Authors: Sergio Kazatzidis, Siamak Mehrkanoon
- Abstract summary: This study introduces a dual-stream pretext task architecture that operates both in the time and frequency domains.
We have examined the incorporation of the novel Frequency Similarity (FS) pretext task into two existing pretext tasks, Relative Positioning (RP) and Temporal Shuffling (TS).
The inclusion of FS resulted in a notable improvement in downstream task accuracy, with a 1.28 percent improvement on RP and a 2.02 percent improvement on TS.
- Score: 1.9399172852087767
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning addresses a challenge encountered by many supervised methods, i.e., the requirement for large amounts of annotated data. This challenge is particularly pronounced in fields such as electroencephalography (EEG) research. Self-supervised learning instead operates on pseudo-labels, generated by pretext tasks, to obtain a rich and meaningful data representation. In this study, we introduce a dual-stream pretext task architecture that operates in both the time and frequency domains. In particular, we examine the incorporation of the novel Frequency Similarity (FS) pretext task into two existing pretext tasks, Relative Positioning (RP) and Temporal Shuffling (TS). We assess the accuracy of these models on the Physionet Challenge 2018 (PC18) dataset for the downstream task of sleep stage classification. The inclusion of FS resulted in a notable improvement in downstream accuracy: 1.28 percent on RP and 2.02 percent on TS. Furthermore, when the learned embeddings are visualized using Uniform Manifold Approximation and Projection (UMAP), distinct clusters emerge, indicating that the learned representations carry meaningful information.
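The abstract fixes neither the sampling windows nor the exact FS criterion, so the following is only a minimal sketch of how the three pseudo-label generators could be implemented. The tau windows, the spectral-distance rule used for FS, and every numeric threshold here are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def rp_pairs(n_epochs, tau_pos, tau_neg, n_pairs, rng):
    """Relative Positioning: sample pairs of epoch indices; label 1
    ("close in time") if |i - j| <= tau_pos, 0 if |i - j| >= tau_neg."""
    pairs, labels = [], []
    while len(pairs) < n_pairs:
        i, j = rng.integers(0, n_epochs, size=2)
        gap = abs(int(i) - int(j))
        if gap <= tau_pos:
            pairs.append((i, j)); labels.append(1)
        elif gap >= tau_neg:
            pairs.append((i, j)); labels.append(0)
    return np.asarray(pairs), np.asarray(labels)

def ts_triplets(n_epochs, tau_pos, n_triplets, rng):
    """Temporal Shuffling: sample triplets (a, b, c) with c - a = tau_pos;
    label 1 when b lies between a and c (temporal order preserved),
    0 when b falls outside the window (shuffled)."""
    triplets, labels = [], []
    while len(triplets) < n_triplets:
        a = int(rng.integers(0, n_epochs - tau_pos))
        c = a + tau_pos
        b = int(rng.integers(0, n_epochs))
        if a < b < c:
            triplets.append((a, b, c)); labels.append(1)
        elif b < a or b > c:
            triplets.append((a, b, c)); labels.append(0)
    return np.asarray(triplets), np.asarray(labels)

def fs_label(x1, x2, threshold=0.05):
    """Frequency Similarity (assumed formulation): label two epochs as
    spectrally similar (1) when the distance between their normalized
    power spectra is below a threshold, else dissimilar (0)."""
    p1 = np.abs(np.fft.rfft(x1)) ** 2
    p2 = np.abs(np.fft.rfft(x2)) ** 2
    p1, p2 = p1 / p1.sum(), p2 / p2.sum()
    return int(np.linalg.norm(p1 - p2) < threshold)

# Example: pseudo-labels for a recording of 1000 thirty-second epochs.
rng = np.random.default_rng(0)
pairs, y_rp = rp_pairs(n_epochs=1000, tau_pos=2, tau_neg=10, n_pairs=256, rng=rng)
```

In the full method, labeled pairs and triplets of this kind would train a siamese EEG encoder on the raw epochs; only the label construction is sketched here. The UMAP visualization mentioned above could be reproduced on the resulting embeddings with, e.g., umap-learn's `UMAP(n_components=2).fit_transform(embeddings)`.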
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Prompt-Based Spatio-Temporal Graph Transfer Learning [22.855189872649376]
We propose STGP, a prompt-based framework capable of adapting to multiple diverse tasks in a data-scarce domain.
We employ learnable prompts to achieve domain and task transfer in a two-stage pipeline.
Our experiments demonstrate that STGP outperforms state-of-the-art baselines on three tasks (forecasting, kriging, and extrapolation), achieving an improvement of up to 10.7%.
arXiv Detail & Related papers (2024-05-21T02:06:40Z)
- Large Language Model Guided Knowledge Distillation for Time Series Anomaly Detection [12.585365177675607]
AnomalyLLM demonstrates state-of-the-art performance on 15 datasets, improving accuracy by at least 14.5% on the UCR dataset.
arXiv Detail & Related papers (2024-01-26T09:51:07Z)
- Spatio-Temporal Contrastive Self-Supervised Learning for POI-level Crowd Flow Inference [23.8192952068949]
We present a novel Contrastive Self-supervised learning framework for Spatio-Temporal data (CSST).
Our approach begins by constructing a spatial adjacency graph based on the Points of Interest (POIs) and the distances between them (a minimal sketch follows this entry).
We adopt a swapped prediction approach to anticipate the representation of the target subgraph from similar instances.
Our experiments, conducted on two real-world datasets, demonstrate that the CSST pre-trained on extensive noisy data consistently outperforms models trained from scratch.
arXiv Detail & Related papers (2023-09-06T02:51:24Z)
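The CSST summary above mentions building a spatial adjacency graph from POI distances but gives no construction details; a minimal sketch under the assumption of a simple distance threshold might look like this (the Euclidean metric, the radius cutoff, and the name poi_adjacency are all illustrative, not from the paper).

```python
import numpy as np

def poi_adjacency(coords, radius):
    """Connect two POIs when the Euclidean distance between their
    coordinates is below `radius`; returns a binary adjacency matrix.
    The metric and thresholding rule are assumptions for illustration."""
    coords = np.asarray(coords, dtype=float)        # shape (n_pois, 2)
    diff = coords[:, None, :] - coords[None, :, :]  # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)            # pairwise distances
    adj = (dist < radius).astype(int)
    np.fill_diagonal(adj, 0)                        # drop self-loops
    return adj

# Example: four POIs on a plane; connect those closer than 1.5 units.
adj = poi_adjacency([[0, 0], [1, 0], [0, 1], [3, 3]], radius=1.5)
```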
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked first.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tackles scenarios in which the test data does not fully follow the distribution of the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph neural network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- Learning Invariant Representations across Domains and Tasks [81.30046935430791]
We propose a novel Task Adaptation Network (TAN) to solve the unsupervised task transfer problem.
In addition to learning transferable features via domain-adversarial training, we propose a novel task semantic adaptor that uses the learning-to-learn strategy to adapt the task semantics.
TAN significantly increases recall and F1 score by 5.0% and 7.8%, respectively, compared to recent strong baselines.
arXiv Detail & Related papers (2021-03-03T11:18:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.