Adversarial Domain Adaptation with Self-Training for EEG-based Sleep
Stage Classification
- URL: http://arxiv.org/abs/2107.04470v1
- Date: Fri, 9 Jul 2021 14:56:12 GMT
- Title: Adversarial Domain Adaptation with Self-Training for EEG-based Sleep
Stage Classification
- Authors: Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong
Kwoh, Xiaoli Li, and Cuntai Guan
- Abstract summary: We propose a novel adversarial learning framework to tackle the domain shift problem in the unlabeled target domain.
First, we develop unshared attention mechanisms to preserve the domain-specific features in the source and target domains.
Second, we design a self-training strategy to align the fine-grained class distributions for the source and target domains via target domain pseudo labels.
- Score: 13.986662296156013
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sleep staging is of great importance in the diagnosis and treatment of sleep
disorders. Recently, numerous data-driven deep learning models have been
proposed for automatic sleep staging. They mainly rely on the assumption that
training and testing data are drawn from the same distribution, which may not
hold in real-world scenarios. Unsupervised domain adaptation (UDA) has been
developed recently to handle this domain shift problem. However, previous UDA
methods applied to sleep staging have two main limitations. First, they rely on
a totally shared model for the domain alignment, which may lose the
domain-specific information during feature extraction. Second, they only align
the source and target distributions globally without considering the class
information in the target domain, which hinders the classification performance
of the model. In this work, we propose a novel adversarial learning framework
to tackle the domain shift problem in the unlabeled target domain. First, we
develop unshared attention mechanisms to preserve the domain-specific features
in the source and target domains. Second, we design a self-training strategy to
align the fine-grained class distributions for the source and target domains
via target domain pseudo labels. We also propose dual distinct classifiers to
increase the robustness and quality of the pseudo labels. The experimental
results on six cross-domain scenarios validate the efficacy of our proposed
framework for sleep staging and its advantage over state-of-the-art UDA
methods.
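The self-training component described above depends on the quality of the target-domain pseudo labels, which the paper improves with dual distinct classifiers. A minimal sketch of one plausible filtering rule is shown below: keep a target sample only when both classifiers agree and both are confident. The function name, the agreement-plus-confidence criterion, and the threshold value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def select_pseudo_labels(probs_a, probs_b, threshold=0.8):
    """Filter target samples using two classifiers' softmax outputs.

    A sample receives a pseudo label only if both classifiers predict
    the same class AND each does so with confidence >= threshold.
    Returns the kept pseudo labels and the boolean selection mask.
    """
    preds_a = probs_a.argmax(axis=1)   # hard predictions of classifier A
    preds_b = probs_b.argmax(axis=1)   # hard predictions of classifier B
    conf_a = probs_a.max(axis=1)       # confidence (max softmax) of A
    conf_b = probs_b.max(axis=1)       # confidence (max softmax) of B
    mask = (preds_a == preds_b) & (conf_a >= threshold) & (conf_b >= threshold)
    return preds_a[mask], mask

# Toy example: 3 target samples, 2 sleep-stage classes.
probs_a = np.array([[0.95, 0.05], [0.60, 0.40], [0.10, 0.90]])
probs_b = np.array([[0.92, 0.08], [0.30, 0.70], [0.15, 0.85]])
labels, mask = select_pseudo_labels(probs_a, probs_b)
# Sample 1 is dropped (the classifiers disagree); samples 0 and 2 are kept.
```

The selected samples would then be treated as labeled target data in the next self-training round, so that class-conditional distributions can be aligned rather than only the global marginals.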
Related papers
- Unsupervised Domain Adaptation for Anatomical Landmark Detection [5.070344284426738]
We propose a novel framework for anatomical landmark detection under the setting of unsupervised domain adaptation (UDA).
The framework leverages self-training and domain adversarial learning to address the domain gap during adaptation.
Our experiments on cephalometric and lung landmark detection show the effectiveness of the method, which reduces the domain gap by a large margin and outperforms other UDA methods consistently.
arXiv Detail & Related papers (2023-08-25T10:22:13Z) - Making the Best of Both Worlds: A Domain-Oriented Transformer for
Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z) - Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
We focus on generating target-specific pseudo labels while suppressing high entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z) - Joint Distribution Alignment via Adversarial Learning for Domain
Adaptive Object Detection [11.262560426527818]
Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data.
Recently, mainstream approaches perform this task through adversarial learning, yet still suffer from two limitations.
We propose a joint adaptive detection framework (JADF) to address the above challenges.
arXiv Detail & Related papers (2021-09-19T00:27:08Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Cross-domain Self-supervised Learning for Domain Adaptation with Few
Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z) - Missing-Class-Robust Domain Adaptation by Unilateral Alignment for Fault
Diagnosis [3.786700931138978]
Domain adaptation aims at improving model performance by leveraging the learned knowledge in the source domain and transferring it to the target domain.
Recently, domain adversarial methods have been particularly successful in alleviating the distribution shift between the source and the target domains.
We demonstrate in this paper that the performance of domain adversarial methods can be vulnerable to an incomplete target label space during training.
arXiv Detail & Related papers (2020-01-07T13:19:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.