C-SFDA: A Curriculum Learning Aided Self-Training Framework for
Efficient Source Free Domain Adaptation
- URL: http://arxiv.org/abs/2303.17132v1
- Date: Thu, 30 Mar 2023 03:42:54 GMT
- Title: C-SFDA: A Curriculum Learning Aided Self-Training Framework for
Efficient Source Free Domain Adaptation
- Authors: Nazmul Karim, Niluthpol Chowdhury Mithun, Abhinav Rajvanshi, Han-pang
Chiu, Supun Samarasekera, Nazanin Rahnavard
- Abstract summary: Unsupervised domain adaptation (UDA) approaches focus on adapting models trained on a labeled source domain to an unlabeled target domain.
We propose C-SFDA, a curriculum learning aided self-training framework for SFDA that adapts efficiently and reliably to changes across domains based on selective pseudo-labeling.
- Score: 18.58469201869616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation (UDA) approaches focus on adapting models
trained on a labeled source domain to an unlabeled target domain. UDA methods
have a strong assumption that the source data is accessible during adaptation,
which may not be feasible in many real-world scenarios due to privacy concerns
and resource constraints of devices. In this regard, source-free domain
adaptation (SFDA) excels as access to source data is no longer required during
adaptation. Recent state-of-the-art (SOTA) methods on SFDA mostly focus on
pseudo-label refinement based self-training, which generally suffers from two
issues: i) the inevitable occurrence of noisy pseudo-labels, which can lead to
memorization early in training, and ii) the refinement process requires
maintaining a memory bank, which creates a significant burden in
resource-constrained scenarios. To address these concerns, we propose C-SFDA, a
curriculum learning
aided self-training framework for SFDA that adapts efficiently and reliably to
changes across domains based on selective pseudo-labeling. Specifically, we
employ a curriculum learning scheme to promote learning from a restricted
amount of pseudo labels selected based on their reliabilities. This simple yet
effective step successfully prevents label noise propagation during different
stages of adaptation and eliminates the need for costly memory-bank based label
refinement. Our extensive experimental evaluations on both image recognition
and semantic segmentation tasks confirm the effectiveness of our method. C-SFDA
is readily applicable to online test-time domain adaptation and also
outperforms previous SOTA methods in this task.
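The selective pseudo-labeling idea described in the abstract can be sketched in a few lines: keep only the most reliable pseudo-labels and grow the selected fraction over training on a curriculum schedule, so label noise does not propagate early in adaptation. This is an illustrative sketch, not the authors' exact C-SFDA algorithm; the max-softmax reliability proxy, the linear schedule, and the names `select_pseudo_labels`, `start_frac`, and `end_frac` are all assumptions made for illustration.

```python
# Illustrative sketch of curriculum-based selective pseudo-labeling.
# NOT the authors' exact C-SFDA procedure: the confidence proxy and the
# linear curriculum schedule are simplifying assumptions.

def select_pseudo_labels(probs, epoch, total_epochs,
                         start_frac=0.2, end_frac=0.8):
    """Select the most reliable pseudo-labels for self-training.

    probs: list of per-sample class-probability lists (softmax outputs).
    The kept fraction grows linearly from start_frac to end_frac, so only
    a small, reliable subset drives the early stages of adaptation.
    """
    confidence = [max(p) for p in probs]             # reliability proxy
    pseudo_labels = [p.index(max(p)) for p in probs]
    frac = start_frac + (end_frac - start_frac) * epoch / max(1, total_epochs - 1)
    k = max(1, int(frac * len(probs)))               # curriculum-sized subset
    selected = sorted(range(len(probs)), key=lambda i: -confidence[i])[:k]
    return selected, [pseudo_labels[i] for i in selected]

# Six unlabeled target samples, three classes: early epochs keep only the
# most confident predictions; later epochs admit progressively more.
probs = [[0.90, 0.05, 0.05], [0.40, 0.35, 0.25], [0.10, 0.80, 0.10],
         [0.34, 0.33, 0.33], [0.05, 0.05, 0.90], [0.50, 0.30, 0.20]]
early_idx, early_labels = select_pseudo_labels(probs, epoch=0, total_epochs=10)
late_idx, late_labels = select_pseudo_labels(probs, epoch=9, total_epochs=10)
```

In this toy run, the first epoch keeps only the single most confident sample, while the final epoch admits 80% of the data, mirroring the paper's idea of restricting early self-training to reliable pseudo-labels without any memory bank.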
Related papers
- Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning [5.599218556731767]
Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to a target domain using only unlabeled target data.
Reliability-based Curriculum Learning (RCL) integrates multiple MLLMs for knowledge exploitation via pseudo-labeling in SFDA.
arXiv Detail & Related papers (2024-05-28T17:18:17Z)
- De-Confusing Pseudo-Labels in Source-Free Domain Adaptation [14.954662088592762]
Source-free domain adaptation aims to adapt a source-trained model to an unlabeled target domain without access to the source data.
We introduce a novel noise-learning approach tailored to address noise distribution in domain adaptation settings.
arXiv Detail & Related papers (2024-01-03T10:07:11Z)
- Robust Source-Free Domain Adaptation for Fundus Image Segmentation [3.585032903685044]
Unsupervised Domain Adaptation (UDA) is a learning technique that transfers knowledge learned from labelled data in the source domain to a target domain with only unlabelled data.
In this study, we propose a two-stage training strategy for robust domain adaptation.
We propose a novel robust pseudo-label and pseudo-boundary (PLPB) method, which effectively utilizes unlabeled target data to generate pseudo labels and pseudo boundaries.
arXiv Detail & Related papers (2023-10-25T14:25:18Z)
- Prior Knowledge Guided Unsupervised Domain Adaptation [82.9977759320565]
We propose a Knowledge-guided Unsupervised Domain Adaptation (KUDA) setting where prior knowledge about the target class distribution is available.
In particular, we consider two specific types of prior knowledge about the class distribution in the target domain: Unary Bound and Binary Relationship.
We propose a rectification module that uses such prior knowledge to refine model generated pseudo labels.
arXiv Detail & Related papers (2022-07-18T18:41:36Z)
- Boosting Cross-Domain Speech Recognition with Self-Supervision [35.01508881708751]
Cross-domain performance of automatic speech recognition (ASR) could be severely hampered due to mismatch between training and testing distributions.
Previous work has shown that self-supervised learning (SSL) or pseudo-labeling (PL) is effective in UDA by exploiting the self-supervisions of unlabeled data.
This work presents a systematic UDA framework to fully utilize the unlabeled data with self-supervision in the pre-training and fine-tuning paradigm.
arXiv Detail & Related papers (2022-06-20T14:02:53Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse conditions.
arXiv Detail & Related papers (2021-06-22T10:21:39Z)
- Adaptive Pseudo-Label Refinement by Negative Ensemble Learning for Source-Free Unsupervised Domain Adaptation [35.728603077621564]
Existing Unsupervised Domain Adaptation (UDA) methods presume that source and target domain data are simultaneously available during training.
A pre-trained source model is always considered to be available, even though it performs poorly on the target domain due to the well-known domain shift problem.
We propose a unified method to tackle adaptive noise filtering and pseudo-label refinement.
arXiv Detail & Related papers (2021-03-29T22:18:34Z)
- Self-Supervised Noisy Label Learning for Source-Free Unsupervised Domain Adaptation [87.60688582088194]
We propose a novel Self-Supervised Noisy Label Learning method.
Our method can easily achieve state-of-the-art results and surpass other methods by a very large margin.
arXiv Detail & Related papers (2021-02-23T10:51:45Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.