CNG-SFDA: Clean-and-Noisy Region Guided Online-Offline Source-Free Domain Adaptation
- URL: http://arxiv.org/abs/2401.14587v3
- Date: Tue, 15 Oct 2024 00:26:44 GMT
- Title: CNG-SFDA: Clean-and-Noisy Region Guided Online-Offline Source-Free Domain Adaptation
- Authors: Hyeonwoo Cho, Chanmin Park, Dong-Hee Kim, Jinyoung Kim, Won Hwa Kim
- Abstract summary: Source-Free Domain Adaptation (SFDA) aims to adapt a model trained on the source domain to the target domain.
Handling false labels in the target domain is crucial because they negatively impact model performance.
We conduct extensive experiments on multiple datasets in online/offline SFDA settings, whose results demonstrate that our method, CNG-SFDA, achieves state-of-the-art performance in most cases.
- Score: 6.222371087167951
- License:
- Abstract: Domain shift occurs when training (source) and test (target) data diverge in their distribution. Source-Free Domain Adaptation (SFDA) addresses this domain shift problem, aiming to adapt a model trained on the source domain to the target domain in a scenario where only a well-trained source model and unlabeled target data are available. In this scenario, handling false labels in the target domain is crucial because they negatively impact the model performance. To deal with this problem, we propose to update cluster prototypes (i.e., the centroid of each sample cluster) and their structure in the target domain, formulated by the source model, in an online manner. In the feature space, samples in different regions have different pseudo-label distribution characteristics affected by the cluster prototypes, and we adopt distinct training strategies for these samples by defining clean and noisy regions: we selectively train on target samples with clean pseudo-labels in the clean region, whereas we introduce mix-up inputs representing intermediate features between the clean and noisy regions to increase the compactness of the clusters. We conducted extensive experiments on multiple datasets in online/offline SFDA settings, whose results demonstrate that our method, CNG-SFDA, achieves state-of-the-art performance in most cases. Code is available at https://github.com/hyeonwoocho7/CNG-SFDA.
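The prototype-update and clean/noisy-region strategy described in the abstract can be illustrated with a minimal PyTorch-style sketch. All names here (the `backbone`/`head` attributes, the momentum value, the similarity threshold that defines the clean region, and the mix-up weight `lam`) are assumptions for illustration, not the paper's exact criteria or losses; see the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the clean/noisy-region idea from the abstract.
# model.backbone, model.head, momentum, clean_threshold and lam are
# illustrative assumptions, not the paper's settings.

@torch.no_grad()
def update_prototypes(prototypes, feats, pseudo_labels, momentum=0.9):
    """Online EMA update of class prototypes (cluster centroids) with target features."""
    for c in pseudo_labels.unique():
        mask = pseudo_labels == c
        prototypes[c] = momentum * prototypes[c] + (1 - momentum) * feats[mask].mean(dim=0)
    return F.normalize(prototypes, dim=1)

def split_clean_noisy(feats, prototypes, clean_threshold=0.8):
    """Prototype-based pseudo-labels; high nearest-prototype similarity -> clean region."""
    sims = F.normalize(feats, dim=1) @ prototypes.t()   # cosine similarity to each prototype
    conf, pseudo_labels = sims.max(dim=1)
    return pseudo_labels, conf >= clean_threshold

def adaptation_loss(model, x, prototypes, lam=0.5):
    feats = model.backbone(x)
    logits = model.head(feats)
    pseudo_labels, clean = split_clean_noisy(feats.detach(), prototypes)

    zero = logits.sum() * 0.0
    # 1) Train only on clean-region samples with their (trusted) pseudo-labels.
    loss_clean = F.cross_entropy(logits[clean], pseudo_labels[clean]) if clean.any() else zero

    # 2) Mix clean- and noisy-region inputs to create intermediate samples that
    #    pull noisy-region features toward the clusters (compactness).
    loss_mix = zero
    if clean.any() and (~clean).any():
        n = int(min(clean.sum(), (~clean).sum()))
        x_c, y_c = x[clean][:n], pseudo_labels[clean][:n]
        x_n, y_n = x[~clean][:n], pseudo_labels[~clean][:n]
        mixed_logits = model.head(model.backbone(lam * x_c + (1 - lam) * x_n))
        loss_mix = lam * F.cross_entropy(mixed_logits, y_c) + (1 - lam) * F.cross_entropy(mixed_logits, y_n)

    return loss_clean + loss_mix, feats.detach(), pseudo_labels
```

In a training loop one would call `prototypes = update_prototypes(prototypes, feats, pseudo_labels)` on the detached features after each batch, so the pseudo-labels at the next step reflect the current target cluster structure.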
Related papers
- High-order Neighborhoods Know More: HyperGraph Learning Meets Source-free Unsupervised Domain Adaptation [34.08681468394247]
Source-free Unsupervised Domain Adaptation aims to classify target samples by only accessing a pre-trained source model and unlabelled target samples.
Existing methods normally exploit the pair-wise relation among target samples and attempt to discover their correlations by clustering these samples based on semantic features.
We propose a new SFDA method that exploits the high-order neighborhood relation and explicitly takes the domain shift effect into account.
arXiv Detail & Related papers (2024-05-11T05:07:43Z)
- SIDE: Self-supervised Intermediate Domain Exploration for Source-free Domain Adaptation [36.470026809824674]
Domain adaptation aims to alleviate the domain shift when transferring the knowledge learned from the source domain to the target domain.
Due to privacy issues, source-free domain adaptation (SFDA) has recently been in high demand, yet it remains challenging.
This paper proposes self-supervised intermediate domain exploration (SIDE) that effectively bridges the domain gap with an intermediate domain.
arXiv Detail & Related papers (2023-10-13T07:50:37Z)
- Upcycling Models under Domain and Category Shift [95.22147885947732]
We introduce an innovative global and local clustering learning technique (GLC).
We design a novel, adaptive one-vs-all global clustering algorithm to achieve the distinction across different target classes.
Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark.
arXiv Detail & Related papers (2023-03-13T13:44:04Z)
- Generative Model Based Noise Robust Training for Unsupervised Domain Adaptation [108.11783463263328]
This paper proposes a Generative model-based Noise-Robust Training method (GeNRT).
It eliminates domain shift while mitigating label noise.
Experiments on Office-Home, PACS, and Digit-Five show that our GeNRT achieves comparable performance to state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T06:43:55Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the strengths of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (a minimal sketch of such a loss follows this entry).
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
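The Divide and Contrast entry above mentions aligning source-like and target-specific samples with a memory bank-based MMD loss. Below is a generic, minimal sketch of such a term, assuming a FIFO feature bank and a multi-kernel Gaussian MMD; the bank size, kernel bandwidths, and the way "source-like" features are selected are assumptions for illustration, not DaC's exact formulation.

```python
import torch

class FeatureBank:
    """Fixed-size FIFO memory bank of detached 'source-like' target features.
    Create it on the same device as the features it will store."""
    def __init__(self, dim, size=4096, device="cpu"):
        self.bank = torch.zeros(size, dim, device=device)
        self.size, self.ptr, self.full = size, 0, False

    @torch.no_grad()
    def enqueue(self, feats):
        idx = (torch.arange(feats.size(0), device=feats.device) + self.ptr) % self.size
        self.bank[idx] = feats.detach()
        self.full = self.full or (self.ptr + feats.size(0)) >= self.size
        self.ptr = (self.ptr + feats.size(0)) % self.size

    def sample(self, n):
        limit = self.size if self.full else max(self.ptr, 1)
        return self.bank[torch.randint(0, limit, (n,), device=self.bank.device)]

def gaussian_mmd(x, y, sigmas=(1.0, 2.0, 4.0)):
    """Multi-kernel (RBF) squared MMD between two batches of features."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return sum(torch.exp(-d2 / (2.0 * s * s)) for s in sigmas) / len(sigmas)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# Hypothetical usage: pull target-specific features toward the source-like bank.
# bank.enqueue(source_like_feats)
# loss_mmd = gaussian_mmd(target_specific_feats, bank.sample(256))
```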
- Polycentric Clustering and Structural Regularization for Source-free Unsupervised Domain Adaptation [20.952542421577487]
Source-Free Domain Adaptation (SFDA) aims to solve the domain adaptation problem by transferring the knowledge learned from a pre-trained source model to an unseen target domain.
Most existing methods assign pseudo-labels to the target data by generating feature prototypes.
In this paper, a framework named PCSR is proposed to tackle SFDA via an intra-class Polycentric Clustering and Structural Regularization strategy.
arXiv Detail & Related papers (2022-10-14T02:20:48Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Casting a BAIT for Offline and Online Source-free Domain Adaptation [51.161476418834766]
We address the source-free domain adaptation (SFDA) problem, where only the source model is available during adaptation to the target domain.
Inspired by diverse-classifier-based domain adaptation methods, in this paper we introduce a second classifier.
When adapting to the target domain, the additional classifier from the source is expected to find misclassified features (see the sketch after this entry).
Our method surpasses other SFDA methods by a large margin in the online source-free domain adaptation setting.
arXiv Detail & Related papers (2020-10-23T14:18:42Z)
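The BAIT entry above describes adding a second classifier whose disagreement with the source classifier exposes misclassified target features. The sketch below illustrates that general two-classifier idea with generic consistency and entropy terms; the disagreement measure, threshold, and losses are stand-ins for illustration, not BAIT's exact objective.

```python
import torch
import torch.nn.functional as F

# Generic two-classifier sketch: shared backbone, source head plus an extra head.
# Samples where the heads disagree are flagged as potentially misclassified.
def two_classifier_step(backbone, head_src, head_extra, x, disagree_threshold=0.5):
    feats = backbone(x)
    p1 = F.softmax(head_src(feats), dim=1)      # predictions of the kept source head
    p2 = F.softmax(head_extra(feats), dim=1)    # predictions of the additional head

    # Disagreement score: total variation between the two prediction vectors.
    disagreement = 0.5 * (p1 - p2).abs().sum(dim=1)
    suspect = disagreement > disagree_threshold  # likely misclassified target samples

    zero = p2.sum() * 0.0
    # Pull the two heads toward agreement on suspect samples ...
    loss_consistency = (p1[suspect] - p2[suspect]).abs().sum(dim=1).mean() if suspect.any() else zero
    # ... and sharpen predictions on the remaining (agreed) samples.
    agreed = ~suspect
    loss_entropy = -(p2[agreed] * torch.log(p2[agreed] + 1e-6)).sum(dim=1).mean() if agreed.any() else zero

    return loss_consistency + loss_entropy, suspect
```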
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.