Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning
- URL: http://arxiv.org/abs/2211.06612v1
- Date: Sat, 12 Nov 2022 09:21:49 GMT
- Title: Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning
- Authors: Ziyi Zhang, Weikai Chen, Hui Cheng, Zhen Li, Siyuan Li, Liang Lin,
Guanbin Li
- Abstract summary: Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate a practical domain adaptation task, called
source-free unsupervised domain adaptation (SFUDA), where the source-pretrained
model is adapted to the target
domain without access to the source data. Existing techniques mainly leverage
self-supervised pseudo labeling to achieve class-wise global alignment [1] or
rely on local structure extraction that encourages feature consistency among
neighborhoods [2]. While impressive progress has been made, both lines of
methods have their own drawbacks - the "global" approach is sensitive to noisy
labels while the "local" counterpart suffers from source bias. In this paper,
we present Divide and Contrast (DaC), a new paradigm for SFUDA that strives to
connect the good ends of both worlds while bypassing their limitations. Based
on the prediction confidence of the source model, DaC divides the target data
into source-like and target-specific samples, where either group of samples is
treated with tailored goals under an adaptive contrastive learning framework.
Specifically, the source-like samples are utilized for learning global class
clustering thanks to their relatively clean labels. The noisier
target-specific data are harnessed at the instance level for learning the
intrinsic local structures. We further align the source-like domain with the
target-specific samples using a memory bank-based Maximum Mean Discrepancy
(MMD) loss to reduce the distribution mismatch. Extensive experiments on VisDA,
Office-Home, and the more challenging DomainNet have verified the superior
performance of DaC over current state-of-the-art approaches. The code is
available at https://github.com/ZyeZhang/DaC.git.
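As a rough illustration only (not the authors' implementation; see the linked repository for that), the two core ingredients described above, the confidence-based split of target data and an MMD term between the resulting groups, can be sketched as follows. The threshold `tau`, the Gaussian kernel bandwidth, and the feature shapes are illustrative assumptions:

```python
import numpy as np

def divide_by_confidence(probs, tau=0.95):
    """Split target samples into source-like (confident) and
    target-specific (uncertain) index sets by max softmax probability.
    tau is an illustrative choice, not the paper's value."""
    conf = probs.max(axis=1)
    source_like = np.where(conf >= tau)[0]
    target_specific = np.where(conf < tau)[0]
    return source_like, target_specific

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix between rows of x and rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy
    between two feature sets."""
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

# Toy usage: fake softmax outputs and features stand in for the
# source model's predictions and a memory bank of target features.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.full(5, 0.3), size=8)
feats = rng.normal(size=(8, 16))
src_idx, tgt_idx = divide_by_confidence(probs, tau=0.8)
if len(src_idx) and len(tgt_idx):
    alignment_loss = mmd2(feats[src_idx], feats[tgt_idx])
```

In the paper's framework this MMD term is computed against memory-bank features and combined with the contrastive objectives; the sketch above only shows the statistic itself.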
Related papers
- Chaos to Order: A Label Propagation Perspective on Source-Free Domain
Adaptation [8.27771856472078]
We present Chaos to Order (CtO), a novel approach for source-free domain adaptation (SFDA).
CtO strives to constrain semantic credibility and propagate label information among target subpopulations.
Empirical evidence demonstrates that CtO outperforms the state of the art on three public benchmarks.
arXiv Detail & Related papers (2023-01-20T03:39:35Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.