Model Adaptation: Historical Contrastive Learning for Unsupervised
Domain Adaptation without Source Data
- URL: http://arxiv.org/abs/2110.03374v1
- Date: Thu, 7 Oct 2021 12:13:00 GMT
- Title: Model Adaptation: Historical Contrastive Learning for Unsupervised
Domain Adaptation without Source Data
- Authors: Jiaxing Huang, Dayan Guan, Aoran Xiao, Shijian Lu
- Abstract summary: Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled target domain, but it conventionally requires access to the source data.
We design an innovative historical contrastive learning (HCL) technique that exploits the historical source hypothesis to make up for the absence of source data in unsupervised model adaptation (UMA).
HCL outperforms and complements state-of-the-art methods consistently across a variety of visual tasks.
- Score: 32.77436219094282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation aims to align a labeled source domain and an
unlabeled target domain, but it requires access to the source data, which often
raises concerns about data privacy, data portability and data transmission
efficiency. We study unsupervised model adaptation (UMA), also called
Unsupervised Domain Adaptation without Source Data, an alternative setting that
aims to adapt source-trained models towards target distributions without
accessing source data. To this end, we design an innovative historical
contrastive learning (HCL) technique that exploits the historical source hypothesis
to make up for the absence of source data in UMA. HCL addresses the UMA
challenge from two perspectives. First, it introduces historical contrastive
instance discrimination (HCID) that learns from target samples by contrasting
their embeddings, which are generated by the currently adapted model and the
historical models. With the source-trained and earlier-epoch models as the
historical models, HCID encourages UMA to learn instance-discriminative target
representations while preserving the source hypothesis. Second, it introduces
historical contrastive category discrimination (HCCD) that pseudo-labels target
samples to learn category-discriminative target representations. Instead of
globally thresholding pseudo labels, HCCD re-weights pseudo labels according to
their prediction consistency across the current and historical models.
Extensive experiments show that HCL outperforms and complements
state-of-the-art methods consistently across a variety of visual tasks (e.g.,
segmentation, classification and detection) and setups (e.g., closed-set,
open-set and partial adaptation).
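To make the two components more concrete, here is a minimal PyTorch sketch of HCID and HCCD as described in the abstract. The model interfaces, the in-batch dot-product contrast, and the probability-agreement consistency weight are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of historical contrastive learning (HCL), assuming generic
# feature extractors / classifiers; names like current_model and
# historical_model are hypothetical, not from the paper's code.
import torch
import torch.nn.functional as F

def hcid_loss(current_model, historical_model, images, temperature=0.07):
    """Historical contrastive instance discrimination (HCID).

    Queries come from the currently adapted model; keys come from a frozen
    historical model (e.g. the source-trained or an earlier-epoch model).
    Each sample's own historical embedding is the positive; other samples'
    historical embeddings in the batch act as negatives.
    """
    q = F.normalize(current_model(images), dim=1)          # queries, shape (B, D)
    with torch.no_grad():
        k = F.normalize(historical_model(images), dim=1)   # historical keys, (B, D)
    logits = q @ k.t() / temperature                       # (B, B) similarities
    targets = torch.arange(q.size(0), device=q.device)     # positive = same index
    return F.cross_entropy(logits, targets)

def hccd_loss(current_logits, historical_logits):
    """Historical contrastive category discrimination (HCCD).

    Pseudo-labels come from the current model; instead of a global confidence
    threshold, each pseudo label is re-weighted by how consistent the current
    and historical predictions are for that sample.
    """
    p_cur = F.softmax(current_logits, dim=1)
    p_hist = F.softmax(historical_logits, dim=1)
    pseudo = p_cur.argmax(dim=1)
    # Consistency weight: agreement between current and historical predictions.
    weight = (p_cur * p_hist).sum(dim=1).detach()          # value in (0, 1]
    ce = F.cross_entropy(current_logits, pseudo, reduction="none")
    return (weight * ce).mean()
```

In a training loop the two losses would simply be summed (possibly with a trade-off weight) and back-propagated through the current model only, since the historical models stay frozen.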
Related papers
- Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation [22.474866164542302]
UDA approaches commonly assume that source and target domains share the same label space.
This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting.
We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.
arXiv Detail & Related papers (2024-04-16T13:52:00Z) - Continual Source-Free Unsupervised Domain Adaptation [37.060694803551534]
Existing Source-free Unsupervised Domain Adaptation approaches exhibit catastrophic forgetting.
We propose a Continual SUDA (C-SUDA) framework to cope with the challenge of SUDA in a continual learning setting.
arXiv Detail & Related papers (2023-04-14T20:11:05Z) - A Prototype-Oriented Clustering for Domain Shift with Source Privacy [66.67700676888629]
We introduce Prototype-oriented Clustering with Distillation (PCD) to improve the performance and applicability of existing methods.
PCD first constructs a source clustering model by aligning the distributions of prototypes and data.
It then distills the knowledge to the target model through cluster labels provided by the source model while simultaneously clustering the target data.
arXiv Detail & Related papers (2023-02-08T00:15:35Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where each group of samples is treated with tailored learning objectives.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (a sketch of such an MMD loss appears after this list).
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - Unsupervised Adaptation of Semantic Segmentation Models without Source
Data [14.66682099621276]
We consider the novel problem of unsupervised domain adaptation of source models for semantic segmentation, without access to the source data.
We propose a self-training approach to extract the knowledge from the source model.
Our framework is able to achieve significant performance gains compared to directly applying the source model on the target data.
arXiv Detail & Related papers (2021-12-04T15:13:41Z) - Distill and Fine-tune: Effective Adaptation from a Black-box Source
Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous, related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
arXiv Detail & Related papers (2021-04-04T05:29:05Z) - Source Data-absent Unsupervised Domain Adaptation through Hypothesis
Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper tackles a realistic setting where only a classification model trained on the source data is available, rather than access to the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z) - Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes coinciding with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and examines how such a model can be effectively utilized, without the source data, to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
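As referenced in the Divide and Contrast entry above, several of these works align feature distributions with a Maximum Mean Discrepancy (MMD) loss. Below is a minimal, hedged sketch of a Gaussian-kernel MMD between source-like and target-specific feature batches; the kernel choice, single bandwidth, and the omission of the memory bank machinery are simplifying assumptions rather than any paper's exact formulation.

```python
# Hedged sketch of a Gaussian-kernel MMD loss between two feature batches;
# function names and the fixed bandwidth are illustrative assumptions.
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel values between rows of x and rows of y.
    sq_dist = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-sq_dist / (2 * sigma ** 2))

def mmd_loss(source_like, target_specific, sigma=1.0):
    """Biased estimate of squared MMD between the two feature sets."""
    k_ss = gaussian_kernel(source_like, source_like, sigma).mean()
    k_tt = gaussian_kernel(target_specific, target_specific, sigma).mean()
    k_st = gaussian_kernel(source_like, target_specific, sigma).mean()
    return k_ss + k_tt - 2 * k_st
```

Minimizing this quantity pulls the two feature distributions together; in a memory bank variant, one or both batches would be drawn from stored historical features instead of the current mini-batch.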