Contrastive Test-Time Adaptation
- URL: http://arxiv.org/abs/2204.10377v1
- Date: Thu, 21 Apr 2022 19:17:22 GMT
- Title: Contrastive Test-Time Adaptation
- Authors: Dian Chen, Dequan Wang, Trevor Darrell, Sayna Ebrahimi
- Abstract summary: We propose a novel way to leverage self-supervised contrastive learning to facilitate target feature learning.
We produce pseudo labels online and refine them via soft voting among their nearest neighbors in the target feature space.
Our method, AdaContrast, achieves state-of-the-art performance on major benchmarks.
- Score: 83.73506803142693
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Test-time adaptation is a special setting of unsupervised domain adaptation
in which a model trained on the source domain must adapt to the target domain
without accessing the source data. We propose a novel way to leverage
self-supervised contrastive learning to facilitate target feature learning,
along with an online pseudo labeling scheme with refinement that significantly
denoises pseudo labels. The contrastive learning task is applied jointly with
pseudo labeling, contrasting positive and negative pairs constructed similarly
to MoCo but with a source-initialized encoder, and excluding same-class negative
pairs as indicated by pseudo labels. Meanwhile, we produce pseudo labels online
and refine them via soft voting among their nearest neighbors in the target
feature space, enabled by maintaining a memory queue. Our method, AdaContrast,
achieves state-of-the-art performance on major benchmarks while having several
desirable properties compared to existing works, including memory efficiency,
insensitivity to hyper-parameters, and better model calibration. Project page:
sites.google.com/view/adacontrast.
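The two mechanisms described in the abstract, soft-voting pseudo-label refinement over nearest neighbors in a memory queue and exclusion of same-class negatives from the contrastive loss, can be sketched in a few lines. The following is a minimal NumPy illustration; the function names, array shapes, and the choice of k are our own assumptions for illustration, not the paper's actual code:

```python
# Hypothetical sketch of AdaContrast-style online pseudo-label refinement:
# keep a memory queue of (feature, softmax probability) pairs from past
# batches, then refine each sample's pseudo label by soft voting among
# its nearest neighbors in the target feature space.
import numpy as np

def refine_pseudo_labels(feats, queue_feats, queue_probs, k=5):
    """Soft-vote pseudo labels from the k nearest queue entries.

    feats:       (B, D) L2-normalized target features for the current batch
    queue_feats: (Q, D) L2-normalized features stored in the memory queue
    queue_probs: (Q, C) softmax outputs stored alongside them
    """
    # Cosine similarity between batch features and the queue.
    sim = feats @ queue_feats.T                  # (B, Q)
    # Indices of the k most similar queue entries per sample.
    nn_idx = np.argsort(-sim, axis=1)[:, :k]     # (B, k)
    # Soft voting: average the neighbors' probability vectors.
    voted = queue_probs[nn_idx].mean(axis=1)     # (B, C)
    # Refined hard pseudo labels.
    return voted.argmax(axis=1)

def same_class_negative_mask(pseudo_labels, queue_labels):
    """Mask out negatives sharing a pseudo label with the anchor, so the
    contrastive loss does not push apparent same-class pairs apart."""
    # True where the queue entry is a valid negative (different class).
    return pseudo_labels[:, None] != queue_labels[None, :]   # (B, Q)

# Toy usage: two batch samples, four queue entries, two classes.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
queue_feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
queue_probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
labels = refine_pseudo_labels(feats, queue_feats, queue_probs, k=2)
```

In the real method the queue is updated online as new target batches arrive, so the voting pool reflects the adapting feature space rather than a fixed one.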
Related papers
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike in standard semi-supervised learning, one cannot simply select the most probable label as the pseudo-label in SSMLL, because each instance may contain multiple semantics.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z)
- Towards Adaptive Pseudo-label Learning for Semi-Supervised Temporal Action Localization [10.233225586034665]
Existing methods often filter pseudo labels based on strict conditions, leading to suboptimal pseudo-label ranking and selection.
We propose a novel Adaptive Pseudo-label Learning framework to facilitate better pseudo-label selection.
Our method achieves state-of-the-art performance under various semi-supervised settings.
arXiv Detail & Related papers (2024-07-10T14:00:19Z)
- Efficient Test-Time Adaptation of Vision-Language Models [58.3646257833533]
Test-time adaptation with pre-trained vision-language models has attracted increasing attention for tackling distribution shifts at test time.
We design TDA, a training-free dynamic adapter that enables effective and efficient test-time adaptation with vision-language models.
arXiv Detail & Related papers (2024-03-27T06:37:51Z)
- Unsupervised Domain Adaptation for Semantic Segmentation with Pseudo Label Self-Refinement [9.69089112870202]
We propose an auxiliary pseudo-label refinement network (PRN) for online refining of the pseudo labels and also localizing the pixels whose predicted labels are likely to be noisy.
We evaluate our approach on benchmark datasets with three different domain shifts, and our approach consistently performs significantly better than the previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-25T20:31:07Z)
- Semi-Supervised Learning of Semantic Correspondence with Pseudo-Labels [26.542718087103665]
SemiMatch is a semi-supervised solution for establishing dense correspondences across semantically similar images.
Our framework generates pseudo-labels from the model's own predictions between the source and a weakly-augmented target, and then uses these pseudo-labels to retrain the model between the source and a strongly-augmented target.
In experiments, SemiMatch achieves state-of-the-art performance on various benchmarks, especially on PF-Willow by a large margin.
arXiv Detail & Related papers (2022-03-30T03:52:50Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study what is arguably the least restrictive setting of domain adaptation from the standpoint of practical deployment: only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose a unified self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Adaptive Pseudo-Label Refinement by Negative Ensemble Learning for Source-Free Unsupervised Domain Adaptation [35.728603077621564]
Existing Unsupervised Domain Adaptation (UDA) methods presume that source and target domain data are simultaneously available during training.
A pre-trained source model is always assumed to be available, even though it performs poorly on the target domain due to the well-known domain shift problem.
We propose a unified method to tackle adaptive noise filtering and pseudo-label refinement.
arXiv Detail & Related papers (2021-03-29T22:18:34Z)
- Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation [24.573242887937834]
A competitive approach in domain adaptive segmentation trains the network with the pseudo labels on the target domain.
We take one step further and exploit the feature distances to prototypes, which provide richer information than the prototypes alone.
We find that distilling the already learned knowledge to a self-supervised pretrained model further boosts the performance.
arXiv Detail & Related papers (2021-01-26T18:12:54Z)
- Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification [51.98150752331922]
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to the lack of labels for the target domain data.
We propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels in the off-line clustering phase and features in the on-line training phase.
Our method outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-26T07:35:35Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA and outperforms all comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.