Guiding Pseudo-labels with Uncertainty Estimation for Test-Time
Adaptation
- URL: http://arxiv.org/abs/2303.03770v1
- Date: Tue, 7 Mar 2023 10:04:55 GMT
- Title: Guiding Pseudo-labels with Uncertainty Estimation for Test-Time
Adaptation
- Authors: Mattia Litrico, Alessio Del Bue, Pietro Morerio
- Abstract summary: Test-Time Adaptation (TTA) is a specific case of Unsupervised Domain Adaptation (UDA) where a model is adapted to a target domain without access to source data.
We propose a novel approach for the TTA setting based on a loss reweighting strategy that brings robustness against the noise that inevitably affects the pseudo-labels.
- Score: 27.233704767025174
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard Unsupervised Domain Adaptation (UDA) methods assume the availability
of both source and target data during the adaptation. In this work, we
investigate the Test-Time Adaptation (TTA), a specific case of UDA where a
model is adapted to a target domain without access to source data. We propose a
novel approach for the TTA setting based on a loss reweighting strategy that
brings robustness against the noise that inevitably affects the pseudo-labels.
The classification loss is reweighted based on the reliability of the
pseudo-labels, which is measured by estimating their uncertainty. Guided by this
reweighting strategy, the pseudo-labels are progressively refined by
aggregating knowledge from neighbouring samples. Furthermore, a self-supervised
contrastive framework is leveraged as a target space regulariser to enhance
such knowledge aggregation. A novel negative-pair exclusion strategy is
proposed to identify and exclude negative pairs made of samples sharing the
same class, even in the presence of some noise in the pseudo-labels. Our method
outperforms previous methods on three major benchmarks by a large margin. We
set the new TTA state-of-the-art on VisDA-C and DomainNet with a performance
gain of +1.8% on both benchmarks, and on PACS with +12.3% in the single-source
setting and +6.6% in multi-target adaptation. Additional analyses demonstrate
that the proposed approach is robust to noise, which results in significantly
more accurate pseudo-labels compared to state-of-the-art approaches.
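To make the recipe concrete, below is a minimal PyTorch-style sketch of the three ingredients described in the abstract: pseudo-label refinement by nearest-neighbour aggregation, a reliability-weighted classification loss, and a mask that excludes contrastive negative pairs sharing a pseudo-label. The function names and the use of the maximum refined probability as the reliability weight are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only -- not the authors' released code.
import torch
import torch.nn.functional as F


def refine_pseudo_labels(feats, probs, k=5):
    """Refine soft pseudo-labels by averaging each sample's predicted class
    probabilities with those of its k nearest neighbours (cosine similarity)."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t()                    # (N, N) pairwise similarities
    _, nn_idx = sim.topk(k + 1, dim=1)         # neighbours incl. the sample itself
    return probs[nn_idx].mean(dim=1)           # (N, C) aggregated soft labels


def reweighted_ce_loss(logits, refined_probs):
    """Cross-entropy on hard refined pseudo-labels, weighted per sample by a
    reliability score (here: the max refined probability, a stand-in for the
    paper's uncertainty estimate)."""
    pseudo = refined_probs.argmax(dim=1)
    weight = refined_probs.max(dim=1).values   # reliability proxy in [0, 1]
    ce = F.cross_entropy(logits, pseudo, reduction="none")
    return (weight * ce).mean()


def negative_pair_mask(pseudo):
    """Keep as contrastive negatives only the pairs whose pseudo-labels differ,
    so that samples likely belonging to the same class are never pushed apart."""
    same_class = pseudo.unsqueeze(0) == pseudo.unsqueeze(1)   # (N, N)
    return ~same_class


# Typical use on one unlabelled target batch, assuming the model returns
# (features, logits):
#   feats, logits = model(x_target)
#   refined = refine_pseudo_labels(feats.detach(), logits.softmax(dim=1).detach())
#   loss = reweighted_ce_loss(logits, refined)
```

In the paper, the reliability weight is derived from an uncertainty estimate rather than raw confidence, and the refinement step draws neighbours from a memory bank of target features; the sketch keeps everything within a single batch for brevity.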
Related papers
- Self Adaptive Threshold Pseudo-labeling and Unreliable Sample Contrastive Loss for Semi-supervised Image Classification [6.920336485308536]
Pseudo-labeling-based semi-supervised approaches suffer from two problems in image classification.
We develop a self-adaptive threshold pseudo-labeling strategy in which the threshold for each class can be dynamically adjusted to increase the number of reliable samples (a rough sketch of this idea follows below).
In order to effectively utilise unlabeled data whose confidence falls below the thresholds, we propose an unreliable-sample contrastive loss.
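A minimal sketch of the per-class adaptive thresholding idea (the EMA update rule and class-wise mean confidence used here are assumptions for illustration; the paper's precise adjustment rule may differ):

```python
import torch


class PerClassThreshold:
    """Per-class confidence thresholds adjusted from the model's own
    predictions, here via an exponential moving average of mean confidence."""

    def __init__(self, num_classes, base=0.9, momentum=0.99):
        self.thresholds = torch.full((num_classes,), base)
        self.momentum = momentum

    def update(self, probs):
        """probs: (N, C) softmax outputs on unlabeled samples."""
        conf, pred = probs.max(dim=1)
        for c in range(self.thresholds.numel()):
            mask = pred == c
            if mask.any():
                self.thresholds[c] = (self.momentum * self.thresholds[c]
                                      + (1 - self.momentum) * conf[mask].mean())

    def reliable_mask(self, probs):
        """Boolean mask of samples whose confidence clears their class threshold;
        the remaining samples would feed the unreliable-sample contrastive loss."""
        conf, pred = probs.max(dim=1)
        return conf >= self.thresholds[pred]
```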
arXiv Detail & Related papers (2024-07-04T03:04:56Z)
- Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation [22.474866164542302]
UDA approaches commonly assume that source and target domains share the same label space.
This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting.
We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.
arXiv Detail & Related papers (2024-04-16T13:52:00Z)
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in prohibitive optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss on estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- Exploiting Low-confidence Pseudo-labels for Source-free Object Detection [54.98300313452037]
Source-free object detection (SFOD) aims to adapt a source-trained detector to an unlabeled target domain without access to the labeled source data.
Current SFOD methods utilize a threshold-based pseudo-label approach in the adaptation phase.
We propose a new approach to take full advantage of pseudo-labels by introducing high and low confidence thresholds.
arXiv Detail & Related papers (2023-10-19T12:59:55Z)
- Delving into Probabilistic Uncertainty for Unsupervised Domain Adaptive Person Re-Identification [54.174146346387204]
We propose an approach named probabilistic uncertainty guided progressive label refinery (P$^2$LR) for domain adaptive person re-identification.
A quantitative criterion is established to measure the uncertainty of pseudo labels and facilitate the network training.
Our method outperforms the baseline by 6.5% mAP on the Duke2Market task, while surpassing the state-of-the-art method by 2.5% mAP on the Market2MSMT task.
arXiv Detail & Related papers (2021-12-28T07:40:12Z)
- Unsupervised Robust Domain Adaptation without Source Data [75.85602424699447]
We study the problem of robust domain adaptation in the context of unavailable target labels and source data.
We show a consistent performance improvement of over 10% in accuracy against the tested baselines on four benchmark datasets.
arXiv Detail & Related papers (2021-03-26T16:42:28Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.