Effortless Active Labeling for Long-Term Test-Time Adaptation
- URL: http://arxiv.org/abs/2503.14564v1
- Date: Tue, 18 Mar 2025 07:49:27 GMT
- Title: Effortless Active Labeling for Long-Term Test-Time Adaptation
- Authors: Guowei Wang, Changxing Ding
- Abstract summary: Long-term test-time adaptation is a challenging task due to error accumulation. Recent approaches tackle this issue by actively labeling a small proportion of samples in each batch. In this paper, we investigate how to achieve effortless active labeling so that a maximum of one sample is selected for annotation in each batch.
- Score: 18.02130603595324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-term test-time adaptation (TTA) is a challenging task due to error accumulation. Recent approaches tackle this issue by actively labeling a small proportion of samples in each batch, yet the annotation burden quickly grows as the batch number increases. In this paper, we investigate how to achieve effortless active labeling so that a maximum of one sample is selected for annotation in each batch. First, we annotate the most valuable sample in each batch based on the single-step optimization perspective in the TTA context. In this scenario, the samples that border between the source- and target-domain data distributions are considered the most feasible for the model to learn in one iteration. Then, we introduce an efficient strategy to identify these samples using feature perturbation. Second, we discover that the gradient magnitudes produced by the annotated and unannotated samples have significant variations. Therefore, we propose balancing their impact on model optimization using two dynamic weights. Extensive experiments on the popular ImageNet-C, -R, -K, -A and PACS databases demonstrate that our approach consistently outperforms state-of-the-art methods with significantly lower annotation costs.
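The abstract describes two mechanisms: selecting the single most valuable sample per batch by measuring how much a feature perturbation changes its prediction (boundary samples are most sensitive), and balancing the annotated and unannotated losses with two dynamic weights. The following is a minimal numpy sketch of those two ideas under stated assumptions: a linear classifier, Gaussian feature noise, and an inverse-gradient-magnitude weighting rule. The function names, noise model, and weighting formula are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_sample_by_perturbation(features, classifier_w, noise_scale=0.1, seed=0):
    """Pick the one sample whose predictive distribution changes most
    under a small feature perturbation -- a proxy for lying on the
    border between source- and target-domain distributions."""
    rng = np.random.default_rng(seed)
    probs = softmax(features @ classifier_w)
    noisy = features + noise_scale * rng.standard_normal(features.shape)
    noisy_probs = softmax(noisy @ classifier_w)
    # sensitivity = total variation between clean and perturbed predictions
    sensitivity = np.abs(probs - noisy_probs).sum(axis=1)
    return int(np.argmax(sensitivity))

def weighted_tta_loss(sup_loss, unsup_loss, g_sup, g_unsup, eps=1e-8):
    """Combine the annotated (supervised) and unannotated (unsupervised)
    losses with two dynamic weights. Here each weight is inversely
    proportional to that term's own gradient magnitude, so the larger
    gradients do not dominate the update (an illustrative choice)."""
    w_sup = g_unsup / (g_sup + g_unsup + eps)
    w_unsup = g_sup / (g_sup + g_unsup + eps)
    return w_sup * sup_loss + w_unsup * unsup_loss
```

With equal gradient magnitudes the two weights each reduce to roughly 0.5, and a sample sitting near the decision boundary (logits close to a tie) tends to score the highest sensitivity because softmax outputs change fastest there.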
Related papers
- SPARNet: Continual Test-Time Adaptation via Sample Partitioning Strategy and Anti-Forgetting Regularization [16.5927083825258]
Test-time Adaptation (TTA) aims to improve model performance when the model encounters domain changes after deployment.
Noisy pseudo-labels produced by simple self-training methods can cause error accumulation and catastrophic forgetting.
We propose a new framework named SPARNet which consists of two parts, sample partitioning strategy and anti-forgetting regularization.
arXiv Detail & Related papers (2025-01-01T12:19:17Z)
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample causes the cross-entropy loss's vulnerability to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- ActiveDC: Distribution Calibration for Active Finetuning [36.64444238742072]
We propose a new method called ActiveDC for the active finetuning tasks.
We calibrate the distribution of the selected samples by exploiting implicit category information in the unlabeled pool.
The results indicate that ActiveDC consistently outperforms the baseline performance in all image classification tasks.
arXiv Detail & Related papers (2023-11-13T14:35:18Z)
- Improving Entropy-Based Test-Time Adaptation from a Clustering View [15.157208389691238]
We introduce a new clustering perspective on the entropy-based TTA.
We improve EBTTA at both the assignment and updating steps, introducing robust label assignment, a similarity-preserving constraint, sample selection, and gradient accumulation.
Experimental results demonstrate that our method can achieve consistent improvements on various datasets.
arXiv Detail & Related papers (2023-10-31T10:10:48Z)
- IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models [66.32043210237768]
This paper introduces an influence-driven selective annotation method.
It aims to minimize annotation costs while improving the quality of in-context examples.
Experiments confirm the superiority of the proposed method on various benchmarks.
arXiv Detail & Related papers (2023-10-16T22:53:54Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
- Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.