Few Clicks Suffice: Active Test-Time Adaptation for Semantic
Segmentation
- URL: http://arxiv.org/abs/2312.01835v1
- Date: Mon, 4 Dec 2023 12:16:02 GMT
- Title: Few Clicks Suffice: Active Test-Time Adaptation for Semantic
Segmentation
- Authors: Longhui Yuan and Shuang Li and Zhuo He and Binhui Xie
- Abstract summary: Test-time adaptation (TTA) adapts pre-trained models during inference using unlabeled test data.
There is still a significant performance gap between the TTA approaches and their supervised counterparts.
We propose ATASeg framework, which consists of two parts, i.e., model adapter and label annotator.
- Score: 14.112999441288615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Test-time adaptation (TTA) adapts the pre-trained models during inference
using unlabeled test data and has received a lot of research attention due to
its potential practical value. Unfortunately, without any label supervision,
existing TTA methods rely heavily on heuristic or empirical studies. Choices of
where to update the model are often suboptimal or incur additional computational
cost. Meanwhile, there is still a significant performance gap
between the TTA approaches and their supervised counterparts. Motivated by
active learning, in this work, we propose the active test-time adaptation for
semantic segmentation setup. Specifically, we introduce the human-in-the-loop
pattern during the testing phase, which queries very few labels to facilitate
predictions and model updates in an online manner. To do so, we propose a
simple but effective ATASeg framework, which consists of two parts, i.e., model
adapter and label annotator. Extensive experiments demonstrate that ATASeg
bridges the performance gap between TTA methods and their supervised
counterparts with only extremely few annotations; even one click for labeling
surpasses known SOTA TTA methods by 2.6% average mIoU on the ACDC benchmark.
Empirical results imply that progress in either the model adapter or the label
annotator will bring improvements to the ATASeg framework, giving it
substantial research and practical potential.
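The human-in-the-loop pattern described above alternates prediction, sparse label queries, and online updates. Below is a minimal sketch of one such step; the entropy-based pixel selection and the oracle annotator are hypothetical stand-ins for illustration, not the authors' ATASeg implementation:

```python
import numpy as np

def entropy(probs):
    """Per-pixel predictive entropy, probs has shape (H, W, C)."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def active_tta_step(probs, oracle, budget=1):
    """One online step: predict, then query the `budget` most uncertain
    pixels from a (hypothetical) oracle annotator.  The returned clicks
    would feed the model-adapter update."""
    h, w, _ = probs.shape
    unc = entropy(probs).ravel()
    picks = np.argsort(unc)[::-1][:budget]          # most uncertain first
    coords = [(int(p // w), int(p % w)) for p in picks]
    labels = [oracle(r, c) for r, c in coords]      # a few clicks per image
    return coords, labels

# toy usage: a 2x2 image with 3 classes; one pixel is maximally uncertain
probs = np.zeros((2, 2, 3))
probs[..., 0] = 0.9; probs[..., 1] = probs[..., 2] = 0.05
probs[1, 1] = [1/3, 1/3, 1/3]                       # highest entropy
coords, labels = active_tta_step(probs, oracle=lambda r, c: 2, budget=1)
```

With a budget of one click, the query lands on the uniform-probability pixel, matching the paper's premise that very few labels can target the most informative regions.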
Related papers
- BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping [64.8477128397529]
We propose a training-required and training-free test-time adaptation framework.
We maintain a light-weight key-value memory for feature retrieval from instance-agnostic historical samples and instance-aware boosting samples.
We theoretically justify the rationality behind our method and empirically verify its effectiveness on both the out-of-distribution and the cross-domain datasets.
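The key-value memory described above can be sketched as a simple retrieval cache, where keys are normalized features and values are class probabilities. This is a generic memory-retrieval illustration, not BoostAdapter's released code:

```python
import numpy as np

class FeatureMemory:
    """Light-weight key-value cache: keys are L2-normalized features,
    values are class-probability vectors."""
    def __init__(self):
        self.keys, self.values = [], []

    def add(self, feat, prob):
        self.keys.append(feat / np.linalg.norm(feat))
        self.values.append(prob)

    def retrieve(self, feat, k=3):
        """Average the values of the k most cosine-similar stored keys."""
        q = feat / np.linalg.norm(feat)
        sims = np.array([key @ q for key in self.keys])
        top = np.argsort(sims)[::-1][:k]
        return np.mean([self.values[i] for i in top], axis=0)

mem = FeatureMemory()
mem.add(np.array([1.0, 0.0]), np.array([0.9, 0.1]))  # historical sample
mem.add(np.array([0.0, 1.0]), np.array([0.2, 0.8]))  # boosting sample
pred = mem.retrieve(np.array([1.0, 0.1]), k=1)
```

A query feature near a stored key inherits that key's prediction, which is the intuition behind retrieving from both historical and boosting samples.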
arXiv Detail & Related papers (2024-10-20T15:58:43Z)
- Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [51.84691955495693]
Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings.
We propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting.
arXiv Detail & Related papers (2024-04-07T22:31:34Z)
- Improving Entropy-Based Test-Time Adaptation from a Clustering View [15.157208389691238]
We introduce a new clustering perspective on the entropy-based TTA.
We propose to improve EBTTA at both the assignment step and the updating step, introducing robust label assignment, a similarity-preserving constraint, sample selection, and gradient accumulation.
Experimental results demonstrate that our method can achieve consistent improvements on various datasets.
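Entropy-based TTA minimizes the entropy of the model's own predictions, and gradient accumulation applies that update only after summing gradients over several test samples. A toy sketch on a binary linear model follows; the model, learning rate, and accumulation size are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entropy_grad(w, x):
    """Gradient w.r.t. w of the binary prediction entropy
    H(p) = -p*log(p) - (1-p)*log(1-p), with p = sigmoid(w @ x)."""
    p = sigmoid(w @ x)
    return np.log((1 - p) / p) * p * (1 - p) * x

def tta_update(w, batch, lr=0.5, accum=4):
    """Entropy-minimization with gradient accumulation: sum gradients
    over `accum` test samples, then apply a single averaged step."""
    g = np.zeros_like(w)
    for i, x in enumerate(batch, 1):
        g += entropy_grad(w, x)
        if i % accum == 0:            # apply the accumulated step
            w = w - lr * g / accum
            g = np.zeros_like(w)
    return w

rng = np.random.default_rng(0)
w = np.array([0.1, -0.1])
batch = [rng.normal(size=2) for _ in range(8)]
w_new = tta_update(w, batch)
```

Accumulating over several samples smooths noisy single-sample entropy gradients, one motivation for this step in the clustering view.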
arXiv Detail & Related papers (2023-10-31T10:10:48Z)
- From Question to Exploration: Test-Time Adaptation in Semantic Segmentation? [21.27237423511349]
Test-time adaptation (TTA) aims to adapt a model, initially trained on training data, to test data with potential distribution shifts.
We investigate the applicability of existing classic TTA strategies in semantic segmentation.
arXiv Detail & Related papers (2023-10-09T01:59:49Z)
- Test-Time Adaptation with Perturbation Consistency Learning [32.58879780726279]
We propose a simple test-time adaptation method to promote the model to make stable predictions for samples with distribution shifts.
Our method can achieve higher or comparable performance with less inference time over strong PLM backbones.
arXiv Detail & Related papers (2023-04-25T12:29:22Z)
- Improved Test-Time Adaptation for Domain Generalization [48.239665441875374]
Test-time training (TTT) adapts the learned model with test data.
This work addresses two main factors: selecting an appropriate auxiliary TTT task for updating and identifying reliable parameters to update during the test phase.
We introduce additional adaptive parameters for the trained model, and we suggest only updating the adaptive parameters during the test phase.
arXiv Detail & Related papers (2023-04-10T10:12:38Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Towards Stable Test-Time Adaptation in Dynamic Wild World [60.98073673220025]
Test-time adaptation (TTA) has shown to be effective at tackling distribution shifts between training and testing data by adapting a given model on test samples.
Online model updating in TTA can be unstable, which is often a key obstacle preventing existing TTA methods from real-world deployment.
arXiv Detail & Related papers (2023-02-24T02:03:41Z)
- Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory [58.72445309519892]
We present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams.
Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates i.i.d. data stream from non-i.i.d. stream in a class-balanced manner.
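The class-balancing idea in (b) can be sketched with per-class reservoir sampling: each predicted class keeps a fixed number of samples, so a heavily skewed stream still yields a balanced memory. This is a conceptual sketch of the idea, not the paper's PBRS implementation:

```python
import random

class BalancedReservoir:
    """Per-class reservoir sampling: each predicted class stores at most
    `capacity` samples, simulating an i.i.d., class-balanced memory
    from a non-i.i.d. test stream."""
    def __init__(self, num_classes, capacity):
        self.capacity = capacity
        self.seen = [0] * num_classes
        self.memory = [[] for _ in range(num_classes)]

    def add(self, sample, pred_class):
        self.seen[pred_class] += 1
        slot = self.memory[pred_class]
        if len(slot) < self.capacity:
            slot.append(sample)
        else:
            # classic reservoir rule: keep with prob capacity / seen
            j = random.randrange(self.seen[pred_class])
            if j < self.capacity:
                slot[j] = sample

random.seed(0)
res = BalancedReservoir(num_classes=2, capacity=2)
for t in range(100):                  # skewed stream: mostly class 0
    res.add(t, 0 if t % 10 else 1)
```

Despite seeing class 0 nine times as often, both classes end up with equally sized memories, which is the balancing property the abstract describes.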
arXiv Detail & Related papers (2022-08-10T03:05:46Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
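A class-discriminative alignment objective like the one CAFA describes can be illustrated by pulling each test feature toward the source-domain mean of its pseudo-label class. The squared-distance surrogate below is a hypothetical simplification for illustration, not the exact CAFA loss:

```python
import numpy as np

def class_aware_alignment_loss(feats, pseudo_labels, class_means):
    """Toy class-aware alignment: mean squared distance of each test
    feature to the source mean of its pseudo-labeled class."""
    diffs = feats - class_means[pseudo_labels]
    return float((diffs ** 2).sum(axis=1).mean())

# two source class means and two pseudo-labeled test features
means = np.array([[0.0, 0.0], [4.0, 4.0]])
feats = np.array([[0.5, 0.0], [4.0, 3.0]])
loss = class_aware_alignment_loss(feats, np.array([0, 1]), means)
```

Minimizing such a loss moves target features toward the statistics of their own class rather than a single global distribution, which is what "class-discriminative" alignment refers to.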
This list is automatically generated from the titles and abstracts of the papers in this site.