Feature Alignment and Uniformity for Test Time Adaptation
- URL: http://arxiv.org/abs/2303.10902v3
- Date: Sun, 21 May 2023 04:42:08 GMT
- Title: Feature Alignment and Uniformity for Test Time Adaptation
- Authors: Shuai Wang, Daoan Zhang, Zipei Yan, Jianguo Zhang, Rui Li
- Abstract summary: Test time adaptation aims to adapt deep neural networks when they receive out-of-distribution test-domain samples.
In this setting, the model can access only online unlabeled test samples and models pre-trained on the training domains.
- Score: 8.209137567840811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Test time adaptation (TTA) aims to adapt deep neural networks when they
receive out-of-distribution test-domain samples. In this setting, the model can
access only online unlabeled test samples and models pre-trained on the training
domains. We first cast TTA as a feature revision problem arising from the domain
gap between source domains and target domains. We then analyze test time feature
revision through two measures, alignment and uniformity. For test time feature
uniformity, we propose a test time self-distillation strategy that keeps the
uniformity of the current batch's representations consistent with that of all
previous batches. For test time feature alignment, we propose a memorized spatial
local clustering strategy that aligns the representations of neighboring samples
for the upcoming batch. To handle the common noisy label problem, we propose
entropy and consistency filters to select reliable pseudo-labels and drop likely
noisy ones. To demonstrate the scalability and efficacy of our method, we conduct
experiments on four domain generalization benchmarks and four medical image
segmentation tasks with various backbones. Experimental results show that our
method not only consistently improves over the baselines but also outperforms
existing state-of-the-art test time adaptation methods. Code is available at
\href{https://github.com/SakurajimaMaiii/TSD}{https://github.com/SakurajimaMaiii/TSD}.
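The abstract describes three components: a self-distillation objective for uniformity, a memorized local clustering objective for alignment, and entropy and consistency filters that drop unreliable pseudo-labels. The snippet below is a minimal, illustrative sketch of how such filters could be implemented in PyTorch against a feature memory bank of previous test batches; the names (`filter_pseudo_labels`, `entropy_threshold`, `k`) are assumptions for illustration and are not taken from the released TSD code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def filter_pseudo_labels(logits, features, memory_feats, memory_labels,
                         entropy_threshold=0.4, k=5):
    """Select reliable pseudo-labels for a test batch (illustrative sketch).

    Entropy filter: keep samples whose normalized prediction entropy is low.
    Consistency filter: keep samples whose pseudo-label agrees with the
    majority label of their k nearest neighbours in a feature memory bank
    built from previous test batches.
    """
    probs = logits.softmax(dim=1)
    pseudo_labels = probs.argmax(dim=1)

    # Entropy filter: normalized entropy lies in [0, 1].
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(logits.size(1))))
    keep_entropy = entropy < entropy_threshold

    # Consistency filter: majority vote over k nearest memory-bank neighbours.
    k = min(k, memory_feats.size(0))
    feats = F.normalize(features, dim=1)
    bank = F.normalize(memory_feats, dim=1)
    sims = feats @ bank.t()                   # (B, M) cosine similarities
    nn_idx = sims.topk(k, dim=1).indices      # (B, k) neighbour indices
    nn_labels = memory_labels[nn_idx]         # (B, k) neighbour labels
    nn_vote = nn_labels.mode(dim=1).values    # majority label per sample
    keep_consistency = nn_vote == pseudo_labels

    return pseudo_labels, keep_entropy & keep_consistency
```

In a full adaptation loop of this kind, only samples passing both filters would feed the self-distillation and clustering objectives, while the filtered-out samples are ignored for the current update; the exact thresholds and losses in the released code may differ.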
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236] (2024-09-28)
Training-free test-time dynamic adapter (TDA) is a promising approach to address this issue.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota).
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
- Channel-Selective Normalization for Label-Shift Robust Test-Time Adaptation [16.657929958093824] (2024-02-07)
Test-time adaptation is an approach to adjusting models to a new data distribution during inference.
Test-time batch normalization is a simple and popular method that achieves compelling performance on domain-shift benchmarks.
We propose to tackle this challenge by selectively adapting only a subset of channels in a deep network, minimizing drastic adaptation that is sensitive to label shifts.
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784] (2024-01-15)
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
- Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization [64.62570402941387] (2023-11-02)
We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain.
Our method improves zero-shot top-1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe.
- TeST: Test-time Self-Training under Distribution Shift [99.68465267994783] (2022-09-23)
Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time.
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912] (2022-06-01)
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
- Continual Test-Time Domain Adaptation [94.51284735268597] (2022-03-25)
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
CoTTA is easy to implement and can be readily incorporated into off-the-shelf pre-trained models.
- MixNorm: Test-Time Adaptation Through Online Normalization Estimation [35.65295482033232] (2021-10-21)
We present a simple and effective way to estimate the batch-norm statistics during test time, to quickly adapt a source model to target test samples (see the normalization sketch after this list).
Known as test-time adaptation, most prior works studying this task follow two assumptions in their evaluation: (1) test samples come together as a large batch, and (2) all come from a single test distribution.
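Two of the entries above (Channel-Selective Normalization and MixNorm) center on re-estimating batch-normalization statistics from incoming test batches. The snippet below is a generic sketch of that idea in PyTorch, assuming a standard BatchNorm-based backbone; the helper name `enable_test_time_bn` and the momentum value are illustrative and do not reproduce the exact MixNorm or channel-selective procedures.

```python
import torch
import torch.nn as nn

def enable_test_time_bn(model: nn.Module, momentum: float = 0.1) -> nn.Module:
    """Let BatchNorm layers update their running statistics from test batches.

    Generic sketch of test-time normalization adaptation: every BN layer is
    put in training mode so incoming test batches are normalized with (and
    folded into) the running statistics, while all weights stay frozen.
    """
    model.eval()
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            module.train()                 # use batch statistics at test time
            module.momentum = momentum     # how fast running stats follow test data
    for param in model.parameters():
        param.requires_grad_(False)        # adaptation here touches only BN statistics
    return model

# Usage: wrap a pre-trained source model before streaming test batches.
# model = enable_test_time_bn(pretrained_model)
# with torch.no_grad():
#     preds = model(test_batch).argmax(dim=1)
```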
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.