MixNorm: Test-Time Adaptation Through Online Normalization Estimation
- URL: http://arxiv.org/abs/2110.11478v1
- Date: Thu, 21 Oct 2021 21:04:42 GMT
- Title: MixNorm: Test-Time Adaptation Through Online Normalization Estimation
- Authors: Xuefeng Hu, Gokhan Uzunbas, Sirius Chen, Rui Wang, Ashish Shah, Ram
Nevatia and Ser-Nam Lim
- Abstract summary: We present a simple and effective way to estimate the batch-norm statistics during test time, to quickly adapt a source model to target test samples.
This task is known as Test-Time Adaptation; most prior works studying it follow two assumptions in their evaluation: (1) test samples arrive together as a large batch, and (2) they all come from a single test distribution.
- Score: 35.65295482033232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a simple and effective way to estimate the batch-norm statistics
during test time, to quickly adapt a source model to target test samples. This task is
known as Test-Time Adaptation; most prior works studying it follow two assumptions in
their evaluation: (1) test samples arrive together as a large batch, and (2) they all
come from a single test distribution. However, in practice these two assumptions may
not hold, which is why we propose two new evaluation settings in which batch sizes are
arbitrary and multiple distributions are considered. Unlike previous methods, which
require a large batch from a single distribution at test time to calculate stable
batch-norm statistics, our method avoids any dependency on large online batches and can
estimate accurate batch-norm statistics from a single sample. The proposed method
significantly outperforms the state of the art in the newly proposed settings for the
Test-Time Adaptation task, and also demonstrates improvements in various other settings
such as Source-Free Unsupervised Domain Adaptation and Zero-Shot Classification.
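The abstract's core idea, replacing the source model's frozen batch-norm statistics with statistics estimated from the incoming test data (even a single sample), can be illustrated with a small PyTorch sketch. This is a minimal illustration of that general idea only, not the authors' exact MixNorm algorithm; the wrapper class, the mixing coefficient `alpha`, and the helper `swap_bn` are assumptions introduced here for clarity.

```python
# Minimal sketch (NOT the authors' exact MixNorm procedure): at test time, blend the
# source model's stored BatchNorm running statistics with statistics computed from the
# current test input, so that even a batch of size one yields usable estimates.
import torch
import torch.nn as nn


class MixedStatsBN2d(nn.Module):
    """Wraps a trained BatchNorm2d and re-estimates its statistics at test time."""

    def __init__(self, source_bn: nn.BatchNorm2d, alpha: float = 0.5):
        super().__init__()
        self.source_bn = source_bn
        self.alpha = alpha  # weight on the test-time statistics (assumed hyperparameter)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-channel statistics of the current test input; this works even for a
        # single sample because the spatial dimensions still provide many values.
        test_mean = x.mean(dim=(0, 2, 3))
        test_var = x.var(dim=(0, 2, 3), unbiased=False)

        # Blend test-time statistics with the source-domain running statistics.
        mean = self.alpha * test_mean + (1.0 - self.alpha) * self.source_bn.running_mean
        var = self.alpha * test_var + (1.0 - self.alpha) * self.source_bn.running_var

        # Normalize with the blended statistics, keeping the trained affine parameters.
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + self.source_bn.eps)
        return self.source_bn.weight[None, :, None, None] * x_hat + self.source_bn.bias[None, :, None, None]


def swap_bn(module: nn.Module, alpha: float = 0.5) -> None:
    """Recursively replace every BatchNorm2d in a pretrained model with the wrapper."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, MixedStatsBN2d(child, alpha))
        else:
            swap_bn(child, alpha)
```

With the swap applied, the model can be run in eval() mode on arbitrary, even size-one, test batches; how the test-time estimates are weighted, smoothed, or augmented is exactly where methods such as MixNorm differ from this simplified sketch.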
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
The training-free test-time dynamic adapter (TDA) is a promising approach to addressing this issue.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota).
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Unraveling Batch Normalization for Realistic Test-Time Adaptation [22.126177142716188]
This paper delves into the problem of mini-batch degradation.
By unraveling batch normalization, we discover that the inexact target statistics largely stem from the substantially reduced class diversity in each batch.
We introduce a straightforward tool, Test-time Exponential Moving Average (TEMA), to bridge the class diversity gap between training and testing batches (a minimal sketch of this style of statistics update appears after this list).
arXiv Detail & Related papers (2023-12-15T01:52:35Z)
- On Pitfalls of Test-Time Adaptation [82.8392232222119]
Test-Time Adaptation (TTA) has emerged as a promising approach for tackling the robustness challenge under distribution shifts.
We present TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art algorithms, a diverse array of distribution shifts, and two evaluation protocols.
arXiv Detail & Related papers (2023-06-06T09:35:29Z)
- Robust Test-Time Adaptation in Dynamic Scenarios [9.475271284789969]
Test-time adaptation (TTA) intends to adapt the pretrained model to test distributions with only unlabeled test data streams.
We develop a Robust Test-Time Adaptation (RoTTA) method to handle the complex data streams encountered in practical test-time adaptation (PTTA).
Our method is easy to implement, making it a good choice for rapid deployment.
arXiv Detail & Related papers (2023-03-24T10:19:14Z)
- DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in prevalent adaptation methodologies such as test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
arXiv Detail & Related papers (2023-01-30T15:54:00Z)
- Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory [58.72445309519892]
We present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams.
Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates i.i.d. data stream from non-i.i.d. stream in a class-balanced manner.
arXiv Detail & Related papers (2022-08-10T03:05:46Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
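Several entries above, notably the TEMA tool and the test-time BN analysis in DELTA, revolve around re-estimating batch-norm statistics from the test stream. As a rough illustration only, not the exact procedure of any paper listed here, the sketch below applies an exponential-moving-average update to a BatchNorm layer's running statistics from incoming test batches; the `momentum` value and the function name are assumptions made for the example.

```python
# Illustrative sketch (not the exact method of any paper above): blend the statistics
# of each incoming test batch into a trained BatchNorm2d layer's running estimates.
import torch
import torch.nn as nn


@torch.no_grad()
def ema_update_bn_stats(bn: nn.BatchNorm2d, x: torch.Tensor, momentum: float = 0.1) -> None:
    """Exponential-moving-average update of running statistics from a test batch.

    `momentum` is an assumed hyperparameter controlling how quickly the running
    statistics track the test distribution.
    """
    batch_mean = x.mean(dim=(0, 2, 3))                 # per-channel mean
    batch_var = x.var(dim=(0, 2, 3), unbiased=False)   # per-channel variance
    bn.running_mean.mul_(1.0 - momentum).add_(momentum * batch_mean)
    bn.running_var.mul_(1.0 - momentum).add_(momentum * batch_var)
```

Keeping the model in eval() mode after each update means the forward pass normalizes with these smoothed running statistics rather than with the raw statistics of the current, possibly tiny, batch.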
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.