Parameter-free Online Test-time Adaptation
- URL: http://arxiv.org/abs/2201.05718v1
- Date: Sat, 15 Jan 2022 00:29:16 GMT
- Title: Parameter-free Online Test-time Adaptation
- Authors: Malik Boudiaf, Romain Mueller, Ismail Ben Ayed, Luca Bertinetto
- Abstract summary: We show how test-time adaptation methods fare for a number of pre-trained models on a variety of real-world scenarios.
We propose a particularly "conservative" approach, which addresses the problem with a Laplacian Adjusted Maximum-likelihood Estimation (LAME) objective.
Our approach exhibits a much higher average accuracy across scenarios than existing methods, while being notably faster and having a much lower memory footprint.
- Score: 19.279048049267388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training state-of-the-art vision models has become prohibitively expensive
for researchers and practitioners. For the sake of accessibility and resource
reuse, it is important to focus on adapting these models to a variety of
downstream scenarios. An interesting and practical paradigm is online test-time
adaptation, according to which training data is inaccessible, no labelled data
from the test distribution is available, and adaptation can only happen at test
time and on a handful of samples. In this paper, we investigate how test-time
adaptation methods fare for a number of pre-trained models on a variety of
real-world scenarios, significantly extending the way they have been originally
evaluated. We show that they perform well only in narrowly-defined experimental
setups and sometimes fail catastrophically when their hyperparameters are not
selected for the same scenario in which they are being tested. Motivated by the
inherent uncertainty around the conditions that will ultimately be encountered
at test time, we propose a particularly "conservative" approach, which
addresses the problem with a Laplacian Adjusted Maximum-likelihood Estimation
(LAME) objective. By adapting the model's output (not its parameters), and
solving our objective with an efficient concave-convex procedure, our approach
exhibits a much higher average accuracy across scenarios than existing methods,
while being notably faster and having a much lower memory footprint. Code
available at https://github.com/fiveai/LAME.
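The abstract describes adapting the model's output rather than its parameters, by solving a Laplacian-adjusted objective with a concave-convex procedure. A minimal sketch of that idea in numpy: starting from the softmax outputs, repeatedly pull each sample's assignment toward those of its neighbours and renormalise. The function name, the specific fixed-point update, and the affinity construction here are illustrative assumptions, not the authors' released code (see the repository above for that).

```python
import numpy as np

def lame_adjust(probs, affinity, n_iters=10):
    """Illustrative LAME-style output adaptation (not the official code).

    probs:    (N, K) softmax outputs of a frozen model.
    affinity: (N, N) non-negative sample-to-sample similarities.
    Each iteration re-weights a sample's assignment by its neighbours'
    assignments, then renormalises rows back onto the simplex."""
    z = probs.copy()
    for _ in range(n_iters):
        z = probs * np.exp(affinity @ z)   # neighbour-smoothed re-weighting
        z /= z.sum(axis=1, keepdims=True)  # rows remain probability vectors
    return z
```

Note that only the outputs `z` change: the network's weights are never touched, which is what makes this style of adaptation cheap in both time and memory.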
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
The training-free test-time dynamic adapter (TDA) is a promising approach to adapting vision-language models at test time.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota).
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- On Pitfalls of Test-Time Adaptation [82.8392232222119]
Test-Time Adaptation (TTA) has emerged as a promising approach for tackling the robustness challenge under distribution shifts.
We present TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art algorithms, a diverse array of distribution shifts, and two evaluation protocols.
arXiv Detail & Related papers (2023-06-06T09:35:29Z)
- DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
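The first defect above concerns test-time batch normalization: statistics estimated only from the current test batch are noisy, especially for small batches. A common remedy, sketched below, is to blend the test batch's statistics with the source-domain running statistics; the blending weight `alpha` and this particular rule are illustrative assumptions, not DELTA's exact method.

```python
import numpy as np

def recalibrated_bn_stats(source_mean, source_var, batch, alpha=0.9):
    """Blend source-domain BN running statistics with test-batch
    statistics, instead of relying on the (noisy) batch alone.
    alpha=1.0 keeps the source statistics; alpha=0.0 uses only the batch."""
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    mean = alpha * source_mean + (1 - alpha) * batch_mean
    var = alpha * source_var + (1 - alpha) * batch_var
    return mean, var
```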
arXiv Detail & Related papers (2023-01-30T15:54:00Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
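A Fisher regularizer of this kind typically penalises movement of parameters that the Fisher information marks as important, in the spirit of elastic weight consolidation. A minimal sketch under a diagonal-Fisher assumption; the function name, the diagonal approximation, and the weight `lam` are illustrative, not the paper's exact formulation.

```python
import numpy as np

def fisher_penalty(params, anchor_params, fisher_diag, lam=1.0):
    """Quadratic penalty on parameter drift, weighted per-parameter by a
    diagonal Fisher information estimate: important parameters (large
    Fisher values) are held close to their pre-adaptation anchors."""
    penalty = 0.0
    for p, p0, f in zip(params, anchor_params, fisher_diag):
        penalty += lam * np.sum(f * (p - p0) ** 2)
    return penalty
```

Adding this term to the adaptation loss discourages drastic changes to important parameters, which is one way to mitigate forgetting of the source model during test-time updates.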
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test time adaptation, however, they each introduce additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
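One common instantiation of adaptation for a probabilistic model is to average its predictive distributions over augmented copies of a single test input and minimise the entropy of that marginal, which rewards confident, augmentation-consistent predictions. The sketch below computes only the objective; the function name is an illustrative assumption, and the actual method should be taken from the paper above.

```python
import numpy as np

def marginal_entropy(probs_per_aug):
    """Entropy of the marginal predictive distribution, averaged over
    augmentations. probs_per_aug: (A, K) array of softmax outputs for A
    augmented views of one input; lower values mean more confident,
    augmentation-consistent predictions."""
    marginal = np.mean(probs_per_aug, axis=0)           # (K,) average over views
    return -np.sum(marginal * np.log(marginal + 1e-12))  # small eps avoids log(0)
```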
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Model adaptation and unsupervised learning with non-stationary batch data under smooth concept drift [8.068725688880772]
Most predictive models assume that training and test data are generated from a stationary process.
We consider the scenario of a gradual concept drift due to the underlying non-stationarity of the data source.
We propose a novel, iterative algorithm for unsupervised adaptation of predictive models.
arXiv Detail & Related papers (2020-02-10T21:29:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.