REALM: Robust Entropy Adaptive Loss Minimization for Improved
Single-Sample Test-Time Adaptation
- URL: http://arxiv.org/abs/2309.03964v1
- Date: Thu, 7 Sep 2023 18:44:58 GMT
- Title: REALM: Robust Entropy Adaptive Loss Minimization for Improved
Single-Sample Test-Time Adaptation
- Authors: Skyler Seto, Barry-John Theobald, Federico Danieli, Navdeep Jaitly,
Dan Busbridge
- Abstract summary: Fully-test-time adaptation (F-TTA) can mitigate performance loss due to distribution shifts between train and test data.
We present a general framework for improving robustness of F-TTA to noisy samples, inspired by self-paced learning and robust loss functions.
- Score: 5.749155230209001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully-test-time adaptation (F-TTA) can mitigate performance loss due to
distribution shifts between train and test data (1) without access to the
training data, and (2) without knowledge of the model training procedure. In
online F-TTA, a pre-trained model is adapted using a stream of test samples by
minimizing a self-supervised objective, such as entropy minimization. However,
models adapted online using entropy minimization are unstable, especially
in single-sample settings, leading to degenerate solutions and limiting the
adoption of TTA inference strategies. Prior works identify noisy, or
unreliable, samples as a cause of failure in online F-TTA. One solution is to
ignore these samples, which can lead to bias in the update procedure, slow
adaptation, and poor generalization. In this work, we present a general
framework for improving robustness of F-TTA to these noisy samples, inspired by
self-paced learning and robust loss functions. Our proposed approach, Robust
Entropy Adaptive Loss Minimization (REALM), achieves better adaptation accuracy
than previous approaches throughout the adaptation process on corruptions of
CIFAR-10 and ImageNet-1K, demonstrating its effectiveness.
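To make the idea concrete, here is a minimal PyTorch sketch of a robustly weighted entropy objective in the spirit the abstract describes; the sigmoid-style weighting and the threshold hyperparameter are illustrative assumptions, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def sample_entropy(logits: torch.Tensor) -> torch.Tensor:
        # Shannon entropy of the softmax prediction, one value per sample.
        log_probs = F.log_softmax(logits, dim=-1)
        return -(log_probs.exp() * log_probs).sum(dim=-1)

    def robust_entropy_loss(logits: torch.Tensor, threshold: float = 2.0) -> torch.Tensor:
        # Down-weight noisy (high-entropy) samples instead of discarding them,
        # in the spirit of self-paced learning and robust loss functions.
        # The sigmoid weighting and `threshold` are assumptions for illustration.
        ent = sample_entropy(logits)
        weights = torch.sigmoid(threshold - ent).detach()  # soft weight in (0, 1)
        return (weights * ent).mean()

Unlike a hard filter that ignores unreliable samples outright, a soft weight like this keeps every sample contributing to the update, which is one way to avoid the bias and slow adaptation the abstract attributes to sample rejection.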
Related papers
- COME: Test-time adaption by Conservatively Minimizing Entropy [45.689829178140634]
Conservatively Minimizing Entropy (COME) is a drop-in replacement for traditional entropy minimization (EM).
COME explicitly models the uncertainty by characterizing a Dirichlet prior distribution over model predictions.
We show that COME achieves state-of-the-art performance on commonly used benchmarks.
arXiv Detail & Related papers (2024-10-12T09:20:06Z) - Meta-TTT: A Meta-learning Minimax Framework For Test-Time Training [5.9631503543049895]
Test-time domain adaptation is a challenging task that aims to adapt a pre-trained model to limited, unlabeled target data during inference.
This paper introduces a meta-learning minimax framework for test-time training on batch normalization layers.
arXiv Detail & Related papers (2024-10-02T16:16:05Z) - ETAGE: Enhanced Test Time Adaptation with Integrated Entropy and Gradient Norms for Robust Model Performance [18.055032898349438]
Test time adaptation (TTA) equips deep learning models to handle unseen test data that deviates from the training distribution.
We introduce ETAGE, a refined TTA method that integrates entropy minimization with gradient norms and PLPD.
Our method prioritizes samples that are less likely to cause instability, excluding samples with both high entropy and high gradient norms from adaptation.
arXiv Detail & Related papers (2024-09-14T01:25:52Z) - Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Optimization Adaptation (FOA) method.
FOA runs on a quantized 8-bit ViT, outperforms gradient-based TENT on a full-precision 32-bit ViT, and achieves up to a 24-fold memory reduction on ImageNet-C.
arXiv Detail & Related papers (2024-04-02T05:34:33Z) - Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shifts at test time by adapting a model to unlabeled data.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z) - Listen, Adapt, Better WER: Source-free Single-utterance Test-time
Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is, to the best of our knowledge, the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z) - Tent: Fully Test-time Adaptation by Entropy Minimization [77.85911673550851]
A model must adapt itself to generalize to new and different data during testing.
In this setting of fully test-time adaptation the model has only the test data and its own parameters.
We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predictions.
arXiv Detail & Related papers (2020-06-18T17:55:28Z)
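For reference, here is a minimal sketch of Tent-style entropy minimization as summarized above: adapt only the batch-norm affine parameters by minimizing the entropy of the model's own predictions on each test batch (the optimizer choice below is an assumption for illustration):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def bn_affine_params(model: nn.Module):
        # Tent updates only the batch-norm scale/shift parameters.
        params = []
        for m in model.modules():
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
                params.extend(p for p in (m.weight, m.bias) if p is not None)
        return params

    @torch.enable_grad()
    def tent_step(model: nn.Module, x: torch.Tensor,
                  optimizer: torch.optim.Optimizer) -> torch.Tensor:
        # One online adaptation step: minimize mean prediction entropy.
        logits = model(x)
        log_probs = F.log_softmax(logits, dim=-1)
        loss = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return logits.detach()

    # Typical usage (illustrative):
    # optimizer = torch.optim.SGD(bn_affine_params(model), lr=1e-3)

In the single-sample stream that the REALM abstract targets, x holds one test image at a time, which is exactly the regime where plain entropy minimization becomes unstable and a robust weighting is meant to help.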