Tent: Fully Test-time Adaptation by Entropy Minimization
- URL: http://arxiv.org/abs/2006.10726v3
- Date: Thu, 18 Mar 2021 17:58:01 GMT
- Title: Tent: Fully Test-time Adaptation by Entropy Minimization
- Authors: Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor
Darrell
- Abstract summary: A model must adapt itself to generalize to new and different data during testing.
In this setting of fully test-time adaptation the model has only the test data and its own parameters.
We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predictions.
- Score: 77.85911673550851
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A model must adapt itself to generalize to new and different data during
testing. In this setting of fully test-time adaptation the model has only the
test data and its own parameters. We propose to adapt by test entropy
minimization (tent): we optimize the model for confidence as measured by the
entropy of its predictions. Our method estimates normalization statistics and
optimizes channel-wise affine transformations to update online on each batch.
Tent reduces generalization error for image classification on corrupted
ImageNet and CIFAR-10/100 and reaches a new state-of-the-art error on
ImageNet-C. Tent handles source-free domain adaptation on digit recognition
from SVHN to MNIST/MNIST-M/USPS, on semantic segmentation from GTA to
Cityscapes, and on the VisDA-C benchmark. These results are achieved in one
epoch of test-time optimization without altering training.
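
The adaptation loop described in the abstract can be summarized in a short sketch. The following is a minimal PyTorch rendering of the idea, not the authors' reference implementation: freeze all weights except the channel-wise scale and shift of the normalization layers, normalize with the statistics of the current test batch, and take one gradient step per batch on the entropy of the predictions. The `my_pretrained_model` and `test_loader` names in the usage comments are placeholders.

```python
import torch
import torch.nn as nn

def configure_model(model: nn.Module) -> nn.Module:
    """Freeze all weights except the channel-wise affine parameters of
    BatchNorm layers and force normalization with test-batch statistics."""
    model.train()                            # use batch statistics, not running averages
    model.requires_grad_(False)
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)           # adapt scale (gamma) and shift (beta)
            m.track_running_stats = False    # always normalize with the current batch
            m.running_mean = None
            m.running_var = None
    return model

def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax predictions."""
    log_probs = logits.log_softmax(dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()

def adapt_and_predict(model: nn.Module, x: torch.Tensor,
                      optimizer: torch.optim.Optimizer) -> torch.Tensor:
    """One online step: predict, minimize prediction entropy, update affine params."""
    logits = model(x)
    loss = entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits

# Example wiring: optimize only the BatchNorm affine parameters, online per batch.
# model = configure_model(my_pretrained_model)
# params = [p for m in model.modules() if isinstance(m, nn.BatchNorm2d)
#           for p in (m.weight, m.bias) if p is not None]
# optimizer = torch.optim.Adam(params, lr=1e-3)
# for batch in test_loader:
#     logits = adapt_and_predict(model, batch, optimizer)
```

Because only the normalization parameters are updated, the adapted parameter count stays small and the loop runs online, one batch at a time, without altering training.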
Related papers
- COME: Test-time adaption by Conservatively Minimizing Entropy [45.689829178140634]
Conservatively Minimize the Entropy (COME) is a drop-in replacement for traditional entropy minimization (EM).
COME explicitly models the uncertainty by characterizing a Dirichlet prior distribution over model predictions.
We show that COME achieves state-of-the-art performance on commonly used benchmarks.
arXiv Detail & Related papers (2024-10-12T09:20:06Z)
- The Entropy Enigma: Success and Failure of Entropy Minimization [30.083332640328642]
Entropy minimization (EM) is frequently used to increase the accuracy of classification models when they're faced with new data at test time.
We analyze why EM works when adapting a model for a few steps and why it eventually fails after adapting for many steps.
We present a method for solving a practical problem: estimating a model's accuracy on a given arbitrary dataset without having access to its labels.
arXiv Detail & Related papers (2024-05-08T12:26:15Z)
- Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Optimization Adaptation (FOA) method.
FOA runs on quantized 8-bit ViT, outperforms gradient-based TENT on full-precision 32-bit ViT, and achieves an up to 24-fold memory reduction on ImageNet-C.
arXiv Detail & Related papers (2024-04-02T05:34:33Z)
- REALM: Robust Entropy Adaptive Loss Minimization for Improved Single-Sample Test-Time Adaptation [5.749155230209001]
Fully-test-time adaptation (F-TTA) can mitigate performance loss due to distribution shifts between train and test data.
We present a general framework for improving robustness of F-TTA to noisy samples, inspired by self-paced learning and robust loss functions.
arXiv Detail & Related papers (2023-09-07T18:44:58Z)
- Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models [107.05966685291067]
We propose test-time prompt tuning (TPT) to learn adaptive prompts on the fly with a single test sample.
TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average.
In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data.
arXiv Detail & Related papers (2022-09-15T17:55:11Z)
- Sample-dependent Adaptive Temperature Scaling for Improved Calibration [95.7477042886242]
Temperature scaling is a post-hoc approach to compensating for the miscalibration of neural networks.
We propose to predict a different temperature value for each input, allowing us to adjust the mismatch between confidence and accuracy.
We test our method on the ResNet50 and WideResNet28-10 architectures using the CIFAR10/100 and Tiny-ImageNet datasets.
arXiv Detail & Related papers (2022-07-13T14:13:49Z)
- On-the-Fly Test-time Adaptation for Medical Image Segmentation [63.476899335138164]
Adapting the source model to the target data distribution at test time is an efficient solution to the data-shift problem.
We propose a new framework called Adaptive UNet where each convolutional block is equipped with an adaptive batch normalization layer.
During test-time, the model takes in just the new test image and generates a domain code to adapt the features of source model according to the test data.
arXiv Detail & Related papers (2022-03-10T18:51:29Z)
- Test-time Batch Statistics Calibration for Covariate Shift [66.7044675981449]
We propose to adapt the deep models to the novel environment during inference.
We present a general formulation, $\alpha$-BN, to calibrate the batch statistics (see the sketch after this list).
We also present a novel loss function to form a unified test-time adaptation framework, Core.
arXiv Detail & Related papers (2021-10-06T08:45:03Z)
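
As a rough illustration of the batch-statistics calibration idea mentioned for $\alpha$-BN above, the sketch below blends the source model's stored normalization statistics with the current test-batch statistics using a mixing coefficient `alpha`. This is a plausible reading of the summary, not the paper's exact formulation; the `AlphaBatchNorm2d` name is made up for illustration, and the layer assumes a standard affine `nn.BatchNorm2d` from the source model.

```python
import torch
import torch.nn as nn

class AlphaBatchNorm2d(nn.Module):
    """Illustrative blend of stored source statistics with current test-batch
    statistics; `alpha` controls how much the test batch is trusted."""
    def __init__(self, source_bn: nn.BatchNorm2d, alpha: float = 0.9):
        super().__init__()
        self.source_bn = source_bn
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-channel statistics of the current test batch.
        batch_mean = x.mean(dim=(0, 2, 3))
        batch_var = x.var(dim=(0, 2, 3), unbiased=False)
        # Blend with the source statistics stored in the pretrained BatchNorm.
        mean = self.alpha * batch_mean + (1 - self.alpha) * self.source_bn.running_mean
        var = self.alpha * batch_var + (1 - self.alpha) * self.source_bn.running_var
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + self.source_bn.eps)
        # Reuse the source model's channel-wise affine parameters (assumes affine=True).
        return x_hat * self.source_bn.weight[None, :, None, None] + self.source_bn.bias[None, :, None, None]
```

With `alpha = 1.0` this reduces to pure test-batch normalization, and with `alpha = 0.0` it falls back to the source statistics, which is the trade-off such a calibration is meant to expose.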