Ranked Entropy Minimization for Continual Test-Time Adaptation
- URL: http://arxiv.org/abs/2505.16441v1
- Date: Thu, 22 May 2025 09:29:38 GMT
- Title: Ranked Entropy Minimization for Continual Test-Time Adaptation
- Authors: Jisu Han, Jaemin Na, Wonjun Hwang
- Abstract summary: Test-time adaptation aims to adapt to realistic environments in an online manner by learning during test time. Entropy minimization has emerged as a principal strategy for test-time adaptation due to its efficiency and adaptability. We propose ranked entropy minimization to mitigate the stability problem of the entropy minimization method.
- Score: 7.5140668729696145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Test-time adaptation aims to adapt to realistic environments in an online manner by learning during test time. Entropy minimization has emerged as a principal strategy for test-time adaptation due to its efficiency and adaptability. Nevertheless, it remains underexplored in continual test-time adaptation, where stability is more important. We observe that the entropy minimization method often suffers from model collapse, where the model converges to predicting a single class for all images due to a trivial solution. We propose ranked entropy minimization to mitigate the stability problem of the entropy minimization method and extend its applicability to continuous scenarios. Our approach explicitly structures the prediction difficulty through a progressive masking strategy. Specifically, it gradually aligns the model's probability distributions across different levels of prediction difficulty while preserving the rank order of entropy. The proposed method is extensively evaluated across various benchmarks, demonstrating its effectiveness through empirical results. Our code is available at https://github.com/pilsHan/rem
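The loss at the heart of this line of work is the Shannon entropy of the model's own predictions, minimized on unlabeled test data. As a minimal sketch (NumPy, hypothetical function names; not the authors' released code, which implements the additional ranked/progressive-masking machinery), the quantity being minimized can be written as:

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prediction_entropy(logits):
    """Mean Shannon entropy (in nats) of the softmax predictions.
    This is the quantity entropy-minimization TTA methods drive
    down with gradient steps at test time."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

# Confident predictions have low entropy; uniform ones have high entropy.
confident = np.array([[8.0, 0.0, 0.0]])
uniform = np.array([[1.0, 1.0, 1.0]])
```

The model collapse discussed in the abstract is visible in this objective: a trivial solution that maps every input to the same one-hot prediction attains zero entropy, which is why stability constraints such as the proposed rank-preserving masking matter in the continual setting.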
Related papers
- Online Decision-Focused Learning [63.83903681295497]
Decision-focused learning (DFL) is an increasingly popular paradigm for training predictive models whose outputs are used in decision-making tasks. We investigate DFL in dynamic environments where the objective function evolves over time. We establish bounds on the expected dynamic regret, both when the decision space is a simplex and when it is a general bounded convex polytope.
arXiv Detail & Related papers (2025-05-19T10:40:30Z) - COME: Test-time adaption by Conservatively Minimizing Entropy [45.689829178140634]
Conservatively Minimize the Entropy (COME) is a drop-in replacement for traditional entropy minimization (EM).
COME explicitly models the uncertainty by characterizing a Dirichlet prior distribution over model predictions.
We show that COME achieves state-of-the-art performance on commonly used benchmarks.
arXiv Detail & Related papers (2024-10-12T09:20:06Z) - Meta-TTT: A Meta-learning Minimax Framework For Test-Time Training [5.9631503543049895]
Test-time domain adaptation is a challenging task that aims to adapt a pre-trained model to limited, unlabeled target data during inference.
This paper introduces a meta-learning minimax framework for test-time training on batch normalization layers.
arXiv Detail & Related papers (2024-10-02T16:16:05Z) - The Entropy Enigma: Success and Failure of Entropy Minimization [30.083332640328642]
Entropy minimization (EM) is frequently used to increase the accuracy of classification models when they're faced with new data at test time.
We analyze why EM works when adapting a model for a few steps and why it eventually fails after adapting for many steps.
We present a method for solving a practical problem: estimating a model's accuracy on a given arbitrary dataset without having access to its labels.
arXiv Detail & Related papers (2024-05-08T12:26:15Z) - Improved Online Conformal Prediction via Strongly Adaptive Online Learning [86.4346936885507]
We develop new online conformal prediction methods that minimize the strongly adaptive regret.
We prove that our methods achieve near-optimal strongly adaptive regret for all interval lengths simultaneously.
Experiments show that our methods consistently obtain better coverage and smaller prediction sets than existing methods on real-world tasks.
arXiv Detail & Related papers (2023-02-15T18:59:30Z) - Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
CoTTA is easy to implement and can be readily incorporated in off-the-shelf pre-trained models.
arXiv Detail & Related papers (2022-03-25T11:42:02Z) - Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z) - Being Patient and Persistent: Optimizing An Early Stopping Strategy for Deep Learning in Profiled Attacks [2.7748013252318504]
We propose an early stopping algorithm that reliably recognizes the model's optimal state during training.
We formalize two conditions, persistence and patience, for a deep learning model to be optimal.
arXiv Detail & Related papers (2021-11-29T09:54:45Z) - Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection [7.685002911021767]
We introduce an algorithm that efficiently learns policies in non-stationary environments.
It analyzes a possibly infinite stream of data and computes, in real-time, high-confidence change-point detection statistics.
We show that this algorithm minimizes the delay until unforeseen changes to a context are detected, thereby allowing for rapid responses.
arXiv Detail & Related papers (2021-05-20T01:57:52Z) - Tent: Fully Test-time Adaptation by Entropy Minimization [77.85911673550851]
A model must adapt itself to generalize to new and different data during testing.
In this setting of fully test-time adaptation the model has only the test data and its own parameters.
We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predictions.
arXiv Detail & Related papers (2020-06-18T17:55:28Z)
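Tent's objective can be illustrated on a single test sample. As a toy sketch only (Tent itself updates a network's normalization parameters, not raw logits), plain gradient descent on the logits using the analytic gradient of the entropy, dH/dz = -p * (log p + H), steadily sharpens the prediction:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax for a single logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy_and_grad(z):
    """Shannon entropy H of softmax(z) and its gradient w.r.t. z.
    Analytically, dH/dz = -p * (log p + H)."""
    p = softmax(z)
    H = float(-(p * np.log(p)).sum())
    return H, -p * (np.log(p) + H)

# Mildly confident logits for one test sample; entropy descent
# pushes the distribution toward a one-hot prediction.
z = np.array([2.0, 1.0, 0.5])
history = []
for _ in range(50):
    H, g = entropy_and_grad(z)
    history.append(H)
    z -= 0.5 * g  # one "adaptation" step: descend the entropy
```

Note that the descent only reinforces the current argmax; this is exactly the confirmation-bias failure mode that, over long continual streams, can collapse the model onto a single class, motivating the stabilized variants surveyed above.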
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.