Persistent Test-time Adaptation in Episodic Testing Scenarios
- URL: http://arxiv.org/abs/2311.18193v2
- Date: Tue, 16 Jan 2024 14:16:21 GMT
- Title: Persistent Test-time Adaptation in Episodic Testing Scenarios
- Authors: Trung-Hieu Hoang, Duc Minh Vo, Minh N. Do
- Abstract summary: Current test-time adaptation approaches aim to adapt to environments that change continuously.
It is unclear whether the adaptability of these methods is sustained after a long run.
This study proposes a novel testing setting called episodic TTA.
- Score: 13.514033978964308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current test-time adaptation (TTA) approaches aim to adapt to environments
that change continuously. Yet, when the environments not only change but also
recur in a correlated manner over time, such as in the case of day-night
surveillance cameras, it is unclear whether the adaptability of these methods
is sustained after a long run. This study aims to examine the error
accumulation of TTA models when they are repeatedly exposed to previous testing
environments, proposing a novel testing setting called episodic TTA. To study
this phenomenon, we design a simulation of the TTA process on a simple yet
representative $\epsilon$-perturbed Gaussian Mixture Model Classifier and
derive the theoretical findings revealing the dataset- and algorithm-dependent
factors that contribute to the gradual degeneration of TTA methods through
time. Our investigation has led us to propose a method, named persistent TTA
(PeTTA). PeTTA senses the model's divergence towards collapse and adjusts the
adaptation strategy of TTA, striking a balance between two primary objectives:
adaptation and preventing model collapse. The stability of PeTTA in the face of
episodic TTA scenarios has been demonstrated through a set of comprehensive
experiments on various benchmarks.
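The episodic setting described above can be sketched with a toy self-training classifier that repeatedly revisits the same shifted domains while error is tracked per visit. This is a minimal illustration of the testing protocol, not the paper's $\epsilon$-perturbed GMM analysis or PeTTA itself; the model, shifts, and learning rate are all assumptions chosen for clarity.

```python
import random
import statistics

def make_batch(domain_shift, n=200, seed=0):
    """Toy 1-D two-class Gaussian data, offset by the current domain shift."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        y = rng.randint(0, 1)
        mu = (-1.0 if y == 0 else 1.0) + domain_shift
        xs.append(rng.gauss(mu, 0.7))
        ys.append(y)
    return xs, ys

class SelfTrainingClassifier:
    """Nearest-mean classifier adapted with its own pseudo-labels (TTA-style)."""
    def __init__(self):
        self.mu0, self.mu1 = -1.0, 1.0  # source-trained class means

    def predict(self, x):
        return 0 if abs(x - self.mu0) < abs(x - self.mu1) else 1

    def adapt(self, xs, lr=0.2):
        # Move each class mean toward the mean of its pseudo-labeled samples;
        # repeated self-training like this is where error can accumulate.
        for c in (0, 1):
            sel = [x for x in xs if self.predict(x) == c]
            if sel:
                m = statistics.mean(sel)
                if c == 0:
                    self.mu0 += lr * (m - self.mu0)
                else:
                    self.mu1 += lr * (m - self.mu1)

def episodic_tta(episodes=5, domains=(0.0, 1.5)):
    """Revisit the same domains repeatedly; record the error at each visit."""
    model = SelfTrainingClassifier()
    errors = []
    for ep in range(episodes):
        for d_idx, shift in enumerate(domains):
            xs, ys = make_batch(shift, seed=ep * 10 + d_idx)
            err = sum(model.predict(x) != y for x, y in zip(xs, ys)) / len(xs)
            errors.append(err)
            model.adapt(xs)
    return errors

print([round(e, 3) for e in episodic_tta()])
```

Plotting `errors` against the visit index reveals whether accuracy on a previously seen domain degrades after many episodes, which is exactly the question the episodic TTA setting poses.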
Related papers
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in optimization costs that are unbearable for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors [36.54076844195179]
Test-time adaptation (TTA) fine-tunes pre-trained deep neural networks for unseen test data.
We introduce a novel TTA method named Destroy Your Object (DeYO)
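The entropy-based adaptation that this paper critiques can be sketched as follows: a standard (Tent-style) TTA objective minimizes the entropy of the model's own predictions, making it more confident on test inputs. This is a generic illustration using a numeric gradient on raw logits, not the DeYO method; the step size and iteration count are assumptions.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_minimization_step(logits, lr=0.5, eps=1e-4):
    """One gradient-descent step on prediction entropy (numeric gradient)."""
    base = entropy(softmax(logits))
    grads = []
    for i in range(len(logits)):
        bumped = list(logits)
        bumped[i] += eps
        grads.append((entropy(softmax(bumped)) - base) / eps)
    return [z - lr * g for z, g in zip(logits, grads)]

logits = [1.0, 0.8, 0.2]
for _ in range(20):
    logits = entropy_minimization_step(logits)
# Entropy drops as the model grows more confident in the argmax class.
print(round(entropy(softmax(logits)), 3))
```

Note that this objective rewards confidence regardless of correctness, which is the blind spot motivating the paper's look at factors beyond entropy.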
arXiv Detail & Related papers (2024-03-12T07:01:57Z)
- RDumb: A simple approach that questions our progress in continual test-time adaptation [12.374649969346441]
Test-Time Adaptation (TTA) allows pre-trained models to be updated to changing data distributions at deployment time.
Recent work proposed and applied methods for continual adaptation over long timescales.
We find that all but one of the state-of-the-art methods eventually collapse and perform worse than a non-adapting model.
arXiv Detail & Related papers (2023-06-08T17:52:34Z)
- Test-Time Adaptation with Perturbation Consistency Learning [32.58879780726279]
We propose a simple test-time adaptation method to promote the model to make stable predictions for samples with distribution shifts.
Our method can achieve higher or comparable performance with less inference time over strong PLM backbones.
arXiv Detail & Related papers (2023-04-25T12:29:22Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Towards Stable Test-Time Adaptation in Dynamic Wild World [60.98073673220025]
Test-time adaptation (TTA) has been shown to be effective at tackling distribution shifts between training and testing data by adapting a given model on test samples.
Online model updating in TTA can be unstable, which is often a key obstacle preventing existing TTA methods from being deployed in the real world.
arXiv Detail & Related papers (2023-02-24T02:03:41Z)
- DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two defects are hidden in prevalent adaptation methodologies such as test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are estimated solely from the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
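The first defect above can be made concrete with a small sketch: pure test-time BN estimates normalization statistics from the current batch alone, which is noisy for small or skewed batches, whereas blending in source statistics stabilizes the estimate. This is a generic illustration of the problem, not DELTA's actual remedy; the blending coefficient `alpha` and the 1-D setup are assumptions.

```python
import statistics

def bn_normalize(batch, source_mean, source_var, alpha):
    """Normalize with a blend of source and current-batch statistics.

    alpha = 0 reproduces pure test-time BN (current batch only), which
    over-fits the statistics to whatever samples just arrived;
    alpha close to 1 trusts the source statistics instead.
    """
    b_mean = statistics.mean(batch)
    b_var = statistics.pvariance(batch)
    mean = alpha * source_mean + (1 - alpha) * b_mean
    var = alpha * source_var + (1 - alpha) * b_var
    return [(x - mean) / (var + 1e-5) ** 0.5 for x in batch]

batch = [2.0, 2.1, 1.9, 2.2]  # small, heavily shifted test batch
pure_tbn = bn_normalize(batch, source_mean=0.0, source_var=1.0, alpha=0.0)
blended = bn_normalize(batch, source_mean=0.0, source_var=1.0, alpha=0.8)
print(round(sum(pure_tbn), 3), round(sum(blended), 3))
```

With `alpha=0.0` the batch is centered exactly on itself, erasing the genuine distribution shift; the blended version still reflects that the batch sits far from the source mean.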
arXiv Detail & Related papers (2023-01-30T15:54:00Z)
- Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory [58.72445309519892]
We present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams.
Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates i.i.d. data stream from non-i.i.d. stream in a class-balanced manner.
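The idea behind component (b) can be sketched as a per-class reservoir: by reserving an equal memory slot for each predicted class and doing classic reservoir replacement within each slot, a temporally correlated stream is turned into a class-balanced memory. This is a minimal sketch of the concept, not the paper's exact PBRS algorithm; the function name and capacities are assumptions.

```python
import random

def prediction_balanced_reservoir(stream, capacity_per_class, num_classes, seed=0):
    """Keep a per-class reservoir so the memory stays class-balanced
    even when the incoming stream is temporally correlated (non-i.i.d.)."""
    rng = random.Random(seed)
    memory = {c: [] for c in range(num_classes)}
    seen = {c: 0 for c in range(num_classes)}
    for x, pred in stream:
        seen[pred] += 1
        if len(memory[pred]) < capacity_per_class:
            memory[pred].append(x)
        else:
            # Classic reservoir replacement, confined to this class's slot.
            j = rng.randrange(seen[pred])
            if j < capacity_per_class:
                memory[pred][j] = x
    return memory

# Highly non-i.i.d. stream: a long run of class 0 followed by a burst of class 1.
stream = [(i, 0) for i in range(100)] + [(100 + i, 1) for i in range(10)]
mem = prediction_balanced_reservoir(stream, capacity_per_class=5, num_classes=2)
print(len(mem[0]), len(mem[1]))  # both slots end up equally full
```

Even though class 0 dominates the stream 10:1, the memory holds the two classes in equal proportion, which is what makes adaptation statistics robust to the stream's temporal correlation.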
arXiv Detail & Related papers (2022-08-10T03:05:46Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
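The active sample selection described in the entry above can be sketched as an entropy threshold: only confident (low-entropy) predictions trigger a backward pass, and the rest are skipped. This is a hedged sketch of the selection idea only; the threshold form (a fraction of the maximum entropy log C) is an assumption, and the full method also filters redundant samples and adds a Fisher regularizer, which are omitted here.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_reliable(batch_logits, num_classes, margin_ratio=0.4):
    """Return indices of low-entropy (reliable) samples worth adapting on,
    so backpropagation can be skipped for the rest."""
    threshold = margin_ratio * math.log(num_classes)  # fraction of max entropy
    selected = []
    for i, logits in enumerate(batch_logits):
        if entropy(softmax(logits)) < threshold:
            selected.append(i)
    return selected

batch = [
    [4.0, 0.1, 0.1],   # confident prediction -> adapt on it
    [1.0, 0.9, 1.1],   # near-uniform prediction -> skip
]
print(select_reliable(batch, num_classes=3))  # -> [0]
```

Skipping the high-entropy sample avoids both the wasted backward pass and the noisy gradient it would contribute, which is the efficiency-plus-stability trade the entry describes.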
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.