A Probabilistic Framework for Lifelong Test-Time Adaptation
- URL: http://arxiv.org/abs/2212.09713v2
- Date: Tue, 4 Apr 2023 07:52:40 GMT
- Title: A Probabilistic Framework for Lifelong Test-Time Adaptation
- Authors: Dhanajit Brahma and Piyush Rai
- Abstract summary: Test-time adaptation (TTA) is the problem of updating a pre-trained source model at inference time given test input(s) from a different target domain.
We present PETAL (Probabilistic lifElong Test-time Adaptation with seLf-training prior), which solves lifelong TTA using a probabilistic approach.
Our method achieves better results than the current state-of-the-art for online lifelong test-time adaptation across various benchmarks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Test-time adaptation (TTA) is the problem of updating a pre-trained source
model at inference time given test input(s) from a different target domain.
Most existing TTA approaches assume the setting in which the target domain is
stationary, i.e., all the test inputs come from a single target domain.
However, in many practical settings, the test input distribution might exhibit
a lifelong/continual shift over time. Moreover, existing TTA approaches also
lack the ability to provide reliable uncertainty estimates, which is crucial
when distribution shifts occur between the source and target domain. To address
these issues, we present PETAL (Probabilistic lifElong Test-time Adaptation
with seLf-training prior), which solves lifelong TTA using a probabilistic
approach, and naturally results in (1) a student-teacher framework, where the
teacher model is an exponential moving average of the student model, and (2)
regularization of the model updates at inference time by the frozen source
model. To prevent model drift in the lifelong/continual TTA setting, we
also propose a data-driven parameter restoration technique which contributes to
reducing the error accumulation and maintaining the knowledge of recent domains
by restoring only the irrelevant parameters. In terms of predictive error rate
as well as uncertainty-based metrics such as Brier score and negative
log-likelihood, our method achieves better results than the current
state-of-the-art for online lifelong test-time adaptation across various
benchmarks, such as CIFAR-10C, CIFAR-100C, ImageNetC, and ImageNet3DCC
datasets. The source code for our approach is accessible at
https://github.com/dhanajitb/petal.
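The three ingredients of the abstract, the EMA teacher, the source model acting as a regularizer, and data-driven parameter restoration, can be sketched in plain Python. This is an illustrative sketch with hypothetical function names, treating model parameters as dicts of scalars rather than tensors; it is not the authors' implementation (see the linked repository for that).

```python
def ema_update(teacher, student, alpha=0.999):
    """Teacher as an exponential moving average of the student
    (the student-teacher framework the abstract describes)."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

def regularized_step(student, source, grads, lr=1e-3, lam=0.1):
    """One gradient step on the self-training loss plus a quadratic pull
    toward the frozen source model (source model as regularizer)."""
    return {k: student[k] - lr * (grads[k] + lam * (student[k] - source[k]))
            for k in student}

def restore_irrelevant(student, source, relevance, quantile=0.1):
    """Data-driven parameter restoration (sketch): parameters whose
    relevance score (e.g. a Fisher-information estimate) falls in the
    lowest quantile are reset to source values, limiting error
    accumulation over long domain sequences."""
    scores = sorted(relevance.values())
    cutoff = scores[int(quantile * len(scores))]
    return {k: source[k] if relevance[k] < cutoff else student[k]
            for k in student}
```

In the lifelong setting these three steps would run per test batch: adapt the student, refresh the teacher, and periodically restore low-relevance parameters.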
Related papers
- Mitigating the Bias in the Model for Continual Test-Time Adaptation [32.33057968481597]
Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt a source pre-trained model to continually changing target domains.
We find that the model's predictions become highly biased as it constantly adapts to the changing distribution of the target data.
This paper mitigates this issue to improve performance in the CTA scenario.
arXiv Detail & Related papers (2024-03-02T23:37:16Z)
- Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction [3.5139431332194198]
Test-time adaptation (TTA) continues to update the model after deployment, leveraging the current test data.
We identify and highlight several challenges a self-training based method has to deal with.
To prevent the model from becoming biased, we leverage a dataset- and model-agnostic certainty and diversity weighting.
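One way to realize a certainty and diversity weighting like the one summarized above is to weight each sample's self-training loss by prediction confidence and by dissimilarity to recent predictions. The sketch below is a minimal stand-in under that assumption, not the paper's exact formulation; the scale parameter `e_margin` and the running-mean scheme are assumptions.

```python
import math

def certainty_weight(probs, e_margin):
    """Certainty weighting (sketch): low-entropy predictions get weight
    near 1, high-entropy ones decay toward 0 via exp(-entropy / scale)."""
    entropy = -sum(p * math.log(p + 1e-12) for p in probs)
    return math.exp(-(entropy / e_margin))

def diversity_weight(probs, running_mean):
    """Diversity weighting (sketch): down-weight samples whose prediction
    resembles the running mean of recent predictions, so redundant
    samples contribute less to the update."""
    dot = sum(p * q for p, q in zip(probs, running_mean))
    norm_p = math.sqrt(sum(p * p for p in probs))
    norm_q = math.sqrt(sum(q * q for q in running_mean))
    return 1.0 - dot / (norm_p * norm_q + 1e-12)
```

The per-sample loss weight would then be the product of the two terms.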
arXiv Detail & Related papers (2023-06-01T13:16:10Z)
- Towards Stable Test-Time Adaptation in Dynamic Wild World [60.98073673220025]
Test-time adaptation (TTA) has shown to be effective at tackling distribution shifts between training and testing data by adapting a given model on test samples.
Online model updates in TTA can be unstable, and this is often a key obstacle preventing existing TTA methods from being deployed in the real world.
arXiv Detail & Related papers (2023-02-24T02:03:41Z)
- DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are determined entirely by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
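A common remedy for the first defect, blending the source model's BN statistics with the current test-batch statistics instead of relying on the batch alone, can be sketched as below. This is a generic interpolation sketch, not necessarily DELTA's exact formulation; the mixing coefficient `rho` is an assumed hyperparameter.

```python
def blended_bn_stats(source_mean, source_var, batch_mean, batch_var, rho=0.95):
    """Blend source-domain BN statistics with test-batch statistics,
    so a small or skewed test batch cannot fully dictate normalization."""
    mean = rho * source_mean + (1 - rho) * batch_mean
    var = rho * source_var + (1 - rho) * batch_var
    return mean, var
```

With `rho` close to 1 the normalization stays anchored to the source statistics; `rho = 0` recovers plain test-time BN.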
arXiv Detail & Related papers (2023-01-30T15:54:00Z)
- Robust Mean Teacher for Continual and Gradual Test-Time Adaptation [5.744133015573047]
Gradual test-time adaptation (TTA) considers not only a single domain shift, but a sequence of shifts.
We propose and show that in the setting of TTA, the symmetric cross-entropy is better suited as a consistency loss for mean teachers.
We demonstrate the effectiveness of our proposed method 'robust mean teacher' (RMT) on the continual and gradual corruption benchmarks.
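The symmetric cross-entropy consistency loss proposed above is straightforward to state: cross-entropy between student and teacher predictions, taken in both directions. A minimal sketch over probability vectors (plain Python lists, for illustration):

```python
import math

def cross_entropy(p, q):
    """CE(p, q) = -sum_i p_i * log q_i."""
    return -sum(pi * math.log(qi + 1e-12) for pi, qi in zip(p, q))

def symmetric_cross_entropy(student_probs, teacher_probs):
    """RMT-style consistency loss for a mean teacher: cross-entropy in
    both directions, which the paper argues is better suited than
    plain CE as a consistency loss."""
    return (cross_entropy(teacher_probs, student_probs)
            + cross_entropy(student_probs, teacher_probs))
```

By construction the loss is invariant to swapping the two distributions, unlike plain cross-entropy.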
arXiv Detail & Related papers (2022-11-23T16:14:45Z)
- TeST: Test-time Self-Training under Distribution Shift [99.68465267994783]
Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time.
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
arXiv Detail & Related papers (2022-09-23T07:47:33Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
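The idea of a class-aware alignment loss can be illustrated by pulling each target feature toward the source centroid of its pseudo-labeled class. Note this Euclidean sketch is only a stand-in: CAFA's actual loss is formulated with Mahalanobis distances over source class statistics, and all names here are assumptions.

```python
def class_aware_alignment_loss(features, pseudo_labels, class_means):
    """Class-conditional alignment (sketch): average squared Euclidean
    distance between each target feature and the source-domain mean of
    its pseudo-labeled class."""
    total = 0.0
    for feat, label in zip(features, pseudo_labels):
        mu = class_means[label]
        total += sum((f - m) ** 2 for f, m in zip(feat, mu))
    return total / len(features)
```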
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
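A Fisher regularizer of the kind summarized above follows the EWC pattern: parameters with high estimated Fisher information pay a larger penalty for drifting from their anchor values. A minimal scalar-dict sketch (not this paper's exact implementation; `beta` and the names are assumptions):

```python
def fisher_penalty(params, anchor_params, fisher, beta=1.0):
    """EWC-style Fisher regularizer: weighted quadratic penalty that
    constrains important parameters (high Fisher value) from drastic
    changes relative to their anchor (e.g. source) values."""
    return beta * sum(fisher[k] * (params[k] - anchor_params[k]) ** 2
                      for k in params)
```

This term is added to the adaptation loss, so low-importance parameters remain free to adapt while important ones stay close to the anchor.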
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
CoTTA is easy to implement and can be readily incorporated in off-the-shelf pre-trained models.
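One component behind CoTTA's claimed ease of implementation is stochastic restoration, where each weight is reset to its source value with a small probability after every update. A plain-Python sketch of that idea (the CoTTA paper applies it elementwise to tensors; the dict form and names here are simplifications):

```python
import random

def stochastic_restore(params, source_params, p=0.01, rng=None):
    """Stochastic restoration (sketch): each parameter is independently
    reset to its source value with probability p, bounding long-term
    drift during continual adaptation."""
    rng = rng or random.Random()
    return {k: source_params[k] if rng.random() < p else params[k]
            for k in params}
```

Because restoration is random and rare, the model keeps adapting while retaining a persistent anchor to the source weights.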
arXiv Detail & Related papers (2022-03-25T11:42:02Z)