Mitigating the Bias in the Model for Continual Test-Time Adaptation
- URL: http://arxiv.org/abs/2403.01344v1
- Date: Sat, 2 Mar 2024 23:37:16 GMT
- Title: Mitigating the Bias in the Model for Continual Test-Time Adaptation
- Authors: Inseop Chung, Kyomin Hwang, Jayeon Yoo, Nojun Kwak
- Abstract summary: Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt a source pre-trained model to continually changing target domains.
We find that a model shows highly biased predictions as it constantly adapts to the changing distribution of the target data.
This paper mitigates this issue to improve performance in the CTA scenario.
- Score: 32.33057968481597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt
a source pre-trained model to continually changing target domains. In the CTA
setting, a model does not know when the target domain changes, thus facing a
drastic change in the distribution of streaming inputs during the test-time.
The key challenge is to keep adapting the model to the continually changing
target domains in an online manner. We find that a model shows highly biased
predictions as it constantly adapts to the changing distribution of the target
data. It predicts certain classes more often than other classes, making
inaccurate over-confident predictions. This paper mitigates this issue to
improve performance in the CTA scenario. To alleviate the bias issue, we make
class-wise exponential moving average target prototypes with reliable target
samples and exploit them to cluster the target features class-wisely. Moreover,
we aim to align the target distributions to the source distribution by
anchoring the target feature to its corresponding source prototype. With
extensive experiments, our proposed method achieves noteworthy performance gain
when applied on top of existing CTA methods without substantial adaptation time
overhead.
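The two bias-mitigation components described above can be summarized in a short sketch. The following PyTorch-style code is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the adapting model exposes per-sample features and softmax probabilities, and the class name (PrototypeLosses), the confidence threshold used to pick "reliable" samples, the EMA momentum, and the cosine-distance form of both losses are all illustrative choices.

```python
import torch
import torch.nn.functional as F


class PrototypeLosses:
    """Illustrative class-wise EMA target prototypes plus source-prototype anchoring."""

    def __init__(self, source_prototypes: torch.Tensor, momentum: float = 0.99,
                 conf_thresh: float = 0.9):
        # source_prototypes: (num_classes, feat_dim) per-class mean features from the source model
        self.src_protos = F.normalize(source_prototypes, dim=1)
        self.tgt_protos = self.src_protos.clone()  # EMA target prototypes, initialised from source
        self.m = momentum
        self.conf_thresh = conf_thresh  # assumed definition of a "reliable" sample

    @torch.no_grad()
    def update_target_prototypes(self, features: torch.Tensor, probs: torch.Tensor) -> None:
        """Class-wise EMA update using only confident (reliable) target samples."""
        feats = F.normalize(features, dim=1)
        conf, pseudo = probs.max(dim=1)
        reliable = conf > self.conf_thresh
        for c in pseudo[reliable].unique():
            mean_c = feats[reliable & (pseudo == c)].mean(dim=0)
            self.tgt_protos[c] = F.normalize(
                self.m * self.tgt_protos[c] + (1 - self.m) * mean_c, dim=0)

    def losses(self, features: torch.Tensor, probs: torch.Tensor):
        """Clustering loss toward EMA target prototypes and anchoring loss toward source prototypes."""
        feats = F.normalize(features, dim=1)
        conf, pseudo = probs.max(dim=1)
        reliable = conf > self.conf_thresh
        if not reliable.any():
            zero = features.sum() * 0.0
            return zero, zero
        # pull reliable target features toward their class's EMA target prototype (class-wise clustering)
        cluster = (1 - (feats[reliable] * self.tgt_protos[pseudo[reliable]]).sum(dim=1)).mean()
        # anchor the same features to their corresponding source prototype (align target to source)
        anchor = (1 - (feats[reliable] * self.src_protos[pseudo[reliable]]).sum(dim=1)).mean()
        return cluster, anchor
```

In use, update_target_prototypes would be called on each incoming test batch before the two losses are added to whatever objective the underlying CTA method already optimizes.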
Related papers
- Source-Free Test-Time Adaptation For Online Surface-Defect Detection [29.69030283193086]
We propose a novel test-time adaptation surface-defect detection approach.
It adapts pre-trained models to new domains and classes during inference.
Experiments demonstrate it outperforms state-of-the-art techniques.
arXiv Detail & Related papers (2024-08-18T14:24:05Z)
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- Turn Down the Noise: Leveraging Diffusion Models for Test-time Adaptation via Pseudo-label Ensembling [2.5437028043490084]
The goal of test-time adaptation is to adapt a source-pretrained model to a continuously changing target domain without relying on any source data.
We introduce an approach that leverages a pre-trained diffusion model to project the target domain images closer to the source domain.
arXiv Detail & Related papers (2023-11-29T20:35:32Z)
- Distributionally Robust Post-hoc Classifiers under Prior Shifts [31.237674771958165]
We investigate the problem of training models that are robust to shifts caused by changes in the distribution of class-priors or group-priors.
We present an extremely lightweight post-hoc approach that performs scaling adjustments to predictions from a pre-trained model.
arXiv Detail & Related papers (2023-09-16T00:54:57Z)
- Continual Source-Free Unsupervised Domain Adaptation [37.060694803551534]
Existing Source-free Unsupervised Domain Adaptation approaches exhibit catastrophic forgetting.
We propose a Continual SUDA (C-SUDA) framework to cope with the challenge of SUDA in a continual learning setting.
arXiv Detail & Related papers (2023-04-14T20:11:05Z)
- Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
arXiv Detail & Related papers (2022-08-16T08:03:30Z)
- Improving Test-Time Adaptation via Shift-agnostic Weight Regularization and Nearest Source Prototypes [18.140619966865955]
We propose a novel test-time adaptation strategy that adjusts the model pre-trained on the source domain using only unlabeled online data from the target domain.
We show that our method exhibits state-of-the-art performance on various standard benchmarks and even outperforms its supervised counterpart.
arXiv Detail & Related papers (2022-07-24T10:17:05Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address the shift between training and test distributions by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
CoTTA is easy to implement and can be readily incorporated in off-the-shelf pre-trained models.
arXiv Detail & Related papers (2022-03-25T11:42:02Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold (a brief sketch appears after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- On-target Adaptation [82.77980951331854]
Domain adaptation seeks to mitigate the shift between training on the source domain and testing on the target domain.
Most adaptation methods rely on the source data by joint optimization over source data and target data.
We show significant improvement by on-target adaptation, which learns the representation purely from target data.
arXiv Detail & Related papers (2021-09-02T17:04:18Z)
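Sketch for the Average Thresholded Confidence (ATC) entry above: a minimal NumPy illustration of a threshold-based accuracy estimator consistent with that summary. The quantile-based threshold fit and the function names are assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np


def fit_confidence_threshold(val_confidences: np.ndarray, val_correct: np.ndarray) -> float:
    """Pick a threshold on labeled (source) validation data so that the fraction of
    samples whose confidence exceeds it matches the observed validation accuracy."""
    accuracy = val_correct.mean()
    # the (1 - accuracy)-quantile leaves roughly `accuracy` mass above the threshold
    return float(np.quantile(val_confidences, 1.0 - accuracy))


def predict_target_accuracy(target_confidences: np.ndarray, threshold: float) -> float:
    """Estimate target accuracy as the fraction of unlabeled target samples
    whose confidence exceeds the learned threshold."""
    return float((target_confidences > threshold).mean())


# Usage (confidences are e.g. max softmax probabilities of the pre-trained model):
# t = fit_confidence_threshold(val_conf, val_pred == val_labels)
# estimated_target_acc = predict_target_accuracy(target_conf, t)
```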
This list is automatically generated from the titles and abstracts of the papers on this site.