Turn Down the Noise: Leveraging Diffusion Models for Test-time
Adaptation via Pseudo-label Ensembling
- URL: http://arxiv.org/abs/2311.18071v1
- Date: Wed, 29 Nov 2023 20:35:32 GMT
- Title: Turn Down the Noise: Leveraging Diffusion Models for Test-time
Adaptation via Pseudo-label Ensembling
- Authors: Mrigank Raman, Rohan Shah, Akash Kannan, Pranit Chawla
- Abstract summary: The goal of test-time adaptation is to adapt a source-pretrained model to a continuously changing target domain without relying on any source data.
We introduce an approach that leverages a pre-trained diffusion model to project the target domain images closer to the source domain.
- Score: 2.5437028043490084
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of test-time adaptation is to adapt a source-pretrained model to a
continuously changing target domain without relying on any source data.
Typically, this is either done by updating the parameters of the model (model
adaptation) using inputs from the target domain or by modifying the inputs
themselves (input adaptation). However, methods that modify the model suffer
from compounding noisy updates, whereas methods that modify the input must
adapt to every new data point from scratch while also struggling with certain
domain shifts. We introduce an approach that leverages a
pre-trained diffusion model to project the target domain images closer to the
source domain and iteratively updates the model via pseudo-label ensembling.
Our method combines the advantages of model and input adaptations while
mitigating their shortcomings. Our experiments on CIFAR-10C demonstrate the
superiority of our approach, outperforming the strongest baseline by an average
of 1.7% across 15 diverse corruptions and surpassing the strongest input
adaptation baseline by an average of 18%.
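The loop described above has two coupled parts: a frozen diffusion model that projects each test image back toward the source domain, and classifier updates driven by ensembled pseudo-labels. The PyTorch sketch below shows one plausible reading; the diffusion interface (`add_noise`/`denoise`), the two-view ensemble, and the confidence threshold are illustrative assumptions, not the authors' released code.
```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def project_to_source(diffusion, x, t_star: int = 300):
    """Assumed 'diffuse-then-denoise' projection: add noise up to step t_star,
    then run the reverse process so the result lands closer to the source
    domain while keeping the image's content."""
    x_noisy = diffusion.add_noise(x, t_star)   # forward (noising) pass
    return diffusion.denoise(x_noisy, t_star)  # reverse (denoising) pass


def adapt_step(classifier, optimizer, diffusion, x, conf_threshold: float = 0.8):
    """One test-time step: ensemble predictions on the raw and projected views
    into a pseudo-label, keep only confident ones, and update the classifier.
    The diffusion model stays frozen; only the classifier is trained."""
    x_proj = project_to_source(diffusion, x)
    with torch.no_grad():
        probs = 0.5 * (classifier(x).softmax(-1)
                       + classifier(x_proj).softmax(-1))
        conf, pseudo = probs.max(-1)
        keep = conf >= conf_threshold          # drop low-confidence pseudo-labels
    if keep.any():
        loss = F.cross_entropy(classifier(x_proj[keep]), pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```
Averaging the raw and projected predictions is one way to read "pseudo-label ensembling"; the confidence threshold then limits the compounding-noise failure mode the abstract attributes to pure model adaptation.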
Related papers
- Mitigating the Bias in the Model for Continual Test-Time Adaptation [32.33057968481597]
Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt a source pre-trained model to continually changing target domains.
We find that a model shows highly biased predictions as it constantly adapts to the changing distribution of the target data.
This paper mitigates this issue to improve performance in the CTA scenario.
arXiv Detail & Related papers (2024-03-02T23:37:16Z)
- Target to Source: Guidance-Based Diffusion Model for Test-Time Adaptation [8.695439655048634]
We propose a novel guidance-based diffusion-driven adaptation (GDDA) to overcome the data shift.
GDDA performs significantly better than state-of-the-art baselines.
arXiv Detail & Related papers (2023-12-08T02:31:36Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Prior-guided Source-free Domain Adaptation for Human Pose Estimation [24.50953879583841]
Domain adaptation methods for 2D human pose estimation typically require continuous access to the source data.
We present Prior-guided Self-training (POST), a pseudo-labeling approach that builds on the popular Mean Teacher framework.
arXiv Detail & Related papers (2023-08-26T20:30:04Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method (an illustrative sketch appears at the end of this list).
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Variational Model Perturbation for Source-Free Domain Adaptation [64.98560348412518]
We introduce perturbations into the model parameters by variational Bayesian inference in a probabilistic framework.
We demonstrate the theoretical connection to learning Bayesian neural networks, which proves the generalizability of the perturbed model to target domains.
arXiv Detail & Related papers (2022-10-19T08:41:19Z)
- Improving Test-Time Adaptation via Shift-agnostic Weight Regularization and Nearest Source Prototypes [18.140619966865955]
We propose a novel test-time adaptation strategy that adjusts the model pre-trained on the source domain using only unlabeled online data from the target domain.
We show that our method exhibits state-of-the-art performance on various standard benchmarks and even outperforms its supervised counterpart.
arXiv Detail & Related papers (2022-07-24T10:17:05Z)
- Back to the Source: Diffusion-Driven Test-Time Adaptation [77.4229736436935]
Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data.
We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model.
arXiv Detail & Related papers (2022-07-07T17:14:10Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test-time adaptation; however, each introduces additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable (see the sketch after this list).
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Transformer-Based Source-Free Domain Adaptation [134.67078085569017]
We study the task of source-free domain adaptation (SFDA), where the source data are not available during target adaptation.
We propose a generic and effective framework based on Transformer, named TransDA, for learning a generalized model for SFDA.
arXiv Detail & Related papers (2021-05-28T23:06:26Z)
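A minimal sketch of the MEMO recipe referenced above: average the model's predictions over several stochastic augmentations of one test input, minimize the entropy of that marginal distribution, then predict. The augmentation callable, the number of views, and the optimizer settings are placeholders, not the paper's exact configuration.
```python
import torch


def memo_step(model, optimizer, x, augment, n_aug: int = 8):
    """One MEMO-style adaptation step on a single test image x of shape
    (C, H, W); `augment` is any stochastic image transform, e.g. AugMix."""
    views = torch.stack([augment(x) for _ in range(n_aug)])  # (n_aug, C, H, W)
    marginal = model(views).softmax(dim=-1).mean(dim=0)      # average over views
    entropy = -(marginal * marginal.clamp_min(1e-12).log()).sum()
    optimizer.zero_grad()
    entropy.backward()   # push the marginal toward a confident prediction
    optimizer.step()
    with torch.no_grad():
        return model(x.unsqueeze(0)).argmax(dim=-1)          # adapted prediction
```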
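And for the consistency-regularization entry above, an illustrative objective (not necessarily the paper's own design): predictions on a strongly augmented view of an unlabelled target image are pulled toward the model's own predictions on a weak view, which also regularizes behavior on unseen but identically distributed test data. Both augmentation callables are assumptions.
```python
import torch
import torch.nn.functional as F


def consistency_loss(model, x, weak_aug, strong_aug):
    """KL divergence between the (detached) weak-view prediction and the
    strong-view prediction of the same unlabelled target batch."""
    with torch.no_grad():
        target = model(weak_aug(x)).softmax(dim=-1)         # pseudo-target
    log_pred = F.log_softmax(model(strong_aug(x)), dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```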