Back to the Source: Diffusion-Driven Test-Time Adaptation
- URL: http://arxiv.org/abs/2207.03442v2
- Date: Wed, 21 Jun 2023 16:57:19 GMT
- Title: Back to the Source: Diffusion-Driven Test-Time Adaptation
- Authors: Jin Gao, Jialing Zhang, Xihui Liu, Trevor Darrell, Evan Shelhamer,
Dequan Wang
- Abstract summary: Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data.
We instead update the target data by projecting all test inputs toward the source domain with a generative diffusion model.
- Score: 77.4229736436935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Test-time adaptation harnesses test inputs to improve the accuracy of a model
trained on source data when tested on shifted target data. Existing methods
update the source model by (re-)training on each target domain. While
effective, re-training is sensitive to the amount and order of the data and the
hyperparameters for optimization. We instead update the target data by
projecting all test inputs toward the source domain with a generative diffusion
model. Our diffusion-driven adaptation method, DDA, shares its models for
classification and generation across all domains. Both models are trained on
the source domain, then fixed during testing. We augment diffusion with image
guidance and self-ensembling to automatically decide how much to adapt. Input
adaptation by DDA is more robust than prior model adaptation approaches across
a variety of corruptions, architectures, and data regimes on the ImageNet-C
benchmark. With its input-wise updates, DDA succeeds where model adaptation
degrades on too little data in small batches, dependent data in non-uniform
order, or mixed data with multiple corruptions.
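As a concrete illustration of this recipe, the sketch below projects a test input with a source-trained noise-prediction model and then self-ensembles the source classifier's predictions on the original and projected images. The names (`eps_model`, `classifier`, `alphas_cumprod`), the pooling-based low-frequency guidance, the DDIM-style update, and the plain averaging are assumptions for illustration, not the paper's exact implementation; image sides are assumed divisible by 4.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def dda_adapt(x, eps_model, classifier, alphas_cumprod, t_star=500, w=6.0):
    """Project a target image toward the source domain, then self-ensemble."""
    # 1) Forward-diffuse the input partway using the closed form of q(x_t | x_0).
    a_bar = alphas_cumprod[t_star]
    x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * torch.randn_like(x)

    # 2) Reverse-diffuse with low-frequency image guidance: keep the class
    #    content of the input while letting the model repaint the corruption.
    for t in range(t_star, 0, -1):
        a_bar_t, a_bar_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        eps = eps_model(x_t, torch.full((x.size(0),), t, device=x.device))
        x0_hat = (x_t - (1 - a_bar_t).sqrt() * eps) / a_bar_t.sqrt()
        # Nudge the estimate so its low-pass content matches the input's.
        diff = F.avg_pool2d(x0_hat, 4) - F.avg_pool2d(x, 4)
        x0_hat = x0_hat - w * F.interpolate(diff, scale_factor=4)
        # Deterministic DDIM-style step toward x_{t-1}.
        x_t = a_bar_prev.sqrt() * x0_hat + (1 - a_bar_prev).sqrt() * eps

    # 3) Self-ensemble the frozen source classifier's predictions on the
    #    original and projected inputs (a simple average in place of the
    #    paper's confidence-based choice of how much to adapt).
    return 0.5 * (classifier(x).softmax(-1) + classifier(x_t).softmax(-1))
```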
Related papers
- Distribution Alignment for Fully Test-Time Adaptation with Dynamic Online Data Streams [19.921480334048756]
Test-Time Adaptation (TTA) enables adaptation and inference in test data streams with domain shifts from the source.
We propose a novel Distribution Alignment loss for TTA.
We surpass existing methods in non-i.i.d. scenarios and maintain competitive performance under the ideal i.i.d. assumption.
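The exact alignment loss is not given here; a common way to realize distribution alignment for TTA is to match test-batch feature statistics to source statistics saved from training. A minimal sketch, assuming precomputed per-dimension source moments `mu_src` and `var_src`:

```python
import torch

def distribution_alignment_loss(feats, mu_src, var_src, eps=1e-5):
    """KL divergence between diagonal Gaussians fit to target and source features."""
    mu_t = feats.mean(dim=0)
    var_t = feats.var(dim=0, unbiased=False) + eps
    var_s = var_src + eps
    # Per-dimension KL( N(mu_t, var_t) || N(mu_src, var_s) ), summed over dims.
    kl = 0.5 * (torch.log(var_s / var_t)
                + (var_t + (mu_t - mu_src) ** 2) / var_s - 1.0)
    return kl.sum()
```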
arXiv Detail & Related papers (2024-07-16T19:33:23Z)
- Target to Source: Guidance-Based Diffusion Model for Test-Time Adaptation [8.695439655048634]
We propose a novel guidance-based diffusion-driven adaptation (GDDA) to overcome the data shift.
GDDA performs significantly better than the state-of-the-art baselines.
arXiv Detail & Related papers (2023-12-08T02:31:36Z)
- Turn Down the Noise: Leveraging Diffusion Models for Test-time Adaptation via Pseudo-label Ensembling [2.5437028043490084]
The goal of test-time adaptation is to adapt a source-pretrained model to a continuously changing target domain without relying on any source data.
We introduce an approach that leverages a pre-trained diffusion model to project the target domain images closer to the source domain.
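A minimal sketch of the pseudo-label ensembling idea, assuming a `project` function that maps target images toward the source domain with a pre-trained diffusion model (as in DDA above); the averaging and confidence threshold are illustrative:

```python
import torch
import torch.nn.functional as F

def pseudo_label_step(model, optimizer, x, project, thresh=0.9):
    # Ensemble predictions on the raw and diffusion-projected inputs.
    with torch.no_grad():
        p = 0.5 * (model(x).softmax(-1) + model(project(x)).softmax(-1))
        conf, pseudo = p.max(dim=-1)
        keep = conf > thresh          # train only on confident pseudo-labels
    if not keep.any():                # nothing confident enough this batch
        return 0.0
    loss = F.cross_entropy(model(x[keep]), pseudo[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```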
arXiv Detail & Related papers (2023-11-29T20:35:32Z)
- Domain Alignment Meets Fully Test-Time Adaptation [24.546705919244936]
A foundational requirement of a deployed ML model is to generalize to data drawn from a testing distribution that is different from training.
In this paper, we focus on a challenging variant of this problem, where access to the original source data is restricted.
We propose a new approach, CATTAn, that bridges UDA and FTTA by relaxing the need to access the entire source data.
arXiv Detail & Related papers (2022-07-09T03:17:19Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
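The paper's loss is not reproduced here; one plausible reading of a class-aware alignment objective pulls each test feature toward the source mean of its pseudo-class under a Mahalanobis metric. A hedged sketch, assuming per-class source means `mu_c` and a shared precision matrix `prec` saved from training:

```python
import torch

def class_aware_alignment_loss(feats, logits, mu_c, prec):
    # Assign each test feature to its pseudo-class.
    pseudo = logits.argmax(dim=-1)
    diff = feats - mu_c[pseudo]
    # Mahalanobis distance to the class mean under the source precision matrix.
    maha = torch.einsum("nd,de,ne->n", diff, prec, diff)
    return maha.mean()
```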
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
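A rough sketch of the distribution-estimation idea: fit per-class Gaussians to confidently pseudo-labeled target features as a stand-in for the inaccessible source distribution, then sample surrogate features to align against. SFDA-DE's actual estimator is more involved than this.

```python
import torch

def estimate_and_sample(feats, pseudo, num_classes, n_per_class=16):
    """Fit per-class Gaussians to pseudo-labeled features, sample surrogates."""
    samples, labels = [], []
    for c in range(num_classes):
        fc = feats[pseudo == c]
        if len(fc) < 2:                   # too few samples to estimate spread
            continue
        mu, std = fc.mean(0), fc.std(0) + 1e-4
        samples.append(mu + std * torch.randn(n_per_class, feats.size(1)))
        labels.append(torch.full((n_per_class,), c, dtype=torch.long))
    if not samples:
        return None, None
    return torch.cat(samples), torch.cat(labels)
```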
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- On-the-Fly Test-time Adaptation for Medical Image Segmentation [63.476899335138164]
Adapting the source model to the target data distribution at test time is an efficient solution to the data-shift problem.
We propose a new framework called Adaptive UNet, where each convolutional block is equipped with an adaptive batch normalization layer.
During test time, the model takes in just the new test image and generates a domain code to adapt the features of the source model according to the test data.
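A minimal sketch of such an adaptive batch normalization layer, where a small linear head maps the test-time domain code to per-channel scale and shift; layer names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class AdaptiveBatchNorm2d(nn.Module):
    """Batch norm whose affine parameters are generated from a domain code."""
    def __init__(self, channels, code_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=False)
        self.to_scale_shift = nn.Linear(code_dim, 2 * channels)

    def forward(self, x, domain_code):
        # domain_code: (B, code_dim) -> per-channel gamma, beta: (B, C) each.
        gamma, beta = self.to_scale_shift(domain_code).chunk(2, dim=-1)
        # Broadcast the per-channel modulation over the spatial dimensions.
        return self.bn(x) * (1 + gamma[:, :, None, None]) + beta[:, :, None, None]
```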
arXiv Detail & Related papers (2022-03-10T18:51:29Z)
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
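A two-step sketch in the spirit of Dis-tune: first distill the black-box source model's soft outputs into a local student, then fine-tune the student on the unlabeled target data. Entropy minimization stands in for the paper's fine-tuning objective, and `teacher_probs` is assumed to come from querying the black-box model:

```python
import torch
import torch.nn.functional as F

def distill_step(student, x, teacher_probs, optimizer):
    # KL between the student's log-probs and the black-box teacher's probs.
    loss = F.kl_div(F.log_softmax(student(x), dim=-1),
                    teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def finetune_step(student, x, optimizer):
    # Entropy minimization as a simple unsupervised fine-tuning objective.
    probs = student(x).softmax(-1)
    loss = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```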
arXiv Detail & Related papers (2021-04-04T05:29:05Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and studies how to effectively utilize such a model without source data to solve UDA problems.
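The core mechanism here, as in SHOT-style source hypothesis transfer, is to freeze the source classifier head (the "hypothesis") and adapt only the feature extractor with an information-maximization objective. The sketch assumes the optimizer holds only the backbone's parameters:

```python
import torch

def source_hypothesis_transfer_step(backbone, head, x, optimizer):
    for p in head.parameters():
        p.requires_grad_(False)       # freeze the source "hypothesis" (classifier)
    probs = head(backbone(x)).softmax(-1)
    # Information maximization: confident per-sample predictions...
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
    # ...that stay diverse across the batch (negative marginal entropy).
    mean_p = probs.mean(dim=0)
    div = (mean_p * mean_p.clamp_min(1e-8).log()).sum()
    loss = ent + div
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```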
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.