Diffusion-TTA: Test-time Adaptation of Discriminative Models via
Generative Feedback
- URL: http://arxiv.org/abs/2311.16102v2
- Date: Wed, 29 Nov 2023 20:12:28 GMT
- Title: Diffusion-TTA: Test-time Adaptation of Discriminative Models via
Generative Feedback
- Authors: Mihir Prabhudesai and Tsung-Wei Ke and Alexander C. Li and Deepak
Pathak and Katerina Fragkiadaki
- Abstract summary: Generative models can be great test-time adapters for discriminative models.
Our method, Diffusion-TTA, adapts pre-trained discriminative models to each unlabelled example in the test set.
We show Diffusion-TTA significantly enhances the accuracy of various large-scale pre-trained discriminative models.
- Score: 97.0874638345205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advancements in generative modeling, particularly the advent of diffusion
models, have sparked a fundamental question: how can these models be
effectively used for discriminative tasks? In this work, we find that
generative models can be great test-time adapters for discriminative models.
Our method, Diffusion-TTA, adapts pre-trained discriminative models such as
image classifiers, segmenters and depth predictors, to each unlabelled example
in the test set using generative feedback from a diffusion model. We achieve
this by modulating the conditioning of the diffusion model using the output of
the discriminative model. We then maximize the image likelihood objective by
backpropagating the gradients to the discriminative model's parameters. We show
Diffusion-TTA significantly enhances the accuracy of various large-scale
pre-trained discriminative models, such as ImageNet classifiers, CLIP models,
image pixel labellers and image depth predictors. Diffusion-TTA outperforms
existing test-time adaptation methods, including TTT-MAE and TENT, and
particularly shines in online adaptation setups, where the discriminative model
is continually adapted to each example in the test set. We provide access to
code, results, and visualizations on our website:
https://diffusion-tta.github.io/.
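A minimal sketch of the adaptation loop described above may help make the generative feedback concrete: the classifier's predicted probabilities modulate the diffusion model's conditioning, a denoising loss stands in for the image likelihood, and its gradient flows back into the classifier's parameters. All names here (diffusion_tta_step, class_embeddings, alphas_cumprod, and so on) are illustrative assumptions rather than the authors' released API, and details such as latent-space encoding and per-timestep weighting are omitted; see the project website for the actual implementation.

import torch
import torch.nn.functional as F

def ddpm_noise(x0, noise, t, alphas_cumprod):
    # Standard DDPM forward process: x_t = sqrt(a_t) * x_0 + sqrt(1 - a_t) * eps
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

def diffusion_tta_step(image, classifier, diffusion_unet, class_embeddings,
                       alphas_cumprod, optimizer):
    # One generative-feedback update on a single unlabelled test image.
    optimizer.zero_grad()

    # 1. Discriminative forward pass: per-class probabilities for the image.
    probs = classifier(image).softmax(dim=-1)                # (B, num_classes)

    # 2. Modulate the diffusion conditioning with the prediction:
    #    a probability-weighted mixture of class embeddings.
    cond = probs @ class_embeddings                          # (B, embed_dim)

    # 3. Denoising loss as a surrogate for the image likelihood objective.
    t = torch.randint(0, len(alphas_cumprod), (image.shape[0],),
                      device=image.device)
    noise = torch.randn_like(image)
    noisy = ddpm_noise(image, noise, t, alphas_cumprod)
    pred_noise = diffusion_unet(noisy, t, cond)
    loss = F.mse_loss(pred_noise, noise)

    # 4. Backpropagate through the conditioning into the classifier's
    #    parameters; the diffusion model itself can remain frozen.
    loss.backward()
    optimizer.step()
    return loss.item()

# Online adaptation: only the classifier's parameters are optimized.
# optimizer = torch.optim.SGD(classifier.parameters(), lr=1e-3)
# for _ in range(num_steps):
#     diffusion_tta_step(test_image, classifier, diffusion_unet,
#                        class_embeddings, alphas_cumprod, optimizer)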
Related papers
- Model Integrity when Unlearning with T2I Diffusion Models [11.321968363411145]
We propose approximate Machine Unlearning algorithms to reduce the generation of specific types of images, characterized by samples from a "forget distribution".
We then propose unlearning algorithms that demonstrate superior effectiveness in preserving model integrity compared to existing baselines.
arXiv Detail & Related papers (2024-11-04T13:15:28Z)
- Diffusion Model Driven Test-Time Image Adaptation for Robust Skin Lesion Classification [24.08402880603475]
We propose a test-time image adaptation method to enhance the accuracy of the model on test data.
We modify the target test images by projecting them back to the source domain using a diffusion model.
Our method makes the model more robust across various corruptions, architectures, and data regimes.
arXiv Detail & Related papers (2024-05-18T13:28:51Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- Discriminator Guidance for Autoregressive Diffusion Models [12.139222986297264]
We introduce discriminator guidance in the setting of Autoregressive Diffusion Models.
We derive ways of using a discriminator together with a pretrained generative model in the discrete case.
arXiv Detail & Related papers (2023-10-24T13:14:22Z)
- Diffusion Models Beat GANs on Image Classification [37.70821298392606]
Diffusion models have risen to prominence as a state-of-the-art method for image generation, denoising, inpainting, super-resolution, manipulation, etc.
We present our findings that the embeddings learned by these models are useful beyond the noise prediction task, as they contain discriminative information and can also be leveraged for classification.
We find that with careful feature selection and pooling, diffusion models outperform comparable generative-discriminative methods for classification tasks.
arXiv Detail & Related papers (2023-07-17T17:59:40Z)
- Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners [88.07317175639226]
We propose a novel approach, Discriminative Stable Diffusion (DSD), which turns pre-trained text-to-image diffusion models into few-shot discriminative learners.
Our approach mainly uses the cross-attention score of a Stable Diffusion model to capture the mutual influence between visual and textual information.
arXiv Detail & Related papers (2023-05-18T05:41:36Z)
- Your Diffusion Model is Secretly a Zero-Shot Classifier [90.40799216880342]
We show that density estimates from large-scale text-to-image diffusion models can be leveraged to perform zero-shot classification; a rough sketch of this recipe appears after this list.
Our generative approach to classification attains strong results on a variety of benchmarks.
Our results are a step toward using generative over discriminative models for downstream tasks.
arXiv Detail & Related papers (2023-03-28T17:59:56Z)
- Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC [102.64648158034568]
Diffusion models have quickly become the prevailing approach to generative modeling in many domains.
We propose an energy-based parameterization of diffusion models which enables the use of new compositional operators and MCMC-based samplers.
We find these samplers lead to notable improvements in compositional generation across a wide set of problems.
arXiv Detail & Related papers (2023-02-22T18:48:46Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
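The zero-shot classification recipe referenced above (from "Your Diffusion Model is Secretly a Zero-Shot Classifier") can be sketched in the same spirit: for each candidate class, measure the diffusion model's denoising error under that class's conditioning and predict the class with the lowest error, treating the error as a proxy for the conditional density. The names below (zero_shot_classify, class_conditionings, alphas_cumprod) are placeholders for illustration, not that paper's code, and practical details such as paired-noise variance reduction are omitted.

import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(image, diffusion_unet, class_conditionings,
                       alphas_cumprod, num_samples=32):
    # class_conditionings: one conditioning vector (e.g. a text embedding) per class.
    per_class_errors = []
    for cond in class_conditionings:
        errs = []
        for _ in range(num_samples):
            t = torch.randint(0, len(alphas_cumprod), (image.shape[0],),
                              device=image.device)
            noise = torch.randn_like(image)
            a = alphas_cumprod[t].view(-1, 1, 1, 1)
            noisy = a.sqrt() * image + (1.0 - a).sqrt() * noise
            pred = diffusion_unet(noisy, t, cond.expand(image.shape[0], -1))
            # Per-example denoising error under this class conditioning.
            errs.append(F.mse_loss(pred, noise, reduction="none").mean(dim=(1, 2, 3)))
        per_class_errors.append(torch.stack(errs).mean(dim=0))   # (B,)
    # Lowest average error ~ highest approximate conditional likelihood.
    return torch.stack(per_class_errors, dim=-1).argmin(dim=-1)  # (B,) class indices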
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.