Discovery and Expansion of New Domains within Diffusion Models
- URL: http://arxiv.org/abs/2310.09213v2
- Date: Sun, 26 May 2024 20:17:35 GMT
- Title: Discovery and Expansion of New Domains within Diffusion Models
- Authors: Ye Zhu, Yu Wu, Duo Xu, Zhiwei Deng, Yan Yan, Olga Russakovsky
- Abstract summary: We study the generalization properties of diffusion models in a few-shot setup.
We introduce a novel tuning-free paradigm to synthesize the target out-of-domain data.
- Score: 41.25905891327446
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this work, we study the generalization properties of diffusion models in a few-shot setup, introduce a novel tuning-free paradigm to synthesize the target out-of-domain (OOD) data, and demonstrate its advantages compared to existing methods in data-sparse scenarios with large domain gaps. Specifically, given a pre-trained model and a small set of images that are OOD relative to the model's training distribution, we explore whether the frozen model is able to generalize to this new domain. We begin by revealing that Denoising Diffusion Probabilistic Models (DDPMs) trained on single-domain images are already equipped with sufficient representation abilities to reconstruct arbitrary images from the inverted latent encoding by following bi-directional deterministic diffusion and denoising trajectories. We then demonstrate, from both theoretical and empirical perspectives, that the OOD images establish Gaussian priors in the latent spaces of the given model, and that the inverted latent modes are separable from those of the initial training domain. We then introduce our novel tuning-free paradigm, which synthesizes new images of the target unseen domain by discovering qualified OOD latent encodings in the inverted noisy spaces. This is fundamentally different from the current paradigm, which modifies the denoising trajectory to achieve the same goal by tuning the model parameters. Extensive cross-model and cross-domain experiments show that our proposed method can expand the latent space and generate unseen images via frozen DDPMs without impairing generation quality on their original domain. We also showcase a practical application of our proposed heuristic approach in dramatically different domains using astrophysical data, revealing the great potential of such a generalization paradigm in data-sparse fields such as scientific exploration.
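To make the reconstruction claim concrete, the following is a minimal sketch of the bi-directional deterministic (DDIM-style) diffusion and denoising trajectories described above, using a frozen model. The names `eps_model` (the pretrained noise predictor) and `alpha_bar` (the cumulative noise schedule) are hypothetical placeholders, not the authors' released code.

```python
import torch

@torch.no_grad()
def ddim_step(x, t, t_next, eps_model, alpha_bar):
    """One deterministic (eta = 0) DDIM step from timestep t to t_next.

    The same update runs in both directions: t_next > t inverts an image
    toward its latent encoding, t_next < t denoises back toward an image.
    """
    eps = eps_model(x, t)  # frozen pretrained noise predictor (assumed)
    x0 = (x - (1 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
    return alpha_bar[t_next].sqrt() * x0 + (1 - alpha_bar[t_next]).sqrt() * eps

@torch.no_grad()
def invert_then_reconstruct(image, eps_model, alpha_bar, steps=50):
    """Encode a (possibly OOD) image into latent space and decode it back."""
    ts = torch.linspace(0, len(alpha_bar) - 1, steps).long()
    x = image
    for t, t_next in zip(ts[:-1], ts[1:]):                   # image -> inverted latent
        x = ddim_step(x, t, t_next, eps_model, alpha_bar)
    latent = x
    for t, t_next in zip(ts.flip(0)[:-1], ts.flip(0)[1:]):   # latent -> image
        x = ddim_step(x, t, t_next, eps_model, alpha_bar)
    return latent, x  # x should closely match `image` if the claim holds
```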
Related papers
- Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [88.65168366064061]
We introduce Discrete Denoising Posterior Prediction (DDPP), a novel framework that casts the task of steering pre-trained MDMs as a problem of probabilistic inference.
Our framework leads to a family of three novel objectives that are all simulation-free, and thus scalable.
We substantiate our designs via wet-lab validation, where we observe transient expression of reward-optimized protein sequences.
arXiv Detail & Related papers (2024-10-10T17:18:30Z)
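As a generic illustration of the steering-as-inference framing (not the DDPP algorithm itself), one can reweight samples from a frozen model toward a reward-tilted posterior p(x) proportional to p_pre(x) * exp(r(x)/beta); `sample_model` and `reward` are assumed stand-ins.

```python
import torch

def reward_tilted_samples(sample_model, reward, n=256, k=16, beta=1.0):
    """Approximate the reward-tilted posterior by self-normalized
    importance reweighting of samples from the frozen pretrained model."""
    xs = sample_model(n)                     # n samples from the frozen model (assumed API)
    w = torch.softmax(reward(xs) / beta, 0)  # tilt weights, shape (n,)
    idx = torch.multinomial(w, k, replacement=True)
    return xs[idx]                           # approximate posterior samples
```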
- FDS: Feedback-guided Domain Synthesis with Multi-Source Conditional Diffusion Models for Domain Generalization [19.0284321951354]
Domain Generalization techniques aim to enhance model robustness by simulating novel data distributions during training.
We propose FDS, Feedback-guided Domain Synthesis, a novel strategy that employs diffusion models to synthesize novel pseudo-domains.
Our evaluations demonstrate that this methodology sets new benchmarks in domain generalization performance across a range of challenging datasets.
arXiv Detail & Related papers (2024-07-04T02:45:29Z)
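A hedged sketch of a feedback-guided synthesis loop in the spirit of FDS: generate candidate pseudo-domain images with a conditional diffusion model and keep those the current classifier finds hardest. The generator `gen`, classifier `clf`, and the low-confidence retention rule are illustrative assumptions, not the paper's exact design.

```python
import torch

def synthesize_pseudo_domain(gen, clf, labels, keep_frac=0.25):
    """Keep the generated samples the classifier is least confident on, so
    the synthetic pool complements what the model already handles well."""
    imgs = gen(labels)                                      # conditional samples (assumed API)
    with torch.no_grad():
        probs = clf(imgs).softmax(-1)
        conf = probs.gather(1, labels[:, None]).squeeze(1)  # p(true class)
    k = max(1, int(keep_frac * len(imgs)))
    hard = conf.argsort()[:k]                               # lowest confidence first
    return imgs[hard], labels[hard]
```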
- Source-Free Domain Adaptation with Diffusion-Guided Source Data Generation [6.087274577167399]
This paper introduces a novel approach that leverages the generalizability of diffusion models for Source-Free Domain Adaptation (DM-SFDA).
Our proposed DM-SFDA method involves fine-tuning a pre-trained text-to-image diffusion model to generate source domain images.
We validate our approach through comprehensive experiments across a range of datasets, including Office-31, Office-Home, and VisDA.
arXiv Detail & Related papers (2024-02-07T14:56:13Z)
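One plausible way to realize the generation step, sketched under assumptions (the model id, prompt template, and helper below are illustrative, not the DM-SFDA pipeline): synthesize labeled, source-like images from class-name prompts and hand them to a standard adaptation routine.

```python
from diffusers import StableDiffusionPipeline

# Model id and prompt template are illustrative assumptions.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

def make_pseudo_source(class_names, per_class=4):
    """Synthesize labeled source-like images from class-name prompts."""
    data = []
    for label, name in enumerate(class_names):
        images = pipe([f"a photo of a {name}"] * per_class).images
        data += [(img, label) for img in images]
    return data  # feed into any standard source-free adaptation routine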
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
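The general mechanism of steering a sampler with only the available measurement can be sketched as a data-consistency guidance step, as used broadly by diffusion-based inverse-problem solvers; this is an assumption-laden sketch, not the exact Steerable Conditional Diffusion update. `A` is the forward operator, `y` the measurement, and `eps_model`/`alpha_bar` are hypothetical placeholders as before.

```python
import torch

def guided_denoise_step(x, t, t_next, y, A, eps_model, alpha_bar, step_size=1.0):
    """One deterministic denoising step nudged toward measurement consistency."""
    x = x.detach().requires_grad_(True)
    eps = eps_model(x, t)                    # frozen noise predictor (assumed)
    x0 = (x - (1 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
    loss = ((A(x0) - y) ** 2).sum()          # consistency with the measurement
    grad = torch.autograd.grad(loss, x)[0]
    x_next = alpha_bar[t_next].sqrt() * x0 + (1 - alpha_bar[t_next]).sqrt() * eps
    return (x_next - step_size * grad).detach()
```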
- Cross Contrasting Feature Perturbation for Domain Generalization [11.863319505696184]
Domain generalization aims to learn a robust model from source domains that generalizes well on unseen target domains.
Recent studies focus on generating novel domain samples or features to diversify distributions complementary to source domains.
We propose an online one-stage Cross Contrasting Feature Perturbation framework to simulate domain shift.
arXiv Detail & Related papers (2023-07-24T03:27:41Z)
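A toy sketch of the underlying idea, perturbing intermediate features to simulate domain shift and contrasting the two views, follows; all names and the loss form are illustrative assumptions, not the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def perturb_and_contrast(encoder, head, x, sigma=0.1):
    """Pull representations of a sample and its perturbed 'domain-shifted'
    view together (a toy stand-in for a cross-contrastive objective)."""
    f_clean = encoder(x)
    f_shift = f_clean + sigma * torch.randn_like(f_clean)  # simulated shift
    z1 = F.normalize(head(f_clean), dim=-1)
    z2 = F.normalize(head(f_shift), dim=-1)
    return -(z1 * z2).sum(-1).mean()  # negative cosine similarity loss
```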
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
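The core mechanic can be sketched as an Euler-Maruyama step whose result is mirror-reflected back into the data support (here the unit hypercube), so the process never leaves the domain; `drift` and `sigma` are placeholders, and the actual model's score parameterization is omitted.

```python
import torch

def reflect_unit_interval(x):
    """Fold x back into [0, 1] by mirror reflection at the boundaries."""
    x = torch.remainder(x, 2.0)            # periodize onto [0, 2)
    return torch.where(x > 1.0, 2.0 - x, x)

def reflected_em_step(x, drift, sigma, dt):
    """Euler-Maruyama step of an SDE, reflected so x never leaves [0, 1]^d."""
    noise = torch.randn_like(x) * (sigma * dt ** 0.5)
    return reflect_unit_interval(x + drift(x) * dt + noise)
```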
- ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories [144.03939123870416]
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
arXiv Detail & Related papers (2023-02-05T12:48:21Z)
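A toy version of such a condition-shifted forward process is sketched below; the shift schedule `k` and condition embedding `cond_emb` are assumed placeholders, and the paper's exact shifting rules may differ.

```python
import torch

def shifted_forward_sample(x0, t, cond_emb, alpha_bar, k):
    """q(x_t | x_0, c) = N(sqrt(ab_t) * x_0 + k_t * E(c), (1 - ab_t) I):
    standard DDPM noising plus a per-condition mean shift, so each condition
    follows its own diffusion trajectory."""
    noise = torch.randn_like(x0)
    mean = alpha_bar[t].sqrt() * x0 + k[t] * cond_emb
    return mean + (1 - alpha_bar[t]).sqrt() * noise
```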
- Universal Generative Modeling in Dual-domain for Dynamic MR Imaging [22.915796840971396]
We propose a k-space and image Dual-Domain collaborative Universal Generative Model (DD-UGM) to reconstruct highly under-sampled measurements.
More precisely, we extract prior components from both image and k-space domains via a universal generative model and adaptively handle these prior components for faster processing.
arXiv Detail & Related papers (2022-12-15T03:04:48Z)
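A standard building block such a dual-domain method can alternate with its image-domain prior update is the k-space data-consistency step, sketched here under assumptions (this is not DD-UGM's full algorithm): re-impose the acquired k-space samples on the current image estimate.

```python
import torch

def kspace_data_consistency(x, y, mask):
    """Re-impose acquired k-space samples on the current image estimate.

    x: current image estimate; y: measured k-space; mask: sampling pattern.
    """
    k = torch.fft.fft2(x)
    k = torch.where(mask.bool(), y, k)  # trust measurements where sampled
    return torch.fft.ifft2(k).real
```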
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
- Let us Build Bridges: Understanding and Extending Diffusion Generative Models [19.517597928769042]
Diffusion-based generative models have achieved promising results recently, but raise an array of open questions.
This work re-examines the overall framework to gain a better theoretical understanding.
We present 1) a first theoretical error analysis for learning diffusion generation models, and 2) a simple and unified approach to learning on data from different discrete and constrained domains.
arXiv Detail & Related papers (2022-08-31T08:58:10Z)