Iterative Importance Fine-tuning of Diffusion Models
- URL: http://arxiv.org/abs/2502.04468v1
- Date: Thu, 06 Feb 2025 19:37:15 GMT
- Title: Iterative Importance Fine-tuning of Diffusion Models
- Authors: Alexander Denker, Shreyas Padhy, Francisco Vargas, Johannes Hertrich
- Abstract summary: This work introduces a self-supervised algorithm for fine-tuning diffusion models by estimating the $h$-transform.
We demonstrate the effectiveness of this framework on class-conditional sampling and reward fine-tuning for text-to-image diffusion models.
- Score: 44.67216380548778
- Abstract: Diffusion models are an important tool for generative modelling, serving as effective priors in applications such as imaging and protein design. A key challenge in applying diffusion models for downstream tasks is efficiently sampling from resulting posterior distributions, which can be addressed using the $h$-transform. This work introduces a self-supervised algorithm for fine-tuning diffusion models by estimating the $h$-transform, enabling amortised conditional sampling. Our method iteratively refines the $h$-transform using a synthetic dataset resampled with path-based importance weights. We demonstrate the effectiveness of this framework on class-conditional sampling and reward fine-tuning for text-to-image diffusion models.
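The core loop described in the abstract, resampling a synthetic dataset in proportion to path-based importance weights before each fine-tuning round, can be illustrated with a minimal self-normalised resampling step. This is a hedged sketch, not the authors' implementation: the 1-D samples, the quadratic log-reward, and the name `importance_resample` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_resample(samples, log_weights, rng):
    """Resample a synthetic dataset in proportion to its importance weights."""
    # Normalise in log-space first for numerical stability.
    w = np.exp(log_weights - log_weights.max())
    p = w / w.sum()
    idx = rng.choice(len(samples), size=len(samples), replace=True, p=p)
    return samples[idx]

# Toy illustration: 1-D "prior samples" weighted by a log-reward peaked at 2.
samples = rng.normal(0.0, 1.0, size=5000)
log_w = -(samples - 2.0) ** 2  # hypothetical log reward, used as log-weight
resampled = importance_resample(samples, log_w, rng)
# Analytically, N(0,1) reweighted by exp(-(x-2)^2) gives a posterior N(4/3, 1/3),
# so the resampled mean shifts from ~0 toward 4/3.
```

In the full iterative scheme, the resampled dataset would then be used to fine-tune the model (i.e. refine the estimated $h$-transform), and the loop repeats with fresh samples from the updated model.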
Related papers
- Generative diffusion model with inverse renormalization group flows [0.0]
Diffusion models produce data by denoising a sample corrupted by white noise.
We introduce a renormalization group-based diffusion model that leverages multiscale nature of data distributions.
We validate the versatility of the model through applications to protein structure prediction and image generation.
arXiv Detail & Related papers (2025-01-15T19:00:01Z)
- Accelerated Diffusion Models via Speculative Sampling [89.43940130493233]
Speculative sampling is a popular technique for accelerating inference in Large Language Models.
We extend speculative sampling to diffusion models, which generate samples via continuous, vector-valued Markov chains.
We propose various drafting strategies, including a simple and effective approach that does not require training a draft model.
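The draft-and-verify mechanic behind speculative sampling can be sketched for Gaussian transition kernels. This is a simplified illustration, not the paper's algorithm: the function name `speculative_step` and the contractive toy transitions are assumptions, and a faithful implementation would draw the rejection fallback from the residual distribution rather than the full target.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_logpdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2.0 * np.pi))

def speculative_step(x, target_mean, draft_mean, std, rng):
    """Propose from a cheap draft transition, then verify against the target.

    Accept with probability min(1, p_target / p_draft); on rejection, fall
    back to the target transition (a simplification of the exact scheme,
    which resamples from the residual distribution).
    """
    proposal = rng.normal(draft_mean(x), std)
    log_ratio = (gauss_logpdf(proposal, target_mean(x), std)
                 - gauss_logpdf(proposal, draft_mean(x), std))
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        return proposal
    return rng.normal(target_mean(x), std)

# Toy chain: contractive Gaussian transitions standing in for denoising steps.
x = 0.0
for _ in range(100):
    x = speculative_step(x, lambda y: 0.9 * y, lambda y: 0.8 * y, 1.0, rng)
```

The appeal is that the draft model is cheap to evaluate, so many proposal steps can be generated ahead of time and only verified, rather than generated, by the expensive target model.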
arXiv Detail & Related papers (2025-01-09T16:50:16Z)
- Non-Normal Diffusion Models [3.5534933448684134]
Diffusion models generate samples by incrementally reversing a process that turns data into noise.
We show that when the step size goes to zero, the reversed process is invariant to the distribution of these increments.
We demonstrate the effectiveness of these models on density estimation and generative modeling tasks on standard image datasets.
arXiv Detail & Related papers (2024-12-10T21:31:12Z)
- Efficient Diversity-Preserving Diffusion Alignment via Gradient-Informed GFlowNets [65.42834731617226]
Nabla-GFlowNet is the first GFlowNet method that leverages the rich signal in reward gradients.
We show that our proposed method achieves fast yet diversity- and prior-preserving alignment of Stable Diffusion.
arXiv Detail & Related papers (2024-12-10T18:59:58Z)
- Informed Correctors for Discrete Diffusion Models [32.87362154118195]
We propose a family of informed correctors that more reliably counteracts discretization error by leveraging information learned by the model.
We also propose $k$-Gillespie's, a sampling algorithm that better utilizes each model evaluation, while still enjoying the speed and flexibility of $\tau$-leaping.
Across several real and synthetic datasets, we show that $k$-Gillespie's with informed correctors reliably produces higher quality samples at lower computational cost.
arXiv Detail & Related papers (2024-07-30T23:29:29Z)
- Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control [54.132297393662654]
Diffusion models excel at capturing complex data distributions, such as those of natural images and proteins.
While diffusion models are trained to represent the distribution in the training dataset, we often are more concerned with other properties, such as the aesthetic quality of the generated images.
We present theoretical and empirical evidence that demonstrates our framework is capable of efficiently generating diverse samples with high genuine rewards.
arXiv Detail & Related papers (2024-02-23T08:54:42Z)
- Towards Controllable Diffusion Models via Reward-Guided Exploration [15.857464051475294]
We propose a novel framework that guides the training phase of diffusion models via reinforcement learning (RL).
RL enables calculating policy gradients via samples from a pay-off distribution proportional to exponentially scaled rewards, rather than from the policies themselves.
Experiments on 3D shape and molecule generation tasks show significant improvements over existing conditional diffusion models.
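The pay-off distribution mentioned above, with probability mass proportional to exponentially scaled rewards, is a softmax over rewards with an inverse-temperature parameter. A minimal sketch of the weighting (the name `payoff_weights` and the toy rewards are illustrative assumptions, not from the paper):

```python
import numpy as np

def payoff_weights(rewards, beta):
    """Self-normalised weights of a pay-off distribution proportional to exp(beta * reward)."""
    z = beta * np.asarray(rewards, dtype=float)
    w = np.exp(z - z.max())  # subtract the max for numerical stability
    return w / w.sum()

# Raising beta concentrates probability mass on high-reward samples.
rewards = [0.1, 0.5, 1.0, 2.0]
print(np.round(payoff_weights(rewards, 0.0), 3))  # beta=0: uniform [0.25 0.25 0.25 0.25]
print(np.round(payoff_weights(rewards, 5.0), 3))  # mass concentrates on the last sample
```

Sampling from such a distribution lets the policy gradient be estimated from high-reward samples without differentiating through the policy itself.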
arXiv Detail & Related papers (2023-04-14T13:51:26Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
Current best practice advocates for a large diffusion time $T$ to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution.
We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
arXiv Detail & Related papers (2022-06-10T15:09:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.