Conditional sampling within generative diffusion models
- URL: http://arxiv.org/abs/2409.09650v1
- Date: Sun, 15 Sep 2024 07:48:40 GMT
- Title: Conditional sampling within generative diffusion models
- Authors: Zheng Zhao, Ziwei Luo, Jens Sjölund, Thomas B. Schön
- Abstract summary: We present a review of existing computational approaches to conditional sampling within generative diffusion models.
We highlight key methodologies that either utilise the joint distribution, or rely on (pre-trained) marginal distributions with explicit likelihoods.
- Score: 12.608803080528142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative diffusions are a powerful class of Monte Carlo samplers that leverage bridging Markov processes to approximate complex, high-dimensional distributions, such as those found in image processing and language models. Despite their success in these domains, an important open challenge remains: extending these techniques to sample from conditional distributions, as required in, for example, Bayesian inverse problems. In this paper, we present a comprehensive review of existing computational approaches to conditional sampling within generative diffusion models. Specifically, we highlight key methodologies that either utilise the joint distribution, or rely on (pre-trained) marginal distributions with explicit likelihoods, to construct conditional generative samplers.
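The guided-sampling construction surveyed here can be checked end to end on a linear-Gaussian toy problem where every quantity is available in closed form. The sketch below is a standard construction, not code from the paper; the likelihood term uses a variance-aware (Pi-GDM-style) correction, and the schedule and observation model are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian inverse problem: prior x ~ N(0, 1), observation y = x + noise,
# noise ~ N(0, sigma_y^2). The exact posterior is N(y/(1 + sigma_y^2),
# sigma_y^2/(1 + sigma_y^2)), which lets us check the guided sampler.
sigma_y, y = 0.5, 1.0

# Variance-preserving forward process, discretised over T steps.
T = 500
betas = np.linspace(1e-4, 2e-2, T)
abar = np.cumprod(1.0 - betas)

def cond_score(x, t):
    # Prior marginal score (exact here: the noisy marginal stays N(0, 1))
    # plus a likelihood guidance term that accounts for the remaining
    # denoising variance 1 - abar[t].
    s_prior = -x
    s_lik = np.sqrt(abar[t]) * (y - np.sqrt(abar[t]) * x) / (sigma_y**2 + 1.0 - abar[t])
    return s_prior + s_lik

x = rng.standard_normal(5000)  # start from the reference Gaussian
for t in range(T - 1, -1, -1):
    z = rng.standard_normal(5000) if t > 0 else 0.0
    # DDPM-style ancestral step driven by the conditional score.
    x = (x + betas[t] * cond_score(x, t)) / np.sqrt(1.0 - betas[t]) + np.sqrt(betas[t]) * z

post_mean = y / (1.0 + sigma_y**2)          # exact posterior mean, 0.8
post_var = sigma_y**2 / (1.0 + sigma_y**2)  # exact posterior variance, 0.2
```

In this linear-Gaussian case the guided score equals the exact conditional score, so the empirical mean and variance of `x` should match the analytic posterior up to discretisation and Monte Carlo error.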
Related papers
- Discrete Feynman-Kac Correctors [47.62319930071118]
We propose a framework that allows for controlling the generated distribution of discrete masked diffusion models at inference time.
We derive Sequential Monte Carlo (SMC) algorithms that, given a trained discrete diffusion model, control the temperature of the sampled distribution.
We illustrate the utility of our framework in several applications, including efficient sampling from the Boltzmann distribution of the Ising model, improving the performance of language models for code generation and amortized learning, and reward-tilted protein sequence generation.
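The temperature-control idea can be illustrated, in heavily simplified form, by a single importance-sampling-plus-resampling step on a toy categorical "model". The paper derives full SMC algorithms over diffusion trajectories; the 4-symbol distribution below is a stand-in, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Base model over 4 symbols; the goal is to sample from the temperature-
# tilted distribution p^(1/T) / Z without retraining the model.
p = np.array([0.1, 0.2, 0.3, 0.4])
inv_temp = 2.0  # 1/T
target = p**inv_temp / (p**inv_temp).sum()

N = 20000
particles = rng.choice(4, size=N, p=p)            # propose from the base model
logw = (inv_temp - 1.0) * np.log(p[particles])    # tilting weights p^(1/T - 1)
w = np.exp(logw - logw.max())
w /= w.sum()
resampled = rng.choice(particles, size=N, p=w)    # multinomial resampling
freq = np.bincount(resampled, minlength=4) / N
```

After resampling, the empirical frequencies should approximate the tilted target; full SMC interleaves such corrections with the model's transition steps.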
arXiv Detail & Related papers (2026-01-15T13:55:38Z)
- Bridging Diffusion Posterior Sampling and Monte Carlo methods: a survey [36.0938529672647]
Diffusion models have demonstrated significant potential for solving Bayesian inverse problems by serving as priors.
This review offers a comprehensive overview of current methods that leverage pre-trained diffusion models alongside Monte Carlo methods.
We show that these methods employ a twisting mechanism for the intermediate distributions within the diffusion process, guiding the simulations toward the posterior distribution.
arXiv Detail & Related papers (2025-10-15T21:36:51Z)
- Reverse Markov Learning: Multi-Step Generative Models for Complex Distributions [10.165179181394755]
We extend engression to improve its capability in learning complex distributions.
We propose a framework that defines a general forward process transitioning from the target distribution to a known distribution.
The corresponding reverse process then reconstructs the target distribution step by step.
arXiv Detail & Related papers (2025-02-19T14:10:15Z)
- Debiasing Guidance for Discrete Diffusion with Sequential Monte Carlo [10.948453531321032]
We introduce a Sequential Monte Carlo algorithm that generates unbiased samples from a target distribution.
We validate our approach on low-dimensional distributions, controlled image generation, and text generation.
arXiv Detail & Related papers (2025-02-10T00:27:54Z)
- Accelerated Diffusion Models via Speculative Sampling [89.43940130493233]
Speculative sampling is a popular technique for accelerating inference in Large Language Models.
We extend speculative sampling to diffusion models, which generate samples via continuous, vector-valued Markov chains.
We propose various drafting strategies, including a simple and effective approach that does not require training a draft model.
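The accept/reject core of speculative sampling is easiest to see on categorical distributions, as in the original language-model setting; the paper's contribution is extending it to the continuous Markov chains of diffusion samplers. The draft/target pair below is an illustrative toy choice.

```python
import numpy as np

rng = np.random.default_rng(2)

p = np.array([0.5, 0.3, 0.2])  # expensive target distribution
q = np.array([0.2, 0.3, 0.5])  # cheap draft distribution

def spec_sample(rng):
    x = rng.choice(3, p=q)                    # draft proposes
    if rng.random() < min(1.0, p[x] / q[x]):  # target verifies
        return x
    resid = np.maximum(p - q, 0.0)            # residual correction distribution
    return rng.choice(3, p=resid / resid.sum())

samples = np.array([spec_sample(rng) for _ in range(20000)])
freq = np.bincount(samples, minlength=3) / len(samples)
```

The accept/correct rule makes the output distribution exactly `p` regardless of the draft, which is what licenses using a cheap draft model for acceleration.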
arXiv Detail & Related papers (2025-01-09T16:50:16Z)
- Learned Reference-based Diffusion Sampling for multi-modal distributions [2.1383136715042417]
We introduce Learned Reference-based Diffusion Sampler (LRDS), a methodology specifically designed to leverage prior knowledge on the location of the target modes.
LRDS proceeds in two steps, first learning a reference diffusion model on samples located in high-density regions of the space.
We experimentally demonstrate that LRDS best exploits prior knowledge on the target distribution compared to competing algorithms on a variety of challenging distributions.
arXiv Detail & Related papers (2024-10-25T10:23:34Z)
- Unified Convergence Analysis for Score-Based Diffusion Models with Deterministic Samplers [49.1574468325115]
We introduce a unified convergence analysis framework for deterministic samplers.
Our framework achieves an iteration complexity of $\tilde{O}(d^2/\epsilon)$.
We also provide a detailed analysis of Denoising Diffusion Implicit Models (DDIM)-type samplers.
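A DDIM-type deterministic sampler can be sanity-checked on a toy target whose score is known exactly. This is a generic sketch, unrelated to the paper's proofs; the Gaussian target N(2, 1) and the noise schedule are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# VP forward process; for target x0 ~ N(2, 1) the noisy marginal at step t
# is N(2*sqrt(abar_t), 1), so the exact score is available in closed form.
T = 500
betas = np.linspace(1e-4, 2e-2, T)
abar = np.cumprod(1.0 - betas)

def score(x, t):
    return -(x - 2.0 * np.sqrt(abar[t]))  # exact marginal score

# Start from the exact noisy marginal at the final step.
x = 2.0 * np.sqrt(abar[-1]) + rng.standard_normal(10000)
for t in range(T - 1, 0, -1):
    # Tweedie denoising estimate and the implied noise estimate.
    x0_hat = (x + (1.0 - abar[t]) * score(x, t)) / np.sqrt(abar[t])
    eps_hat = (x - np.sqrt(abar[t]) * x0_hat) / np.sqrt(1.0 - abar[t])
    # Deterministic DDIM update: no noise is injected.
    x = np.sqrt(abar[t - 1]) * x0_hat + np.sqrt(1.0 - abar[t - 1]) * eps_hat
```

With the exact score, the deterministic trajectory transports the reference Gaussian onto the target up to a small discretisation error, which is the kind of error the paper's iteration-complexity bounds quantify.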
arXiv Detail & Related papers (2024-10-18T07:37:36Z)
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory [87.00653989457834]
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning.
Despite the empirical success, theory of conditional diffusion models is largely missing.
This paper bridges the gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models.
arXiv Detail & Related papers (2024-03-18T17:08:24Z)
- An Efficient Quasi-Random Sampling for Copulas [3.400056739248712]
This paper proposes the use of generative models, such as Generative Adversarial Networks (GANs), to generate quasi-random samples for any copula.
GANs are a type of implicit generative models used to learn the distribution of complex data, thus facilitating easy sampling.
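The underlying recipe, pushing a low-discrepancy sequence through a transform onto the copula, can be shown with an analytic Clayton copula; the paper's point is to replace such analytic transforms with a learned generator when none is tractable. The Halton bases and the parameter theta = 2 below are arbitrary illustrative choices.

```python
import numpy as np

def halton(n, base):
    """First n points of the Halton (van der Corput) sequence in a given base."""
    seq = np.empty(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

theta = 2.0  # Clayton dependence parameter (Kendall's tau = theta/(theta+2) = 0.5)
n = 2000
v1, v2 = halton(n, 2), halton(n, 3)  # low-discrepancy driving uniforms
u1 = v1
# Conditional-inverse transform of the Clayton copula applied to (v1, v2).
u2 = (u1**(-theta) * (v2**(-theta / (1.0 + theta)) - 1.0) + 1.0)**(-1.0 / theta)
```

The pairs `(u1, u2)` have uniform margins and Clayton dependence while inheriting the low-discrepancy structure of the Halton inputs, which is exactly the property a learned generator would aim to preserve for general copulas.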
arXiv Detail & Related papers (2024-03-08T13:01:09Z)
- Gradient-Free Score-Based Sampling Methods with Ensembles [0.0]
We introduce ensembles within score-based sampling methods to develop gradient-free approximate sampling techniques.
We demonstrate the efficacy of the ensemble strategies through various examples.
Our findings highlight the potential of ensemble strategies for modeling complex probability distributions.
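One generic way to obtain a gradient-free score, in the spirit of ensemble approaches (not necessarily the paper's exact estimator), is to estimate the score of a Gaussian-smoothed target from density evaluations alone and plug it into Langevin dynamics. The target, smoothing scale, and step size below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def density(x):
    return np.exp(-0.5 * (x - 3.0)**2)  # unnormalised target, N(3, 1)

# Score of the smoothed target p_sigma = p * N(0, sigma^2):
#   grad log p_sigma(x) = E[w(z) z] / sigma^2,  w(z) proportional to p(x + z),
# which needs only density evaluations, never gradients.
sigma, K, h = 0.3, 64, 0.05
x = np.zeros(300)  # ensemble of particles, all started at 0
for _ in range(1200):
    z = sigma * rng.standard_normal((K, x.size))
    w = density(x[None, :] + z)
    w /= w.sum(axis=0, keepdims=True)
    score_hat = (w * z).sum(axis=0) / sigma**2  # gradient-free score estimate
    # Unadjusted Langevin step with the plug-in score.
    x = x + 0.5 * h * score_hat + np.sqrt(h) * rng.standard_normal(x.size)
```

The chains equilibrate near the (slightly smoothed) target, so the ensemble mean and spread should be close to 3 and 1.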
arXiv Detail & Related papers (2024-01-31T01:51:29Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Sampling, Diffusions, and Stochastic Localization [10.368585938419619]
Diffusions are a successful technique to sample from high-dimensional distributions.
Stochastic localization is a technique to prove mixing of Markov chains and other functional inequalities in high dimension.
An algorithmic version of localization was introduced in [EAMS2022] to obtain an algorithm that samples from certain statistical mechanics models.
arXiv Detail & Related papers (2023-05-18T04:01:40Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over distributions' properties, such as parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Sampling from Discrete Energy-Based Models with Quality/Efficiency Trade-offs [3.491202838583993]
Energy-Based Models (EBMs) allow for extremely flexible specifications of probability distributions.
However, they do not provide a mechanism for obtaining exact samples from these distributions.
We propose a new approximate sampling technique, Quasi Rejection Sampling (QRS), that allows for a trade-off between sampling efficiency and sampling quality.
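The QRS acceptance rule is simple enough to state in a few lines. The categorical toy problem below (our choice, not the paper's benchmark) shows the trade-off: lowering beta raises the acceptance rate but biases the samples, while beta at or above max(p/q) recovers exact rejection sampling.

```python
import numpy as np

rng = np.random.default_rng(6)

p = np.array([0.6, 0.3, 0.1])      # target distribution
q = np.array([1/3, 1/3, 1/3])      # proposal distribution

def qrs(beta, n):
    # Accept x ~ q with probability min(1, p(x) / (beta * q(x))).
    xs = rng.choice(3, size=n, p=q)
    acc = rng.random(n) < np.minimum(1.0, p[xs] / (beta * q[xs]))
    return xs[acc]

exact = qrs(beta=1.8, n=60000)            # beta >= max(p/q) = 1.8: unbiased
freq = np.bincount(exact, minlength=3) / len(exact)
rate_high_quality = len(exact) / 60000    # lower acceptance, exact samples
rate_low_quality = len(qrs(0.9, 60000)) / 60000  # higher acceptance, biased
```

At `beta = 1.8` the accepted samples follow `p` exactly at the cost of a lower acceptance rate; shrinking `beta` raises throughput while distorting the output distribution, which is the quality/efficiency dial QRS exposes.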
arXiv Detail & Related papers (2021-12-10T17:51:37Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
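Decoupled sampling rests on Matheron's rule: a posterior sample equals a prior sample plus a data-driven correction. The dense-matrix sketch below checks the sample mean against the analytic GP posterior mean; the kernel, data, and jitter are illustrative, and the paper's efficiency comes from replacing the exact prior draws with cheap approximate prior paths.

```python
import numpy as np

rng = np.random.default_rng(7)

def rbf(a, b, ls=0.7):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

X = np.array([-1.0, 0.0, 1.0])          # training inputs
y = np.array([0.5, -0.2, 0.3])          # noisy observations
Xs = np.array([-1.5, -0.5, 0.5, 1.5])   # test inputs
s2 = 1e-2                               # observation noise variance

A = rbf(X, X) + s2 * np.eye(3)
Ksx = rbf(Xs, X)

# Joint prior samples at the train and test locations.
n = 40000
allpts = np.concatenate([X, Xs])
L = np.linalg.cholesky(rbf(allpts, allpts) + 1e-6 * np.eye(7))
f = L @ rng.standard_normal((7, n))
eps = np.sqrt(s2) * rng.standard_normal((3, n))

# Matheron's rule: posterior sample = prior sample + pathwise update.
post = f[3:] + Ksx @ np.linalg.solve(A, y[:, None] - f[:3] - eps)

mean_exact = Ksx @ np.linalg.solve(A, y)  # standard GP posterior mean
```

Averaging the corrected paths recovers the usual posterior mean, while each individual path is a coherent posterior function sample, which is what makes the decoupled representation useful downstream.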
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.