Reflected Schrödinger Bridge for Constrained Generative Modeling
- URL: http://arxiv.org/abs/2401.03228v1
- Date: Sat, 6 Jan 2024 14:39:58 GMT
- Title: Reflected Schrödinger Bridge for Constrained Generative Modeling
- Authors: Wei Deng, Yu Chen, Nicole Tianjiao Yang, Hengrong Du, Qi Feng, Ricky
T. Q. Chen
- Abstract summary: Diffusion models have become the go-to method for large-scale generative models in real-world applications.
We introduce the Reflected Schrödinger Bridge algorithm: an entropy-regularized optimal transport approach tailored for generating data within diverse bounded domains.
Our algorithm yields robust generative modeling in diverse domains, and its scalability is demonstrated in real-world constrained generative modeling through standard image benchmarks.
- Score: 16.72888494254555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have become the go-to method for large-scale generative
models in real-world applications. These applications often involve data
distributions confined within bounded domains, typically requiring ad-hoc
thresholding techniques for boundary enforcement. Reflected diffusion models
(Lou23) aim to enhance generalizability by generating the data distribution
through a backward process governed by reflected Brownian motion. However,
reflected diffusion models may not easily adapt to diverse domains without the
derivation of proper diffeomorphic mappings and do not guarantee optimal
transport properties. To overcome these limitations, we introduce the Reflected
Schrödinger Bridge algorithm: an entropy-regularized optimal transport approach
tailored for generating data within diverse bounded domains. We derive elegant
reflected forward-backward stochastic differential equations with Neumann and
Robin boundary conditions, extend divergence-based likelihood training to
bounded domains, and explore natural connections to entropic optimal transport
for the study of approximate linear convergence - a valuable insight for
practical training. Our algorithm yields robust generative modeling in diverse
domains, and its scalability is demonstrated in real-world constrained
generative modeling through standard image benchmarks.
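The reflected dynamics at the heart of this approach can be illustrated with a small sketch. The following is not the authors' implementation but a minimal Euler–Maruyama simulation of Brownian motion reflected into the unit hypercube [0, 1]^d, where any overshoot past a boundary is folded back into the domain; the function names, the choice of domain, and the folding scheme are all illustrative assumptions.

```python
import numpy as np

def reflect(x):
    # Fold coordinates back into [0, 1]: reduce modulo 2 (one full
    # out-and-back period), then mirror anything that lands in (1, 2].
    x = np.mod(x, 2.0)
    return np.where(x > 1.0, 2.0 - x, x)

def simulate_reflected_bm(x0, n_steps=1000, dt=1e-3, seed=0):
    """Euler-Maruyama steps of Brownian motion reflected into [0, 1]^d.

    x0 is assumed to lie inside the domain; each Gaussian increment is
    applied and the result is reflected back through the boundary.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = reflect(x + np.sqrt(dt) * rng.standard_normal(x.shape))
    return x
```

The fold-based reflection is one standard way to discretise reflected Brownian motion on a box; the paper's actual construction works with reflected forward-backward SDEs and Neumann/Robin boundary conditions rather than this simple clamping of the forward process.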
Related papers
- Boundless Across Domains: A New Paradigm of Adaptive Feature and Cross-Attention for Domain Generalization in Medical Image Segmentation [1.93061220186624]
Domain-invariant representation learning is a powerful method for domain generalization.
Previous approaches face challenges such as high computational demands, training instability, and limited effectiveness with high-dimensional data.
We propose an Adaptive Feature Blending (AFB) method that generates out-of-distribution samples while exploring the in-distribution space.
arXiv Detail & Related papers (2024-11-22T12:06:24Z)
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future.
arXiv Detail & Related papers (2024-08-15T16:47:59Z)
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory [87.00653989457834]
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning.
Despite the empirical success, theory of conditional diffusion models is largely missing.
This paper bridges the gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models.
arXiv Detail & Related papers (2024-03-18T17:08:24Z)
- Metropolis Sampling for Constrained Diffusion Models [11.488860260925504]
Denoising diffusion models have recently emerged as the predominant paradigm for generative modelling on image domains.
We introduce an alternative, simple discretisation scheme based on the reflected Brownian motion.
arXiv Detail & Related papers (2023-07-11T17:05:23Z)
- A prior regularized full waveform inversion using generative diffusion models [0.5156484100374059]
Full waveform inversion (FWI) has the potential to provide high-resolution subsurface model estimations.
Due to limitations in observation, e.g., regional noise, limited shots or receivers, and band-limited data, it is hard to obtain the desired high-resolution model with FWI.
We propose a new paradigm for FWI regularized by generative diffusion models.
arXiv Detail & Related papers (2023-06-22T10:10:34Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Infinite-Dimensional Diffusion Models [4.342241136871849]
We formulate diffusion-based generative models in infinite dimensions and apply them to the generative modeling of functions.
We show that our formulations are well posed in the infinite-dimensional setting and provide dimension-independent distance bounds from the sample to the target measure.
We also develop guidelines for the design of infinite-dimensional diffusion models.
arXiv Detail & Related papers (2023-02-20T18:00:38Z)
- Let us Build Bridges: Understanding and Extending Diffusion Generative Models [19.517597928769042]
Diffusion-based generative models have achieved promising results recently, but raise an array of open questions.
This work re-examines the overall framework to gain a better theoretical understanding.
We present 1) a first theoretical error analysis for learning diffusion generation models, and 2) a simple and unified approach to learning on data from different discrete and constrained domains.
arXiv Detail & Related papers (2022-08-31T08:58:10Z)
- Super-model ecosystem: A domain-adaptation perspective [101.76769818069072]
This paper attempts to establish the theoretical foundation for the emerging super-model paradigm via domain adaptation.
Super-model paradigms help reduce computational and data costs and carbon emissions, which is critical to the AI industry.
arXiv Detail & Related papers (2022-08-30T09:09:43Z)
- Haar Wavelet based Block Autoregressive Flows for Trajectories [129.37479472754083]
Prediction of trajectories such as that of pedestrians is crucial to the performance of autonomous agents.
We introduce a novel Haar wavelet based block autoregressive model leveraging split couplings.
We illustrate the advantages of our approach for generating diverse and accurate trajectories on two real-world datasets.
arXiv Detail & Related papers (2020-09-21T13:57:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.