Mirror Diffusion Models for Constrained and Watermarked Generation
- URL: http://arxiv.org/abs/2310.01236v2
- Date: Thu, 29 Feb 2024 06:07:33 GMT
- Title: Mirror Diffusion Models for Constrained and Watermarked Generation
- Authors: Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou, Molei Tao
- Abstract summary: Mirror Diffusion Models (MDM) are a new class of diffusion models that generate data on convex constrained sets without losing tractability.
For safety and privacy purposes, we also explore constrained sets as a new mechanism to embed invisible but quantitative information in generated data.
Our work brings new algorithmic opportunities for learning tractable diffusion on complex domains.
- Score: 41.27274841596343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern successes of diffusion models in learning complex, high-dimensional
data distributions are attributed, in part, to their capability to construct
diffusion processes with analytic transition kernels and score functions. The
tractability results in a simulation-free framework with stable regression
losses, from which reversed, generative processes can be learned at scale.
However, when data is confined to a constrained set as opposed to a standard
Euclidean space, these desirable characteristics appear to be lost based on
prior attempts. In this work, we propose Mirror Diffusion Models (MDM), a new
class of diffusion models that generate data on convex constrained sets without
losing any tractability. This is achieved by learning diffusion processes in a
dual space constructed from a mirror map, which, crucially, is a standard
Euclidean space. We derive efficient computation of mirror maps for popular
constrained sets, such as simplices and $\ell_2$-balls, showing significantly
improved performance of MDM over existing methods. For safety and privacy
purposes, we also explore constrained sets as a new mechanism to embed
invisible but quantitative information (i.e., watermarks) in generated data,
for which MDM serves as a compelling approach. Our work brings new algorithmic
opportunities for learning tractable diffusion on complex domains. Our code is
available at https://github.com/ghliu/mdm
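As a concrete illustration of the mirror-map mechanism described above, below is a minimal NumPy sketch of two textbook mirror maps with closed-form inverses: the entropic map for the probability simplex and a log-barrier map for the open unit $\ell_2$-ball. These are standard constructions from the mirror-descent literature, shown only to make the constrained-to-Euclidean change of variables concrete; the exact maps implemented in the MDM codebase may differ.

```python
import numpy as np

def mirror_map_simplex(x):
    # Entropic mirror map grad(phi)(x) = log(x) + 1 for phi(x) = sum_i x_i log x_i,
    # sending the open probability simplex into the dual (Euclidean) space.
    return np.log(x) + 1.0

def inverse_mirror_map_simplex(y):
    # Inverse map: softmax, which is shift-invariant and therefore inverts
    # grad(phi) up to the constant offset along the all-ones direction.
    z = np.exp(y - y.max())
    return z / z.sum()

def mirror_map_ball(x):
    # Log-barrier mirror map grad(phi)(x) = 2x / (1 - ||x||^2)
    # for phi(x) = -log(1 - ||x||^2) on the open unit l2-ball.
    return 2.0 * x / (1.0 - x @ x)

def inverse_mirror_map_ball(y):
    # Closed-form inverse: solve 2s / (1 - s^2) = ||y|| for s = ||x||,
    # then rescale y back onto the primal ball.
    t = np.linalg.norm(y)
    if t < 1e-12:
        return 0.5 * y  # limit of s / ||y|| as ||y|| -> 0
    s = (np.sqrt(1.0 + t * t) - 1.0) / t
    return (s / t) * y

# Round-trip sanity checks: primal -> dual -> primal recovers the point.
rng = np.random.default_rng(0)
x_simplex = rng.dirichlet(np.ones(5))
assert np.allclose(x_simplex, inverse_mirror_map_simplex(mirror_map_simplex(x_simplex)))

x_ball = rng.normal(size=3)
x_ball *= 0.8 / np.linalg.norm(x_ball)  # place strictly inside the unit ball
assert np.allclose(x_ball, inverse_mirror_map_ball(mirror_map_ball(x_ball)))
```

Because the dual space is an unconstrained Euclidean space, a standard diffusion model with analytic transition kernels can be trained there, and generated samples are pulled back onto the constraint set through the inverse map.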
Related papers
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards.
arXiv Detail & Related papers (2024-08-15T16:47:59Z) - Neural Approximate Mirror Maps for Constrained Diffusion Models [6.776705170481944]
Diffusion models excel at creating visually convincing images, but they often struggle to meet subtle constraints inherent in the training data.
We propose neural approximate mirror maps (NAMMs) for general constraints.
A generative model, such as an MDM, can then be trained in the learned mirror space and its samples restored to the constraint set by the inverse map.
arXiv Detail & Related papers (2024-06-18T17:36:09Z) - Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x} \sim p^{\mathrm{post}}(\mathbf{x}) \propto p(\mathbf{x})\, r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or function $r(\mathbf{x})$.
We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior.
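Schematically, relative trajectory balance penalizes the squared log-ratio between the fine-tuned sampler's trajectory likelihood (scaled by a learned normalizing constant) and the prior's trajectory likelihood reweighted by the reward. A hedged sketch of the loss form, with notation assumed rather than taken from the paper:

$$
\mathcal{L}_{\mathrm{RTB}}(\tau) = \left( \log \frac{Z_\theta \, q_\theta(\tau)}{p(\tau) \, r(\mathbf{x}_0)} \right)^2,
$$

where $\tau$ is a denoising trajectory terminating at sample $\mathbf{x}_0$, $q_\theta$ is the trained sampler, $p$ is the diffusion prior, and $Z_\theta$ is a learned scalar estimating the normalizing constant of $p^{\mathrm{post}}$.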
arXiv Detail & Related papers (2024-05-31T16:18:46Z) - Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced to image deblurring and have exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Stochastic Mirror Descent in Average Ensemble Models [38.38572705720122]
Stochastic mirror descent (SMD) is a general class of training algorithms that includes the celebrated stochastic gradient descent (SGD) as a special case.
In this paper we explore the performance of SMD on mean-field ensemble models.
arXiv Detail & Related papers (2022-10-27T11:04:00Z) - Autoregressive Diffusion Models [34.125045462636386]
We introduce Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models.
ARDMs are simple to implement and easy to train, and can be trained using an efficient objective similar to modern probabilistic diffusion models.
We show that ARDMs obtain compelling results not only on complete datasets, but also on compressing single data points.
arXiv Detail & Related papers (2021-10-05T13:36:55Z) - Structured Denoising Diffusion Models in Discrete State-Spaces [15.488176444698404]
We introduce Discrete Denoising Diffusion Probabilistic Models (D3PMs) for discrete data.
The choice of transition matrix is an important design decision that leads to improved results in image and text domains.
For text, this model class achieves strong results on character-level text generation while scaling to large vocabularies on LM1B.
arXiv Detail & Related papers (2021-07-07T04:11:00Z)