Complex Preferences for Different Convergent Priors in Discrete Graph
Diffusion
- URL: http://arxiv.org/abs/2306.02957v2
- Date: Wed, 21 Jun 2023 23:09:58 GMT
- Title: Complex Preferences for Different Convergent Priors in Discrete Graph
Diffusion
- Authors: Alex M. Tseng, Nathaniel Diamant, Tommaso Biancalani, Gabriele Scalia
- Abstract summary: We develop a novel formulation of a family of discrete diffusion kernels which are easily adjustable to converge to different Bernoulli priors.
We show that the quality of generated graphs is sensitive to the prior used, and that the optimal choice cannot be explained by obvious statistics or metrics.
- Score: 0.8602553195689513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have achieved state-of-the-art performance in generating
many different kinds of data, including images, text, and videos. Despite their
success, there has been limited research on how the underlying diffusion
process and the final convergent prior can affect generative performance; this
research has also been limited to continuous data types and a score-based
diffusion framework. To fill this gap, we explore how different discrete
diffusion kernels (which converge to different prior distributions) affect the
performance of diffusion models for graphs. To this end, we developed a novel
formulation of a family of discrete diffusion kernels which are easily
adjustable to converge to different Bernoulli priors, and we study the effect
of these different kernels on generative performance. We show that the quality
of generated graphs is sensitive to the prior used, and that the optimal choice
cannot be explained by obvious statistics or metrics, which challenges the
intuitions which previous works have suggested.
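To make the Bernoulli-convergent setup concrete, the sketch below shows a generic D3PM-style per-edge transition kernel that, at each step, resamples a binary edge variable from Bernoulli(p) with probability beta_t; the rows of the cumulative kernel converge to (1 - p, p), i.e. a Bernoulli(p) prior over each edge. This is an illustrative construction with a hypothetical noise schedule, not the paper's exact kernel family.

```python
import numpy as np

def binary_transition_matrix(beta_t: float, p: float) -> np.ndarray:
    """One-step kernel for a single binary edge variable: with probability
    beta_t the edge is resampled from Bernoulli(p), otherwise it keeps its
    current state. Rows index the current state (0 = absent, 1 = present),
    columns the next state."""
    resample = np.array([[1.0 - p, p],
                         [1.0 - p, p]])
    return (1.0 - beta_t) * np.eye(2) + beta_t * resample

def cumulative_kernel(betas, p: float) -> np.ndarray:
    """Q_bar_T = Q_1 @ Q_2 @ ... @ Q_T; as the product of the (1 - beta_t)
    factors vanishes, every row approaches (1 - p, p), i.e. Bernoulli(p)."""
    q_bar = np.eye(2)
    for beta_t in betas:
        q_bar = q_bar @ binary_transition_matrix(beta_t, p)
    return q_bar

if __name__ == "__main__":
    betas = np.linspace(0.01, 0.5, 200)        # hypothetical noise schedule
    for p in (0.1, 0.5, 0.9):                  # different target edge densities
        print(p, cumulative_kernel(betas, p))  # rows approach (1 - p, p)
```

Sweeping p in such a construction yields a family of convergent Bernoulli priors, which is the kind of adjustability whose effect on generation quality the paper studies.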
Related papers
- Diffusion Attribution Score: Evaluating Training Data Influence in Diffusion Model [22.39558434131574]
Existing data attribution methods for diffusion models typically quantify the contribution of a training sample.
We argue that directly using the diffusion loss cannot represent such a contribution accurately, because of how the diffusion loss is calculated.
Instead, we aim to compare predicted distributions directly, using an attribution score to analyse training-sample importance.
arXiv Detail & Related papers (2024-10-24T10:58:17Z) - Text-to-Image Rectified Flow as Plug-and-Play Priors [52.586838532560755]
Rectified flow is a novel class of generative models that enforces a linear progression from the source to the target distribution (a generic formulation is sketched after this list).
We show that rectified-flow approaches deliver superior generation quality and efficiency, requiring fewer inference steps.
Our method also displays competitive performance in image inversion and editing.
arXiv Detail & Related papers (2024-06-05T14:02:31Z) - Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x} \sim p^{\rm post}(\mathbf{x}) \propto p(\mathbf{x})\,r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or function $r(\mathbf{x})$.
We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior.
arXiv Detail & Related papers (2024-05-31T16:18:46Z) - Multiple-Source Localization from a Single-Snapshot Observation Using Graph Bayesian Optimization [10.011338977476804]
Multi-source localization from a single-snapshot observation is especially relevant due to its prevalence.
Current methods typically rely on greedy selection and are usually tied to a single diffusion model.
We propose BOSouL, a simulation-based method built on graph Bayesian optimization, chosen for its sample efficiency.
arXiv Detail & Related papers (2024-03-25T14:46:24Z) - Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian
Mixture Models [59.331993845831946]
Diffusion models benefit from instillation of task-specific information into the score function to steer the sample generation towards desired properties.
This paper provides the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models.
arXiv Detail & Related papers (2024-03-03T23:15:48Z) - Diffusion-based Graph Generative Methods [51.04666253001781]
We systematically and comprehensively review diffusion-based graph generative methods.
We first review three mainstream paradigms of diffusion methods: denoising diffusion models, score-based generative models, and stochastic differential equations.
Finally, we point out some limitations of current studies and directions for future exploration.
arXiv Detail & Related papers (2024-01-28T10:09:05Z) - Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z) - Directional diffusion models for graph representation learning [9.457273750874357]
We propose a new class of models called directional diffusion models.
These models incorporate data-dependent, anisotropic, and directional noises in the forward diffusion process.
We conduct extensive experiments on 12 publicly available datasets, focusing on two distinct graph representation learning tasks.
arXiv Detail & Related papers (2023-06-22T21:27:48Z) - Diffusion Models are Minimax Optimal Distribution Estimators [49.47503258639454]
We provide the first rigorous analysis on approximation and generalization abilities of diffusion modeling.
We show that when the true density function belongs to the Besov space and the empirical score matching loss is properly minimized, the generated data distribution achieves the nearly minimax optimal estimation rates.
arXiv Detail & Related papers (2023-03-03T11:31:55Z)
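As a brief aside on the rectified-flow entry above: in the standard, paper-agnostic formulation, a sample $x_t$ is a linear interpolation between a source sample $x_0$ and a target sample $x_1$, and a velocity field is regressed onto the constant displacement, which is what "linear progression from the source to the target distribution" refers to:
$$
x_t = (1 - t)\,x_0 + t\,x_1, \qquad
\min_{v}\;\mathbb{E}_{t,\,x_0,\,x_1}\big\|\,v(x_t, t) - (x_1 - x_0)\,\big\|^2 .
$$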