Bayesian Flow Networks
- URL: http://arxiv.org/abs/2308.07037v5
- Date: Sat, 3 Feb 2024 20:22:57 GMT
- Title: Bayesian Flow Networks
- Authors: Alex Graves, Rupesh Kumar Srivastava, Timothy Atkinson, Faustino Gomez
- Abstract summary: This paper introduces Bayesian Flow Networks (BFNs), a new class of generative model in which the parameters of a set of independent distributions are modified with Bayesian inference.
Starting from a simple prior and iteratively updating the two distributions yields a generative procedure similar to the reverse process of diffusion models.
BFNs achieve competitive log-likelihoods for image modelling on dynamically binarized MNIST and CIFAR-10, and outperform all known discrete diffusion models on the text8 character-level language modelling task.
- Score: 4.585102332532472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces Bayesian Flow Networks (BFNs), a new class of
generative model in which the parameters of a set of independent distributions
are modified with Bayesian inference in the light of noisy data samples, then
passed as input to a neural network that outputs a second, interdependent
distribution. Starting from a simple prior and iteratively updating the two
distributions yields a generative procedure similar to the reverse process of
diffusion models; however it is conceptually simpler in that no forward process
is required. Discrete and continuous-time loss functions are derived for
continuous, discretised and discrete data, along with sample generation
procedures. Notably, the network inputs for discrete data lie on the
probability simplex, and are therefore natively differentiable, paving the way
for gradient-based sample guidance and few-step generation in discrete domains
such as language modelling. The loss function directly optimises data
compression and places no restrictions on the network architecture. In our
experiments BFNs achieve competitive log-likelihoods for image modelling on
dynamically binarized MNIST and CIFAR-10, and outperform all known discrete
diffusion models on the text8 character-level language modelling task.
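To make the generative procedure above concrete, here is a minimal sketch of a BFN-style sampling loop for continuous data. The `network` callable, the step count, and the accuracy schedule are placeholders assumed for illustration; the conjugate Gaussian update of the input-distribution parameters mirrors the Bayesian-inference step described in the abstract, but this is not the authors' reference implementation.

```python
import numpy as np

def sample_bfn_continuous(network, dim, n_steps=20, sigma_1=0.02, rng=None):
    """Illustrative BFN-style sampler for continuous data (simplified sketch).

    `network(mu, t)` stands in for the trained neural network: it maps the means
    of the independent input distributions and a time in [0, 1] to a predicted
    data vector of shape (dim,).
    """
    rng = np.random.default_rng() if rng is None else rng
    mu = np.zeros(dim)   # mean of the simple (standard normal) prior
    rho = 1.0            # precision of the prior

    for i in range(1, n_steps + 1):
        t = (i - 1) / n_steps
        x_hat = network(mu, t)  # mean of the interdependent output distribution
        # Accuracy added at this step; this particular schedule is an assumption
        # made for the sketch, controlled by the final noise level sigma_1.
        alpha = sigma_1 ** (-2 * i / n_steps) * (1 - sigma_1 ** (2 / n_steps))
        # Noisy observation of the prediction, followed by a conjugate Gaussian
        # (Bayesian) update of the independent input-distribution parameters.
        y = x_hat + rng.normal(size=dim) / np.sqrt(alpha)
        mu = (rho * mu + alpha * y) / (rho + alpha)
        rho = rho + alpha

    return network(mu, 1.0)  # final prediction taken as the sample
```

Calling `sample_bfn_continuous(lambda mu, t: mu, dim=4)` with a trivial placeholder network exercises the loop end to end; a trained network would be substituted in practice.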
Related papers
- Constrained Diffusion Models via Dual Training [80.03953599062365]
Diffusion processes are prone to generating samples that reflect biases in a training dataset.
We develop constrained diffusion models by imposing diffusion constraints based on desired distributions.
We show that our constrained diffusion models generate new data from a mixture data distribution that achieves the optimal trade-off between the objective and the constraints.
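As a rough illustration of what "dual training" for constraints usually involves (a generic Lagrangian primal-dual loop, not necessarily this paper's algorithm), the sketch below alternates a gradient step on the penalized objective with a projected ascent step on the multiplier:

```python
def constrained_training(loss_grad, constraint, constraint_grad, theta,
                         n_steps=1000, lr_theta=1e-3, lr_lam=1e-2):
    """Generic primal-dual loop for: minimize loss(theta) s.t. constraint(theta) <= 0.

    All arguments are hypothetical callables/arrays supplied by the user; this is
    an illustrative pattern, not the cited paper's exact procedure.
    """
    lam = 0.0
    for _ in range(n_steps):
        # Primal step: gradient descent on the Lagrangian loss + lam * constraint.
        theta = theta - lr_theta * (loss_grad(theta) + lam * constraint_grad(theta))
        # Dual step: gradient ascent on the multiplier, projected to stay non-negative.
        lam = max(0.0, lam + lr_lam * constraint(theta))
    return theta, lam
```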
arXiv Detail & Related papers (2024-08-27T14:25:42Z) - Discrete Flow Matching [74.04153927689313]
We present a novel discrete flow paradigm designed specifically for generating discrete data.
Our approach is capable of generating high-quality discrete data in a non-autoregressive fashion.
arXiv Detail & Related papers (2024-07-22T12:33:27Z) - On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
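For orientation, the differential equations referred to here are typically probability-flow ODEs that deterministically transport samples between the tractable prior and the data distribution. The following Euler integrator of a VP-style probability-flow ODE, with a placeholder score model and an assumed linear beta schedule, is a generic sketch of such a sampler rather than the paper's method:

```python
import numpy as np

def probability_flow_ode_sample(score_fn, dim, n_steps=100,
                                beta_min=0.1, beta_max=20.0, rng=None):
    """Euler integration of a VP-type probability-flow ODE (illustrative sketch).

    `score_fn(x, t)` is a placeholder for a learned score model approximating
    grad_x log p_t(x).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=dim)      # sample from the tractable prior at t = 1
    dt = 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = i / n_steps
        beta_t = beta_min + t * (beta_max - beta_min)
        drift = -0.5 * beta_t * (x + score_fn(x, t))  # deterministic ODE drift
        x = x - drift * dt                            # step backwards in time toward t = 0
    return x
```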
arXiv Detail & Related papers (2024-05-18T15:59:41Z) - Uncertainty quantification and out-of-distribution detection using surjective normalizing flows [46.51077762143714]
We propose a simple approach using surjective normalizing flows to identify out-of-distribution data sets in deep neural network models.
We show that our method can reliably discern out-of-distribution data from in-distribution data.
arXiv Detail & Related papers (2023-11-01T09:08:35Z) - Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose in this paper to approximate the joint posterior over the structure of a Bayesian network together with the parameters of its conditional distributions.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z) - The Score-Difference Flow for Implicit Generative Modeling [1.309716118537215]
Implicit generative modeling (IGM) aims to produce samples of synthetic data matching a target data distribution.
Recent work has approached the IGM problem from the perspective of pushing synthetic source data toward the target distribution.
We present the score difference between arbitrary target and source distributions as a flow that optimally reduces the Kullback-Leibler divergence between them.
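In other words, the flow's velocity field is the difference of the two score functions. A minimal discretization, assuming both scores are available in closed form (in the actual method the source score must track the evolving sample distribution; it is frozen here purely for illustration), might look like this:

```python
import numpy as np

def score_difference_flow(x0, target_score, source_score, n_steps=500, step_size=1e-2):
    """Move samples along the target-minus-source score difference.

    Illustrative only: `target_score` and `source_score` are assumed closed-form
    score functions supplied by the caller.
    """
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        velocity = target_score(x) - source_score(x)  # score-difference velocity field
        x = x + step_size * velocity
    return x

# Toy example: push standard-normal samples toward N(3, 1).
target_score = lambda x: -(x - 3.0)  # score of N(3, 1)
source_score = lambda x: -x          # score of N(0, 1)
samples = score_difference_flow(np.random.randn(1000), target_score, source_score)
```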
arXiv Detail & Related papers (2023-04-25T15:21:12Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
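To unpack the "Markov jump process" ingredient: a continuous-time Markov chain on a finite state space waits an exponentially distributed holding time in each state and then jumps according to a rate matrix. The Gillespie-style simulator below illustrates that mechanism generically; it is not the paper's specific forward or reverse process.

```python
import numpy as np

def simulate_ctmc(rate_matrix, x0, t_end, rng=None):
    """Simulate a continuous-time Markov chain up to time `t_end`.

    `rate_matrix[i, j]` (i != j) is the jump rate from state i to state j;
    diagonal entries are negative row sums, so each row sums to zero.
    """
    rng = np.random.default_rng() if rng is None else rng
    Q = np.asarray(rate_matrix, dtype=float)
    x, t = x0, 0.0
    while True:
        total_rate = -Q[x, x]                    # total rate of leaving state x
        if total_rate <= 0:
            return x                             # absorbing state: no further jumps
        t += rng.exponential(1.0 / total_rate)   # exponential holding time
        if t >= t_end:
            return x
        probs = Q[x].copy()
        probs[x] = 0.0
        x = int(rng.choice(len(probs), p=probs / total_rate))  # jump to the next state

# Example: uniform corruption over K = 4 categories at unit rate.
K = 4
Q = np.full((K, K), 1.0 / (K - 1))
np.fill_diagonal(Q, -1.0)
final_state = simulate_ctmc(Q, x0=0, t_end=2.0)
```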
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - From Points to Functions: Infinite-dimensional Representations in Diffusion Models [23.916417852496608]
Diffusion-based generative models learn to iteratively transfer unstructured noise to a complex target distribution.
We show that a combination of information content from different time steps gives a strictly better representation for the downstream task.
arXiv Detail & Related papers (2022-10-25T05:30:53Z) - Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs).
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z) - Variational Mixture of Normalizing Flows [0.0]
Deep generative models, such as generative adversarial networks (GANs), variational autoencoders (VAEs), and their variants, have seen wide adoption for the task of modelling complex data distributions.
Normalizing flows address a key limitation of such models by leveraging the change-of-variables formula for probability density functions.
The present work overcomes this by using normalizing flows as components in a mixture model and devising an end-to-end training procedure for such a model.
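The change-of-variables formula and its use inside a mixture can be summarized in a few lines. The sketch below uses hypothetical 1-D affine flows with a standard-normal base purely as an illustration of the pattern, not the paper's architecture or training procedure:

```python
import numpy as np

def flow_log_density(x, forward, log_abs_det_jacobian, base_log_prob):
    """Change of variables: log p(x) = log p_base(f(x)) + log |det df/dx|."""
    return base_log_prob(forward(x)) + log_abs_det_jacobian(x)

def mixture_of_flows_log_likelihood(x, weights, flows):
    """log p(x) = logsumexp_k [ log pi_k + log p_k(x) ] over flow components."""
    comps = np.array([np.log(w) + flow_log_density(x, *flow)
                      for w, flow in zip(weights, flows)])
    m = comps.max(axis=0)
    return m + np.log(np.exp(comps - m).sum(axis=0))  # stable log-sum-exp

# Example: two 1-D affine flows z = (x - mu) / s with a standard-normal base.
std_normal_logpdf = lambda z: -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)
make_affine = lambda mu, s: (lambda x: (x - mu) / s,                   # forward map f
                             lambda x: -np.log(s) * np.ones_like(x),   # log |det df/dx|
                             std_normal_logpdf)
flows = [make_affine(-2.0, 1.0), make_affine(2.0, 0.5)]
ll = mixture_of_flows_log_likelihood(np.array([0.0, 2.0]),
                                     weights=[0.5, 0.5], flows=flows)
```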
arXiv Detail & Related papers (2020-09-01T17:20:08Z)