On Maximum Likelihood Training of Score-Based Generative Models
- URL: http://arxiv.org/abs/2101.09258v1
- Date: Fri, 22 Jan 2021 18:22:29 GMT
- Title: On Maximum Likelihood Training of Score-Based Generative Models
- Authors: Conor Durkan and Yang Song
- Abstract summary: We show that the weighted score matching objective is equivalent to maximum likelihood for certain choices of mixture weighting.
We show that both maximum likelihood training and test-time log-likelihood evaluation can be achieved through parameterization of the score function alone.
- Score: 17.05208572228308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Score-based generative modeling has recently emerged as a promising
alternative to traditional likelihood-based or implicit approaches. Learning in
score-based models involves first perturbing data with a continuous-time
stochastic process, and then matching the time-dependent gradient of the
logarithm of the noisy data density - or score function - using a continuous
mixture of score matching losses. In this note, we show that such an objective
is equivalent to maximum likelihood for certain choices of mixture weighting.
This connection provides a principled way to weight the objective function, and
justifies its use for comparing different score-based generative models. Taken
together with previous work, our result reveals that both maximum likelihood
training and test-time log-likelihood evaluation can be achieved through
parameterization of the score function alone, without the need to explicitly
parameterize a density function.
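As a sketch of this result (notation assumed here, not quoted from the paper): for a forward SDE dx = f(x, t) dt + g(t) dw with marginal densities p_t, the training objective is the continuous mixture of score matching losses

    J_\lambda(\theta) = \frac{1}{2} \int_0^T \lambda(t)\,
        \mathbb{E}_{p_t(\mathbf{x})}\!\left[ \big\| \mathbf{s}_\theta(\mathbf{x}, t)
        - \nabla_{\mathbf{x}} \log p_t(\mathbf{x}) \big\|_2^2 \right] \mathrm{d}t,

and the likelihood weighting \lambda(t) = g(t)^2 makes J_\lambda an upper bound on the model's expected negative log-likelihood, up to a constant independent of \theta, so minimizing it amounts to (approximate) maximum likelihood training. Other weightings still define valid score matching objectives, but do not yield a likelihood bound.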
Related papers
- Leveraging Uncertainty Estimates To Improve Classifier Performance [4.4951754159063295]
Binary classification involves predicting the label of an instance according to whether the model score for the positive class exceeds a threshold chosen to meet application requirements.
However, model scores are often not aligned with the true positivity rate.
This is especially true when training involves differential sampling across classes or when there is distributional drift between train and test settings; a sketch of the thresholding setup appears below.
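As a hedged illustration of that setup (a generic sketch, not this paper's method; pick_threshold and target_precision are hypothetical names):

    import numpy as np

    def pick_threshold(scores, labels, target_precision=0.9):
        """Illustrative sketch: choose a decision threshold on held-out
        data so that validation precision meets a target, rather than
        trusting raw (possibly miscalibrated) model scores."""
        order = np.argsort(-scores)              # sort scores descending
        sorted_labels = labels[order]
        tp = np.cumsum(sorted_labels)            # true positives at each cut
        precision = tp / np.arange(1, len(labels) + 1)
        ok = np.where(precision >= target_precision)[0]
        if len(ok) == 0:
            return np.inf                        # no threshold meets the target
        return scores[order][ok[-1]]             # deepest cut meeting the target

    # usage: thr = pick_threshold(val_scores, val_labels); predict score >= thr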
arXiv Detail & Related papers (2023-11-20T12:40:25Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
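A minimal sketch of the NCE logistic loss described above, assuming an unnormalized model whose log-density includes a learnable negative log-partition term (all names here are illustrative):

    import math
    import torch.nn.functional as F

    def nce_loss(log_phi_data, log_phi_noise, log_q_data, log_q_noise, nu=1.0):
        """Illustrative sketch: logistic loss classifying real data against
        artificial noise. log_phi_*: unnormalized model log-density (with a
        learnable -log Z folded in); log_q_*: noise log-density; nu: ratio
        of noise samples to data samples."""
        # log-odds that a point was drawn from data rather than noise
        logit_data = log_phi_data - log_q_data - math.log(nu)
        logit_noise = log_phi_noise - log_q_noise - math.log(nu)
        # -log sigmoid(u) = softplus(-u); -log(1 - sigmoid(u)) = softplus(u)
        return F.softplus(-logit_data).mean() + nu * F.softplus(logit_noise).mean()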
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Stochastic Interpolants: A Unifying Framework for Flows and Diffusions [16.95541777254722]
A class of generative models that unifies flow-based and diffusion-based methods is introduced.
These models extend the framework proposed in Albergo & Vanden-Eijnden (2023), enabling the use of a broad class of continuous-time processes called 'stochastic interpolants'.
These interpolants are built by combining data from the two prescribed densities with an additional latent variable that shapes the bridge in a flexible way.
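A sketch of one such interpolant, under assumed notation (a linear bridge; \gamma is any smooth function vanishing at the endpoints):

    x_t = (1 - t)\, x_0 + t\, x_1 + \gamma(t)\, z, \qquad
    z \sim \mathcal{N}(0, I), \quad \gamma(0) = \gamma(1) = 0,

so x_t coincides with a sample from each prescribed density at t = 0 and t = 1, while the latent variable z shapes the bridge in between.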
arXiv Detail & Related papers (2023-03-15T17:43:42Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
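As a hedged toy sketch of such a forward corruption process (a uniform-rate jump process on a finite alphabet; the paper's exact rate structure may differ):

    import numpy as np

    def jump_forward(x0, rate, t_max, num_states, rng=None):
        """Illustrative sketch: corrupt a discrete sample with a
        continuous-time Markov jump process. Wait an Exp(rate) time,
        then resample the state uniformly, until time t_max."""
        if rng is None:
            rng = np.random.default_rng()
        x, t = x0, 0.0
        while True:
            t += rng.exponential(1.0 / rate)   # time until the next jump
            if t > t_max:
                return x                       # no further jumps before t_max
            x = rng.integers(num_states)       # jump to a uniformly random state

    # usage: x_t = jump_forward(x0=3, rate=1.0, t_max=0.5, num_states=10)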
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Concrete Score Matching: Generalized Score Matching for Discrete Data [109.12439278055213]
"Concrete score" is a generalization of the (Stein) score for discrete settings.
"Concrete Score Matching" is a framework to learn such scores from samples.
arXiv Detail & Related papers (2022-11-02T00:41:37Z) - Statistical Efficiency of Score Matching: The View from Isoperimetry [96.65637602827942]
We show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated.
We formalize these results both in the infinite-sample regime and in the finite-sample regime.
arXiv Detail & Related papers (2022-10-03T06:09:01Z) - Denoising Likelihood Score Matching for Conditional Score-based Data Generation [22.751924447125955]
We propose a novel training objective called Denoising Likelihood Score Matching (DLSM) loss to match the gradients of the true log likelihood density.
Our experiments show that the proposed method noticeably outperforms previous methods on several key evaluation metrics.
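The identity behind this objective (a standard decomposition, not quoted from the paper) is Bayes' rule applied at the level of scores:

    \nabla_x \log p(x \mid y) = \nabla_x \log p(x) + \nabla_x \log p(y \mid x),

so training the classifier gradient \nabla_x \log p(y \mid x) to be consistent with the difference of the two scores yields more accurate conditional scores for guided sampling.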
arXiv Detail & Related papers (2022-03-27T04:37:54Z) - Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
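A minimal sketch of such an expected-reward objective via the REINFORCE (score-function) gradient estimator; the distribution and reward function here are placeholders, not the paper's models:

    import torch

    def reinforce_loss(logits, reward_fn):
        """Illustrative sketch: surrogate loss whose gradient is the
        REINFORCE estimator of grad E[R]. Samples discrete structures,
        then weights their log-probs by a baselined reward."""
        dist = torch.distributions.Categorical(logits=logits)
        samples = dist.sample()                 # draw discrete actions
        rewards = reward_fn(samples)            # user-defined reward, no grad
        log_probs = dist.log_prob(samples)
        baseline = rewards.mean()               # simple variance-reduction baseline
        return -((rewards - baseline).detach() * log_probs).mean()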
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - On the Discrepancy between Density Estimation and Sequence Generation [92.70116082182076]
Log-likelihood is highly correlated with BLEU when we consider models within the same family.
However, we observe no correlation between rankings of models across different families.
arXiv Detail & Related papers (2020-02-17T20:13:35Z)