Denoising Likelihood Score Matching for Conditional Score-based Data
Generation
- URL: http://arxiv.org/abs/2203.14206v1
- Date: Sun, 27 Mar 2022 04:37:54 GMT
- Title: Denoising Likelihood Score Matching for Conditional Score-based Data
Generation
- Authors: Chen-Hao Chao, Wei-Fang Sun, Bo-Wun Cheng, Yi-Chen Lo, Chia-Che Chang,
Yu-Lun Liu, Yu-Lin Chang, Chia-Ping Chen, Chun-Yi Lee
- Abstract summary: We propose a novel training objective called Denoising Likelihood Score Matching (DLSM) loss to match the gradients of the true log likelihood density.
Our experimental evidence shows that the proposed method outperforms the previous methods noticeably in terms of several key evaluation metrics.
- Score: 22.751924447125955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many existing conditional score-based data generation methods utilize Bayes'
theorem to decompose the gradients of a log posterior density into a mixture of
scores. These methods facilitate the training procedure of conditional score
models, as a mixture of scores can be separately estimated using a score model
and a classifier. However, our analysis indicates that the training objectives
for the classifier in these methods may lead to a serious score mismatch issue,
which corresponds to the situation that the estimated scores deviate from the
true ones. Such an issue causes the samples to be misled by the deviated scores
during the diffusion process, resulting in a degraded sampling quality. To
resolve it, we formulate a novel training objective, called Denoising
Likelihood Score Matching (DLSM) loss, for the classifier to match the
gradients of the true log likelihood density. Our experimental evidence shows
that the proposed method noticeably outperforms previous methods on both the
CIFAR-10 and CIFAR-100 benchmarks in terms of several key evaluation metrics. We
thus conclude that, by adopting DLSM, the conditional scores can be accurately
modeled, and the effect of the score mismatch issue is alleviated.
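The abstract decomposes the posterior score via Bayes' theorem, i.e. grad_x log p(x|y) = grad_x log p(x) + grad_x log p(y|x), and trains the classifier so that its likelihood score matches the true one. A minimal sketch of how such a DLSM-style classifier objective could look is given below, assuming a Gaussian perturbation x_tilde = x + sigma * eps and a frozen unconditional score model; all names are illustrative rather than the authors' code, and noise-level conditioning of the networks is omitted for brevity.

```python
import torch
import torch.nn as nn

def dlsm_loss(classifier, score_model, x, y, sigma):
    """Sketch of a DLSM-style loss: the classifier's likelihood score,
    added to the (frozen) unconditional score, is regressed onto the
    denoising score target grad log q(x_tilde | x) = -(x_tilde - x)/sigma^2.
    Shapes assumed: x is (batch, dim), y is (batch,) integer labels.
    """
    x_tilde = x + sigma * torch.randn_like(x)
    x_tilde.requires_grad_(True)
    # Log-likelihood of the observed labels under the classifier.
    log_py = classifier(x_tilde).log_softmax(dim=-1)
    log_py_y = log_py.gather(1, y.unsqueeze(1)).sum()
    # Likelihood score: gradient of log p(y | x_tilde) w.r.t. the noisy input.
    lik_score = torch.autograd.grad(log_py_y, x_tilde, create_graph=True)[0]
    with torch.no_grad():
        uncond_score = score_model(x_tilde)  # frozen score model
    target = -(x_tilde - x) / sigma**2  # denoising score target
    residual = lik_score + uncond_score - target
    return (residual ** 2).flatten(1).sum(dim=1).mean()
```

In this sketch only the classifier's parameters receive gradients; the score model is held fixed, matching the division of labor described in the abstract.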
Related papers
- Covariate Assisted Entity Ranking with Sparse Intrinsic Scores [3.2839905453386162]
We introduce novel model identification conditions and examine the regularized penalized Maximum Likelihood Estimator statistical rates.
We also apply our method to the goodness-of-fit test for models with no latent intrinsic scores.
arXiv Detail & Related papers (2024-07-09T19:58:54Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits approach (LORT) without the requirement of prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - Target Score Matching [36.80075781966174]
We show that it is possible to leverage knowledge of the target score.
We present a Target Score Identity and corresponding Target Score Matching regression loss.
arXiv Detail & Related papers (2024-02-13T18:48:28Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Concrete Score Matching: Generalized Score Matching for Discrete Data [109.12439278055213]
"Concrete score" is a generalization of the (Stein) score for discrete settings.
"Concrete Score Matching" is a framework to learn such scores from samples.
arXiv Detail & Related papers (2022-11-02T00:41:37Z) - FP-Diffusion: Improving Score-based Diffusion Models by Enforcing the
Underlying Score Fokker-Planck Equation [72.19198763459448]
We learn a family of noise-conditional score functions corresponding to the data density perturbed with increasingly large amounts of noise.
These perturbed data densities are linked together by the Fokker-Planck equation (FPE), a partial differential equation (PDE) governing the spatial-temporal evolution of a density.
We derive a corresponding equation called the score FPE that characterizes the noise-conditional scores of the perturbed data densities.
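The link between perturbed densities and the FPE can be made concrete for the variance-exploding case, where the perturbed density is the data density convolved with a Gaussian, q_t = q_0 * N(0, sigma(t)^2 I). The following is a standard derivation sketch consistent with this blurb, not the paper's exact statement of the score FPE:

```latex
% Heat-equation form of the Fokker--Planck equation for VE perturbation:
\partial_t q_t(x) = \sigma(t)\,\dot\sigma(t)\,\Delta_x q_t(x).

% Taking the gradient of \partial_t \log q_t and using the identity
% \Delta q_t / q_t = \nabla_x \cdot s_t + \lVert s_t \rVert^2,
% where s_t = \nabla_x \log q_t, yields a PDE for the score itself:
\partial_t s_t(x) = \sigma(t)\,\dot\sigma(t)\,
  \nabla_x\!\left(\nabla_x \cdot s_t(x) + \lVert s_t(x) \rVert^2\right).
```

Enforcing a constraint of this form on a learned noise-conditional score network is the regularization idea the title refers to.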
arXiv Detail & Related papers (2022-10-09T16:27:25Z) - Evaluating State-of-the-Art Classification Models Against Bayes
Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
arXiv Detail & Related papers (2021-06-07T06:21:20Z) - On Maximum Likelihood Training of Score-Based Generative Models [17.05208572228308]
We show that an objective is equivalent to maximum likelihood for certain choices of mixture weighting.
We show that both maximum likelihood training and test-time log-likelihood evaluation can be achieved through parameterization of the score function alone.
arXiv Detail & Related papers (2021-01-22T18:22:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.