Conditionally Strongly Log-Concave Generative Models
- URL: http://arxiv.org/abs/2306.00181v1
- Date: Wed, 31 May 2023 20:59:47 GMT
- Title: Conditionally Strongly Log-Concave Generative Models
- Authors: Florentin Guth, Etienne Lempereur, Joan Bruna, Stéphane Mallat
- Abstract summary: We introduce conditionally strongly log-concave models, which factorize the data distribution into a product of conditional probability distributions that are strongly log-concave.
This factorization leads to efficient parameter estimation and sampling algorithms with theoretical guarantees, even though the data distribution is not globally log-concave.
Numerical results are shown for physical fields such as the $\varphi^4$ model and weak lensing convergence maps at higher resolution than in previous works.
- Score: 33.79337785731899
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a growing gap between the impressive results of deep image
generative models and classical algorithms that offer theoretical guarantees.
The former suffer from mode collapse or memorization issues, limiting their
application to scientific data. The latter require restrictive assumptions such
as log-concavity to escape the curse of dimensionality. We partially bridge
this gap by introducing conditionally strongly log-concave (CSLC) models, which
factorize the data distribution into a product of conditional probability
distributions that are strongly log-concave. This factorization is obtained
with orthogonal projectors adapted to the data distribution. It leads to
efficient parameter estimation and sampling algorithms, with theoretical
guarantees, although the data distribution is not globally log-concave. We show
that several challenging multiscale processes are conditionally log-concave
using wavelet packet orthogonal projectors. Numerical results are shown for
physical fields such as the $\varphi^4$ model and weak lensing convergence maps
with higher resolution than in previous works.
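The computational point in the abstract is worth making concrete: because each conditional factor is strongly log-concave, it can be sampled efficiently with a gradient-based sampler such as Langevin dynamics, band by band through the orthogonal projector decomposition. Below is a minimal Python sketch of that pipeline, assuming the conditional scores grad_log_p_k come from an already-fitted model; the interface and the coarse-to-fine loop are illustrative, not the authors' code.

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=1e-3, n_steps=500, rng=None):
    """Unadjusted Langevin dynamics. For a strongly log-concave target,
    this mixes fast, which is what the CSLC assumption buys per factor."""
    rng = rng or np.random.default_rng()
    x = x0.copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
    return x

def sample_cslc(factors, rng=None):
    """Draw each band x_k from p(x_k | x_{<k}), coarse to fine.
    `factors` is a list of (x0_k, grad_log_p_k) pairs, where
    grad_log_p_k(x_k, context) is the conditional score given the
    previously sampled bands (hypothetical interface)."""
    rng = rng or np.random.default_rng()
    context = []
    for x0_k, grad_log_p_k in factors:
        x_k = langevin_sample(lambda x: grad_log_p_k(x, context), x0_k, rng=rng)
        context.append(x_k)
    # Returns projector-band coefficients; invert the wavelet packet
    # transform to recover the sampled field.
    return context
```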
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- BGDB: Bernoulli-Gaussian Decision Block with Improved Denoising Diffusion Probabilistic Models [8.332734198630813]
Generative models can enhance discriminative classifiers by constructing complex feature spaces.
We propose the Bernoulli-Gaussian Decision Block (BGDB), a novel module inspired by the Central Limit Theorem.
Specifically, we utilize Improved Denoising Diffusion Probabilistic Models (IDDPM) to model the probability of Bernoulli Trials.
arXiv Detail & Related papers (2024-09-19T22:52:55Z)
- Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory [87.00653989457834]
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning.
Despite this empirical success, the theory of conditional diffusion models is largely missing.
This paper bridges the gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models.
arXiv Detail & Related papers (2024-03-18T17:08:24Z)
- Statistically Optimal Generative Modeling with Maximum Deviation from the Empirical Distribution [2.1146241717926664]
We show that the Wasserstein GAN, constrained to left-invertible push-forward maps, generates distributions that avoid replication and significantly deviate from the empirical distribution.
Our most important contribution provides a finite-sample lower bound on the Wasserstein-1 distance between the generative distribution and the empirical one.
We also establish a finite-sample upper bound on the distance between the generative distribution and the true data-generating one.
arXiv Detail & Related papers (2023-07-31T06:11:57Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point process inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns in brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields better estimates of pattern latency than the state of the art.
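The discretization idea summarized above can be sketched directly: evaluate the finite-support parametric kernel on a regular time grid and obtain the Hawkes intensity as a discrete convolution with the event histogram. The kernel choice and function names below are illustrative; FaDIn's actual contribution, fast parameter estimation built on this discretization, is omitted here.

```python
import numpy as np

def discretized_intensity(events, mu, kernel, T, dt=0.01):
    """Hawkes intensity lambda(t) = mu + sum_{t_i < t} phi(t - t_i),
    computed on a grid of step dt as a discrete convolution between the
    event histogram and the kernel sampled on the same grid."""
    n = int(T / dt)
    grid = np.arange(n) * dt
    counts = np.zeros(n)
    idx = np.minimum((np.asarray(events) / dt).astype(int), n - 1)
    np.add.at(counts, idx, 1.0)      # histogram of event times
    phi = kernel(grid)               # finite-support kernel on the grid
    lam = mu + np.convolve(counts, phi)[:n]
    return grid, lam

# Illustrative truncated-exponential kernel with support [0, 1]
kern = lambda t: np.where(t <= 1.0, 0.8 * np.exp(-2.0 * t), 0.0)
grid, lam = discretized_intensity([0.3, 0.5, 1.2], mu=0.2, kernel=kern, T=3.0)
```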
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Convergence for score-based generative modeling with polynomial complexity [9.953088581242845]
We prove the first convergence guarantees for the core mechanic behind score-based generative modeling.
Compared to previous works, we do not incur error that grows exponentially in time or that suffers from a curse of dimensionality.
We show that predictor-corrector sampling gives better convergence than using either component alone.
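The predictor-corrector mechanic analyzed here is the standard one from score-based generative modeling: a discretized reverse-SDE step (predictor) alternating with a few Langevin MCMC steps (corrector). A hedged sketch for a variance-preserving SDE, assuming a pretrained score estimate score(x, t); the noise schedule and step-size rule are conventional choices, not this paper's.

```python
import numpy as np

def pc_sampler(score, x, ts, beta=lambda t: 0.1 + 19.9 * t,
               snr=0.16, n_corrector=1, rng=None):
    """Predictor-corrector sampling for a variance-preserving SDE.
    Predictor: Euler-Maruyama step of the reverse-time SDE.
    Corrector: Langevin MCMC steps targeting the current marginal.
    `score(x, t)` approximates grad_x log p_t(x); `ts` decreases to ~0."""
    rng = rng or np.random.default_rng()
    for i in range(len(ts) - 1):
        t, dt = ts[i], ts[i] - ts[i + 1]
        b = beta(t)
        # predictor: x <- x + [0.5*b*x + b*score] dt + sqrt(b*dt) z
        z = rng.standard_normal(x.shape)
        x = x + (0.5 * b * x + b * score(x, t)) * dt + np.sqrt(b * dt) * z
        # corrector: Langevin steps, step size set by a signal-to-noise rule
        for _ in range(n_corrector):
            g = score(x, ts[i + 1])
            z = rng.standard_normal(x.shape)
            eps = 2.0 * (snr * np.linalg.norm(z) / (np.linalg.norm(g) + 1e-12)) ** 2
            x = x + eps * g + np.sqrt(2.0 * eps) * z
    return x
```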
arXiv Detail & Related papers (2022-06-13T14:57:35Z)
- Universal Inference Meets Random Projections: A Scalable Test for Log-concavity [30.073886309373226]
We present the first test of log-concavity that is provably valid in finite samples in any dimension.
We find that a random projections approach that converts the d-dimensional testing problem into many one-dimensional problems can yield high power.
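The reduction relies on the fact that log-concavity is preserved under linear maps, so every one-dimensional projection of a log-concave distribution is itself log-concave. A sketch of the projection step, where one_dim_pvalue is a hypothetical stand-in for a finite-sample-valid univariate test (in the paper, a universal-inference split likelihood ratio test) and the Bonferroni combination is an illustrative choice:

```python
import numpy as np

def random_projection_logconcavity_test(X, one_dim_pvalue, n_proj=50,
                                        alpha=0.05, rng=None):
    """Convert a d-dimensional log-concavity test into n_proj 1-D tests.
    If X's distribution is log-concave, so is every projection X @ u,
    so rejecting any 1-D null (with a multiplicity correction) rejects
    log-concavity of the joint. `one_dim_pvalue` is a hypothetical
    stand-in for a valid univariate test."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    pvals = []
    for _ in range(n_proj):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)            # uniform direction on the sphere
        pvals.append(one_dim_pvalue(X @ u))
    reject = min(pvals) < alpha / n_proj  # Bonferroni-corrected decision
    return reject, pvals
```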
arXiv Detail & Related papers (2021-11-17T17:34:44Z)
- Partial Counterfactual Identification from Observational and Experimental Data [83.798237968683]
We develop effective Monte Carlo algorithms to approximate the optimal bounds from an arbitrary combination of observational and experimental data.
Our algorithms are validated extensively on synthetic and real-world datasets.
arXiv Detail & Related papers (2021-10-12T02:21:30Z)
- Marginalizable Density Models [14.50261153230204]
We present a novel deep network architecture which provides closed-form expressions for the probabilities, marginals, and conditionals of any subset of the variables.
The model also allows for parallelized sampling with only a logarithmic dependence of the time complexity on the number of variables.
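A toy instance makes the marginalizable property concrete: in a mixture of product densities, the marginal over any subset of variables is itself closed-form because each component factorizes. The sketch below is a deliberately simple stand-in for that property, not the paper's deep architecture.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def marginal_logpdf(x_sub, subset, weights, means, scales):
    """Toy marginalizable density: a K-component mixture of product
    Gaussians over d variables. Any subset marginal stays closed-form:
    p(x_S) = sum_k w_k * prod_{i in S} N(x_i; mu_{k,i}, s_{k,i})."""
    x_sub = np.asarray(x_sub)
    comp = norm.logpdf(x_sub[None, :], means[:, subset],
                       scales[:, subset]).sum(axis=1)
    return logsumexp(comp + np.log(weights))   # log p(x_S)

K, d = 3, 5
rng = np.random.default_rng(0)
weights = np.full(K, 1.0 / K)
means, scales = rng.standard_normal((K, d)), np.ones((K, d))
lp = marginal_logpdf([0.1, -0.4], subset=[0, 3],
                     weights=weights, means=means, scales=scales)
```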
arXiv Detail & Related papers (2021-06-08T23:54:48Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
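The parameterization described here models, for each coordinate, the derivative of the univariate log-conditional directly: s_i(x_i; x_{<i}) approximates d/dx_i log p(x_i | x_{<i}). A minimal PyTorch sketch of that structure, using one small network per coordinate for clarity; the paper's architecture and its divergence objective are not reproduced.

```python
import torch
import torch.nn as nn

class ARConditionalScores(nn.Module):
    """s_i(x_i; x_{<i}) approximates d/dx_i log p(x_i | x_{<i}).
    One small MLP per coordinate keeps the autoregressive structure
    explicit; a shared masked network would be the efficient variant."""
    def __init__(self, d, hidden=64):
        super().__init__()
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(i + 1, hidden), nn.Tanh(),
                          nn.Linear(hidden, 1))
            for i in range(d)
        )

    def forward(self, x):                        # x: (batch, d)
        scores = [net(x[:, : i + 1]) for i, net in enumerate(self.nets)]
        return torch.cat(scores, dim=1)          # (batch, d) conditional scores

model = ARConditionalScores(d=4)
s = model(torch.randn(8, 4))                     # scores for a batch of 8
```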
arXiv Detail & Related papers (2020-10-24T07:01:24Z)