Generalization Properties of Optimal Transport GANs with Latent
Distribution Learning
- URL: http://arxiv.org/abs/2007.14641v1
- Date: Wed, 29 Jul 2020 07:31:33 GMT
- Title: Generalization Properties of Optimal Transport GANs with Latent
Distribution Learning
- Authors: Giulia Luise, Massimiliano Pontil and Carlo Ciliberto
- Abstract summary: We study how the interplay between the latent distribution and the complexity of the pushforward map affects performance.
Motivated by our analysis, we advocate learning the latent distribution as well as the pushforward map within the GAN paradigm.
- Score: 52.25145141639159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Generative Adversarial Networks (GAN) framework is a well-established
paradigm for probability matching and realistic sample generation. While recent
attention has been devoted to studying the theoretical properties of such
models, a full theoretical understanding of the main building blocks is still
missing. Focusing on generative models with Optimal Transport metrics as
discriminators, in this work we study how the interplay between the latent
distribution and the complexity of the pushforward map (generator) affects
performance, from both statistical and modelling perspectives. Motivated by our
analysis, we advocate learning the latent distribution as well as the
pushforward map within the GAN paradigm. We prove that this can lead to
significant advantages in terms of sample complexity.
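To make the advocated setup concrete, here is a minimal, hypothetical sketch (not the authors' code) of a pushforward generator trained against an entropic optimal-transport loss, with the latent distribution itself made learnable as a reparameterized Gaussian. All names and hyperparameters (latent_dim, epsilon, n_iters, the toy sample_data target) are assumptions for illustration.

import math
import torch
import torch.nn as nn

def sinkhorn_loss(x, y, epsilon=0.1, n_iters=50):
    # Dual objective of entropic OT between two minibatches (uniform weights),
    # computed with log-domain Sinkhorn iterations for numerical stability.
    c = torch.cdist(x, y, p=2) ** 2                       # squared-distance cost
    f = torch.zeros(x.size(0), device=x.device)
    g = torch.zeros(y.size(0), device=y.device)
    log_mu = -math.log(x.size(0))
    log_nu = -math.log(y.size(0))
    for _ in range(n_iters):
        f = -epsilon * torch.logsumexp((g[None, :] - c) / epsilon + log_nu, dim=1)
        g = -epsilon * torch.logsumexp((f[:, None] - c) / epsilon + log_mu, dim=0)
    return f.mean() + g.mean()

class LatentGaussian(nn.Module):
    # Latent distribution with learnable mean and scale (reparameterization trick).
    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(dim))
        self.log_sigma = nn.Parameter(torch.zeros(dim))

    def sample(self, n):
        eps = torch.randn(n, self.mu.size(0), device=self.mu.device)
        return self.mu + self.log_sigma.exp() * eps

def sample_data(n):
    # Toy target distribution: a noisy ring (stand-in for a real dataset).
    theta = 2 * math.pi * torch.rand(n)
    r = 2.0 + 0.1 * torch.randn(n)
    return torch.stack([r * torch.cos(theta), r * torch.sin(theta)], dim=1)

latent_dim = 16
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 2))
latent = LatentGaussian(latent_dim)
opt = torch.optim.Adam(list(generator.parameters()) + list(latent.parameters()), lr=1e-3)

for step in range(1000):
    real = sample_data(128)
    fake = generator(latent.sample(128))
    loss = sinkhorn_loss(fake, real)
    opt.zero_grad()
    loss.backward()
    opt.step()

Optimizing the latent parameters jointly with the generator is the point of the sketch: a single optimizer step adapts both the pushforward map and the distribution it pushes forward, which is the regime the paper's sample-complexity analysis favors.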
Related papers
- A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models [6.647819824559201]
We study the large-sample properties of a likelihood-based approach for estimating conditional deep generative models.
Our results yield the convergence rate of a sieve maximum likelihood estimator for the conditional distribution.
arXiv Detail & Related papers (2024-10-02T20:46:21Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian Mixture Models [59.331993845831946]
Diffusion models benefit from injecting task-specific information into the score function to steer sample generation towards desired properties.
This paper provides the first theoretical study of the influence of guidance on diffusion models in the context of Gaussian mixture models; the standard guided-score identities are recalled below.
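As background (standard classifier guidance, not a result specific to this paper), guidance with strength \omega tilts the score by a classifier gradient, and for a Gaussian mixture with shared covariance the unconditional score is available in closed form:

\[
\nabla_x \log p(x \mid y) \approx \nabla_x \log p(x) + \omega \, \nabla_x \log p(y \mid x),
\]
\[
\nabla_x \log p(x) = \sum_k r_k(x)\, \Sigma^{-1} (\mu_k - x),
\qquad
r_k(x) = \frac{\pi_k\, \mathcal{N}(x; \mu_k, \Sigma)}{\sum_j \pi_j\, \mathcal{N}(x; \mu_j, \Sigma)},
\]

where p(x) = \sum_k \pi_k \mathcal{N}(x; \mu_k, \Sigma) and r_k(x) are the posterior mixture responsibilities.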
arXiv Detail & Related papers (2024-03-03T23:15:48Z)
- Mapping the Multiverse of Latent Representations [17.2089620240192]
PRESTO is a principled framework for mapping the multiverse of machine-learning models that rely on latent representations.
Our framework uses persistent homology to characterize the latent spaces arising from different combinations of diverse machine-learning methods.
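As a rough illustration of the kind of topological summary involved (one possible tooling choice, not necessarily PRESTO's implementation), persistence diagrams of latent point clouds can be computed and compared as follows; the random latents are stand-ins for real embeddings.

import numpy as np
from ripser import ripser        # pip install ripser
from persim import bottleneck    # pip install persim

# Hypothetical latent embeddings from two differently configured models.
latents_a = np.random.randn(200, 8)
latents_b = np.random.randn(200, 8)

# Persistence diagrams up to homology dimension 1 (components and loops).
dgms_a = ripser(latents_a, maxdim=1)['dgms']
dgms_b = ripser(latents_b, maxdim=1)['dgms']

# A small bottleneck distance between the H1 diagrams suggests the two
# latent spaces are topologically similar.
print(bottleneck(dgms_a[1], dgms_b[1]))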
arXiv Detail & Related papers (2024-02-02T15:54:53Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulations, we apply our algorithm to linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Optimal Transport Model Distributional Robustness [33.24747882707421]
Previous works have mainly focused on exploiting distributional robustness in the data space.
We develop theories that enable us to learn the optimal robust center model distribution.
Our framework can be seen as a probabilistic extension of Sharpness-Aware Minimization (SAM); the deterministic SAM step it generalizes is sketched below.
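For context, the deterministic SAM update looks roughly as follows. This is a generic sketch under common assumptions (an L2 neighborhood of radius rho), not the paper's algorithm; model, loss_fn, and rho are illustrative.

import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # 1) Gradient at the current weights.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.clone() for p in params]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # 2) Ascend to an approximate worst-case point within the rho-ball.
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(rho * g / norm)

    # 3) Gradient at the perturbed weights drives the actual update.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(rho * g / norm)   # restore the original weights
    optimizer.step()

Per the summary above, the paper replaces this single worst-case weight perturbation with a learned distribution over models; the sketch only shows the deterministic baseline.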
arXiv Detail & Related papers (2023-06-07T06:15:12Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Learning Structured Gaussians to Approximate Deep Ensembles [10.055143995729415]
This paper proposes using a sparse-structured multivariate Gaussian to provide a closed-form approximator for dense image prediction tasks.
We capture the uncertainty and structured correlations in the predictions explicitly in a formal distribution, rather than implicitly through sampling alone.
We demonstrate the merits of our approach on monocular depth estimation and show that its advantages are obtained with comparable quantitative performance; a generic structured-Gaussian likelihood of this kind is sketched below.
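To illustrate the closed-form ingredient (a generic construction from standard Gaussian identities, not necessarily the paper's exact parameterization), a network head can predict a mean mu and a lower-triangular Cholesky factor L of the precision matrix, giving an exact negative log-likelihood:

import math
import torch

def structured_gaussian_nll(y, mu, L):
    # NLL of N(y; mu, Sigma) with precision Lambda = L @ L.T (L lower-triangular):
    #   (y - mu)^T Lambda (y - mu) = ||L^T (y - mu)||^2
    #   log det Lambda = 2 * sum(log diag(L))
    r = y - mu
    quad = (L.transpose(-1, -2) @ r.unsqueeze(-1)).squeeze(-1).pow(2).sum(-1)
    log_det = 2.0 * torch.log(torch.diagonal(L, dim1=-2, dim2=-1)).sum(-1)
    d = y.shape[-1]
    return 0.5 * (quad - log_det + d * math.log(2.0 * math.pi))

# Sanity check: identity L recovers the standard-normal log density.
d = 5
y = torch.randn(d)
print(structured_gaussian_nll(y, torch.zeros(d), torch.eye(d)))

Sparsity enters by restricting which off-diagonal entries of L the network may predict, which keeps both the quadratic form and the determinant cheap to evaluate.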
arXiv Detail & Related papers (2022-03-29T12:34:43Z)
- Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions [66.05472746340142]
Generative Adversarial Networks (GANs) are among the most successful models for learning high-complexity, real-world distributions.
In this paper we show how GANs can efficiently learn the distribution of real-life images.
arXiv Detail & Related papers (2021-06-04T17:33:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.