Any Variational Autoencoder Can Do Arbitrary Conditioning
- URL: http://arxiv.org/abs/2201.12414v1
- Date: Fri, 28 Jan 2022 20:48:44 GMT
- Title: Any Variational Autoencoder Can Do Arbitrary Conditioning
- Authors: Ryan R. Strauss, Junier B. Oliva
- Abstract summary: Posterior Matching enables any Variational Autoencoder to perform arbitrary conditioning without modification to the VAE itself.
We find that Posterior Matching achieves performance that is comparable or superior to current state-of-the-art methods for a variety of tasks.
- Score: 7.96091289659041
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Arbitrary conditioning is an important problem in unsupervised learning,
where we seek to model the conditional densities $p(\mathbf{x}_u \mid
\mathbf{x}_o)$ that underlie some data, for all possible non-intersecting
subsets $o, u \subset \{1, \dots , d\}$. However, the vast majority of density
estimation only focuses on modeling the joint distribution $p(\mathbf{x})$, in
which important conditional dependencies between features are opaque. We
propose a simple and general framework, coined Posterior Matching, that enables
any Variational Autoencoder (VAE) to perform arbitrary conditioning, without
modification to the VAE itself. Posterior Matching applies to the numerous
existing VAE-based approaches to joint density estimation, thereby
circumventing the specialized models required by previous approaches to
arbitrary conditioning. We find that Posterior Matching achieves performance
that is comparable or superior to current state-of-the-art methods for a
variety of tasks.
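The abstract names the method but does not spell out its objective. As a minimal illustrative sketch (an assumption for exposition, not the authors' exact formulation), suppose both posteriors are diagonal Gaussians: a "partial" encoder $q(z \mid \mathbf{x}_o)$ can be trained, with the VAE frozen, to match the full posterior $q(z \mid \mathbf{x})$ under a KL divergence, after which decoding samples of $z$ drawn from $q(z \mid \mathbf{x}_o)$ yields imputations of $\mathbf{x}_u$:

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dims."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Full posterior q(z | x) from the frozen VAE encoder (illustrative numbers).
mu_full, logvar_full = np.array([0.5, -0.2]), np.array([-1.0, -1.0])
# Partial posterior q(z | x_o) from the auxiliary partial encoder being trained.
mu_part, logvar_part = np.array([0.4, 0.0]), np.array([-0.5, -0.8])

# One matching loss term; in training this would be minimized over the
# partial encoder's parameters for randomly sampled observed subsets o.
loss = gaussian_kl(mu_full, logvar_full, mu_part, logvar_part)
```

The encoder names and the diagonal-Gaussian assumption here are illustrative; the paper's actual objective and parameterization may differ.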
Related papers
- Conformalization of Sparse Generalized Linear Models [2.1485350418225244]
Conformal prediction estimates a confidence set for $y_{n+1}$ that is valid for any finite sample size.
Although attractive, computing such a set is computationally infeasible in most regression problems.
We show how our path-following algorithm accurately approximates conformal prediction sets.
arXiv Detail & Related papers (2023-07-11T08:36:12Z)
- Diffusion models as plug-and-play priors [98.16404662526101]
We consider the problem of inferring high-dimensional data $\mathbf{x}$ in a model that consists of a prior $p(\mathbf{x})$ and an auxiliary constraint $c(\mathbf{x},\mathbf{y})$.
The structure of diffusion models allows us to perform approximate inference by iterating differentiation through the fixed denoising network enriched with different amounts of noise.
arXiv Detail & Related papers (2022-06-17T21:11:36Z)
- On the Generative Utility of Cyclic Conditionals [103.1624347008042]
We study whether and how we can model a joint distribution $p(x,z)$ using two conditional models, $p(x|z)$ and $q(z|x)$, that form a cycle.
We propose the CyGen framework for cyclic-conditional generative modeling, including methods to enforce compatibility and use the determined distribution to fit and generate data.
arXiv Detail & Related papers (2021-06-30T10:23:45Z)
- Iterative Feature Matching: Toward Provable Domain Generalization with Logarithmic Environments [55.24895403089543]
Domain generalization aims at performing well on unseen test environments with data from a limited number of training environments.
We present a new algorithm based on performing iterative feature matching that is guaranteed with high probability to yield a predictor that generalizes after seeing only $O(\log d_s)$ environments.
arXiv Detail & Related papers (2021-06-18T04:39:19Z)
- Arbitrary Conditional Distributions with Energy [11.081460215563633]
A more general and useful problem is arbitrary conditional density estimation.
We propose a novel method, Arbitrary Conditioning with Energy (ACE), that can simultaneously estimate the distribution $p(\mathbf{x}_u \mid \mathbf{x}_o)$.
We also simplify the learning problem by only learning one-dimensional conditionals, from which more complex distributions can be recovered during inference.
arXiv Detail & Related papers (2021-02-08T18:36:26Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
- Posterior Differential Regularization with f-divergence for Improving Model Robustness [95.05725916287376]
We focus on methods that regularize the model posterior difference between clean and noisy inputs.
We generalize the posterior differential regularization to the family of $f$-divergences.
Our experiments show that regularizing the posterior differential with $f$-divergence can result in well-improved model robustness.
arXiv Detail & Related papers (2020-10-23T19:58:01Z)
- Kidney Exchange with Inhomogeneous Edge Existence Uncertainty [33.17472228570093]
We study a stochastic cycle and chain packing problem, where we aim to identify structures in a directed graph that are robust to edge failure.
Our approaches on data from the United Network for Organ Sharing (UNOS) provide better performance with the same weights as an SAA-based method.
arXiv Detail & Related papers (2020-07-07T04:08:39Z)
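The one-dimensional-conditionals idea from the ACE entry above can be illustrated on a toy bivariate Gaussian, where the 1-D conditional is known in closed form (a hypothetical stand-in for the energy-based conditionals ACE actually learns):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8  # correlation of a zero-mean, unit-variance bivariate Gaussian

def sample_x1_given_x0(x0, rng):
    # Closed-form 1-D conditional of the toy joint: x1 | x0 ~ N(rho * x0, 1 - rho**2).
    return rho * x0 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()

# Arbitrary conditioning via a 1-D conditional: observe x0, impute x1 by sampling.
x0_observed = 1.0
samples = np.array([sample_x1_given_x0(x0_observed, rng) for _ in range(20000)])

# With more unobserved dimensions, such 1-D conditionals would be chained
# autoregressively to recover the full conditional p(x_u | x_o) at inference.
```

The sample mean and variance should approach $\rho x_0 = 0.8$ and $1 - \rho^2 = 0.36$ respectively, matching the closed-form conditional.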
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.