Quantitative Understanding of VAE as a Non-linearly Scaled Isometric
Embedding
- URL: http://arxiv.org/abs/2007.15190v3
- Date: Sat, 12 Jun 2021 04:51:47 GMT
- Title: Quantitative Understanding of VAE as a Non-linearly Scaled Isometric
Embedding
- Authors: Akira Nakagawa, Keizo Kato, Taiji Suzuki
- Abstract summary: Variational autoencoder (VAE) estimates the posterior parameters of latent variables corresponding to each input data.
This paper provides a quantitative understanding of VAE properties through the differential geometric and information-theoretic interpretations of VAE.
- Score: 52.48298164494608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational autoencoder (VAE) estimates the posterior parameters (mean and
variance) of latent variables corresponding to each input data point. While
it is used for many tasks, the transparency of the model remains an open
issue. This paper provides a quantitative understanding of VAE properties
through the differential geometric and information-theoretic interpretations
of VAE. According to rate-distortion theory, optimal transform coding is
achieved by using an orthonormal transform with a PCA basis, where the transform
space is isometric to the input. Considering the analogy of transform coding to
VAE, we clarify theoretically and experimentally that VAE can be mapped to an
implicit isometric embedding with a scale factor derived from the posterior
parameter. As a result, we can estimate the data probabilities in the input
space from the prior, loss metrics, and corresponding posterior parameters, and
further, the quantitative importance of each latent variable can be evaluated
like the eigenvalues of PCA.
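As a concrete illustration of that last claim, the sketch below (a hedged example, not the paper's exact estimator) ranks the latent dimensions of a trained VAE by their average per-dimension KL divergence to the N(0, I) prior, computed from the posterior parameters the encoder outputs; dimensions with near-zero KL carry almost no information, analogous to small PCA eigenvalues.

```python
import numpy as np

def latent_importance(mu: np.ndarray, log_var: np.ndarray) -> np.ndarray:
    """mu, log_var: (n_samples, n_latents) posterior parameters from an encoder.

    Returns the average KL(q(z_i|x) || N(0,1)) for each latent dimension i.
    """
    kl = 0.5 * (mu**2 + np.exp(log_var) - log_var - 1.0)
    return kl.mean(axis=0)

# Synthetic posterior parameters standing in for real encoder outputs:
# two informative dimensions and one collapsed dimension (mu ~ 0, var ~ 1).
rng = np.random.default_rng(0)
mu = np.stack([rng.normal(0.0, 1.5, 1000),    # informative
               rng.normal(0.0, 0.8, 1000),    # informative
               rng.normal(0.0, 0.01, 1000)],  # collapsed
              axis=1)
log_var = np.stack([np.full(1000, -2.0),
                    np.full(1000, -1.0),
                    np.full(1000, 0.0)], axis=1)

importance = latent_importance(mu, log_var)
print("per-dimension importance (avg KL, nats):", importance)
print("dimensions ranked by importance:", np.argsort(importance)[::-1])
```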
Related papers
- Half-VAE: An Encoder-Free VAE to Bypass Explicit Inverse Mapping [5.212606755867746]
Inference and inverse problems are closely related concepts, both fundamentally involving the deduction of unknown causes or parameters from observed data.
This study explores the potential of VAEs for solving inverse problems, such as Independent Component Analysis (ICA).
Unlike other VAE-based ICA methods, this approach discards the encoder in the VAE architecture, directly setting the latent variables as trainable parameters.
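A minimal PyTorch sketch of this encoder-free setup (layer sizes and the prior penalty are illustrative assumptions, not the authors' implementation): each training example owns a trainable latent vector optimized jointly with the decoder, in place of amortized encoder outputs.

```python
import torch
import torch.nn as nn

n_data, x_dim, z_dim = 256, 784, 8

decoder = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
# One trainable latent code per data point, replacing the encoder entirely.
latents = nn.Parameter(torch.randn(n_data, z_dim) * 0.01)

opt = torch.optim.Adam(list(decoder.parameters()) + [latents], lr=1e-3)
x = torch.rand(n_data, x_dim)  # placeholder dataset

for step in range(100):
    opt.zero_grad()
    recon = decoder(latents)
    # Reconstruction term plus a prior-matching penalty on the free latents.
    loss = ((recon - x) ** 2).mean() + 1e-3 * (latents ** 2).mean()
    loss.backward()
    opt.step()
```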
arXiv Detail & Related papers (2024-09-06T09:11:15Z)
- Poisson Variational Autoencoder [0.0]
Variational autoencoders (VAE) employ Bayesian inference to interpret sensory inputs.
Here, we develop a novel architecture that combines principles of predictive coding with a VAE that encodes inputs into discrete spike counts.
Our work provides an interpretable computational framework to study brain-like sensory processing.
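A hedged sketch of the spike-count encoding described above (not the authors' full architecture; training requires a gradient estimator for the discrete sampling step, which is omitted): an encoder outputs nonnegative firing rates, and the latents are discrete Poisson spike counts sampled from those rates.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 784, 32
rate_encoder = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                             nn.Linear(128, z_dim), nn.Softplus())

x = torch.rand(16, x_dim)      # placeholder batch
rates = rate_encoder(x)        # nonnegative Poisson rate per latent unit
spikes = torch.poisson(rates)  # discrete spike-count latents (non-differentiable)
print(spikes[0])               # integer-valued code for the first input
```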
arXiv Detail & Related papers (2024-05-23T12:02:54Z)
- Matching aggregate posteriors in the variational autoencoder [0.5759862457142761]
The variational autoencoder (VAE) is a well-studied, deep, latent-variable model (DLVM).
This paper addresses shortcomings in VAEs by reformulating their objective function in order to match the aggregate/marginal posterior distribution to the prior.
The proposed method is named the aggregate variational autoencoder (AVAE) and is built on the theoretical framework of the VAE.
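One common way to match an aggregated (marginal) posterior to a prior is a sample-based divergence such as MMD; the sketch below illustrates that general idea and is not necessarily the AVAE paper's exact objective.

```python
import torch

def rbf_mmd(a: torch.Tensor, b: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased RBF-kernel MMD^2 estimate between two sample sets of shape (n, d)."""
    def k(x, y):
        d2 = torch.cdist(x, y) ** 2
        return torch.exp(-d2 / (2 * bandwidth**2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

z_posterior = torch.randn(128, 8) * 0.5 + 0.3  # stand-in for pooled encoder samples
z_prior = torch.randn(128, 8)                  # samples from the N(0, I) prior
print(rbf_mmd(z_posterior, z_prior))           # penalty to add to the VAE loss
```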
arXiv Detail & Related papers (2023-11-13T19:22:37Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Weight Vector Tuning and Asymptotic Analysis of Binary Linear Classifiers [82.5915112474988]
This paper proposes weight vector tuning of a generic binary linear classifier by parameterizing a decomposition of the discriminant with a scalar.
It is also found that weight vector tuning significantly improves the performance of Linear Discriminant Analysis (LDA) under high estimation noise.
arXiv Detail & Related papers (2021-10-01T17:50:46Z)
- InteL-VAEs: Adding Inductive Biases to Variational Auto-Encoders via Intermediary Latents [60.785317191131284]
We introduce a simple and effective method for learning VAEs with controllable biases by using an intermediary set of latent variables.
In particular, it allows us to impose desired properties like sparsity or clustering on learned representations.
We show that this, in turn, allows InteL-VAEs to learn both better generative models and representations.
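A hedged sketch of the intermediary-latent idea (the sparsifying map and the layer sizes are illustrative assumptions, not the paper's design): a standard Gaussian latent is passed through a deterministic intermediary mapping that induces the desired property, here sparsity, before decoding.

```python
import torch
import torch.nn as nn

z_dim, x_dim = 16, 784
decoder = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

def sparsify(z: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # Soft-threshold: zero out small coordinates while staying differentiable
    # almost everywhere, so gradients still reach the encoder.
    return torch.sign(z) * torch.relu(z.abs() - threshold)

z = torch.randn(16, z_dim)   # stand-in for reparameterized encoder samples
z_sparse = sparsify(z)       # intermediary latent with induced sparsity
recon = decoder(z_sparse)
print((z_sparse == 0).float().mean())  # fraction of zeroed coordinates
```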
arXiv Detail & Related papers (2021-06-25T16:34:05Z)
- Factor Analysis, Probabilistic Principal Component Analysis, Variational Inference, and Variational Autoencoder: Tutorial and Survey [5.967999555890417]
This tutorial and survey paper covers factor analysis, probabilistic Principal Component Analysis (PCA), variational inference, and the Variational Autoencoder (VAE).
These models assume that every data point is generated from, or caused by, a low-dimensional latent factor.
Owing to their generative behaviour, these models can also be used to generate new data points in the data space.
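A minimal numpy sketch of that shared generative assumption, in probabilistic PCA form (all dimensions are illustrative): each observation is a linear map of a low-dimensional latent factor plus isotropic Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)
d, q, n = 10, 2, 500                     # data dim, latent dim, sample count
W = rng.normal(size=(d, q))              # factor loading matrix
mu = rng.normal(size=d)                  # data mean
sigma = 0.1                              # isotropic noise scale

z = rng.normal(size=(n, q))              # low-dimensional latent factors
x = z @ W.T + mu + sigma * rng.normal(size=(n, d))  # generated data points
print(x.shape)
```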
arXiv Detail & Related papers (2021-01-04T01:29:09Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
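A hedged sketch of a self-consistency penalty of the kind described above (not the paper's exact objective): re-encode the reconstruction and penalize disagreement between the two posterior means.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 784, 8
encoder_mu = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))

x = torch.rand(16, x_dim)
mu = encoder_mu(x)             # posterior mean for the input
recon = decoder(mu)
mu_recon = encoder_mu(recon)   # posterior mean for the reconstruction
consistency_loss = ((mu - mu_recon) ** 2).mean()  # add to the training loss
print(consistency_loss)
```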
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- tvGP-VAE: Tensor-variate Gaussian Process Prior Variational Autoencoder [0.0]
The tvGP-VAE explicitly models correlation via kernel functions.
We show that the choice of which correlation structures to explicitly represent in the latent space has a significant impact on model performance.
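A minimal numpy sketch of the kernel idea (lengthscale and sizes are illustrative assumptions): a squared-exponential kernel defines a covariance over latent index positions, yielding correlated Gaussian-process latents instead of independent ones.

```python
import numpy as np

t = np.linspace(0, 1, 50)[:, None]                 # latent index positions
lengthscale = 0.2
K = np.exp(-0.5 * ((t - t.T) / lengthscale) ** 2)  # SE kernel covariance
chol = np.linalg.cholesky(K + 1e-6 * np.eye(len(t)))
z = chol @ np.random.default_rng(0).normal(size=len(t))  # correlated latent draw
print(z.shape)
```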
arXiv Detail & Related papers (2020-06-08T17:59:13Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
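A hedged sketch of such an ensemble (not the paper's exact construction or its misclassification estimator): each member projects the data to a random low-dimensional subspace, fits a linear discriminant there, and the ensemble majority-votes.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n, d, proj_dim, n_members = 400, 50, 5, 11

X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # toy labels

votes = np.zeros((n, 2))
for _ in range(n_members):
    R = rng.normal(size=(d, proj_dim)) / np.sqrt(proj_dim)  # random projection
    clf = LinearDiscriminantAnalysis().fit(X @ R, y)
    votes[np.arange(n), clf.predict(X @ R)] += 1             # majority vote

print("training accuracy:", (votes.argmax(axis=1) == y).mean())
```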
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.