Reducing the Amortization Gap in Variational Autoencoders: A Bayesian
Random Function Approach
- URL: http://arxiv.org/abs/2102.03151v1
- Date: Fri, 5 Feb 2021 13:01:12 GMT
- Title: Reducing the Amortization Gap in Variational Autoencoders: A Bayesian
Random Function Approach
- Authors: Minyoung Kim, Vladimir Pavlovic
- Abstract summary: Inference in our GP model is done by a single feed-forward pass through the network, significantly faster than semi-amortized methods.
We show that our approach attains higher test-data likelihood than state-of-the-art methods on several benchmark datasets.
- Score: 38.45568741734893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The variational autoencoder (VAE) is a highly successful generative model
whose key element is the so-called amortized inference network, which can perform
test-time inference in a single feed-forward pass. Unfortunately, this comes at
the cost of degraded accuracy in posterior approximation, often underperforming
the instance-wise variational optimization. Although the latest semi-amortized
approaches mitigate the issue by performing a few variational optimization
updates starting from the VAE's amortized inference output, they inherently
suffer from computational overhead for inference at test time. In this paper,
we address the problem in a completely different way by considering a random
inference model, where we model the mean and variance functions of the
variational posterior as random Gaussian processes (GPs). The motivation is that
the deviation of the VAE's amortized posterior distribution from the true
posterior can be regarded as random noise, which allows us to take into account
the uncertainty in posterior approximation in a principled manner. In
particular, our model can quantify the difficulty in posterior approximation by
a Gaussian variational density. Inference in our GP model is done by a single
feed-forward pass through the network, significantly faster than semi-amortized
methods. We show that our approach attains higher test-data likelihood than
state-of-the-art methods on several benchmark datasets.
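To make the mechanism concrete, here is a minimal PyTorch sketch of a random inference network: the encoder outputs a distribution over the posterior parameters (mu, sigma) rather than point estimates, so the uncertainty of the amortized approximation is itself quantified, still in one feed-forward pass. This is an illustrative stand-in, not the authors' GP construction; all module and variable names are hypothetical.

    import torch
    import torch.nn as nn

    class RandomInferenceEncoder(nn.Module):
        """Toy stand-in for a random (stochastic) inference network: it
        outputs a Gaussian over the posterior parameters themselves, so
        hard-to-approximate inputs can be flagged by large parameter
        uncertainty. A crude simplification of the paper's GP model."""

        def __init__(self, x_dim, z_dim, h_dim=128):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
            self.mu_mean = nn.Linear(h_dim, z_dim)    # E[mu(x)]
            self.mu_logvar = nn.Linear(h_dim, z_dim)  # log Var[mu(x)]
            self.ls_mean = nn.Linear(h_dim, z_dim)    # E[log sigma(x)]
            self.ls_logvar = nn.Linear(h_dim, z_dim)  # log Var[log sigma(x)]

        def forward(self, x):
            h = self.trunk(x)
            # Sample the posterior parameters within the same single pass;
            # no per-instance optimization as in semi-amortized methods.
            m = self.mu_mean(h)
            mu = m + (0.5 * self.mu_logvar(h)).exp() * torch.randn_like(m)
            s = self.ls_mean(h)
            log_sigma = s + (0.5 * self.ls_logvar(h)).exp() * torch.randn_like(s)
            return mu, log_sigma.exp()

    enc = RandomInferenceEncoder(x_dim=784, z_dim=16)
    mu, sigma = enc(torch.randn(8, 784))  # one pass, batch of 8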
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
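For background, zeroth-order methods of this kind estimate gradients from function evaluations alone. The two-point estimator below is a generic textbook sketch of the mechanism, not the paper's accelerated variant.

    import numpy as np

    def two_point_grad(f, x, delta=1e-4, rng=np.random.default_rng(0)):
        """Two-point zeroth-order gradient estimate along a random unit
        direction; uses only function values, and its expectation
        approximates the gradient of a smoothed version of f."""
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

    # Toy convex objective minimized by plain zeroth-order SGD steps.
    f = lambda x: np.dot(x, x)
    x = np.ones(10)
    for _ in range(500):
        x -= 0.05 * two_point_grad(f, x)
    print(round(float(f(x)), 6))  # ~0.0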
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- DistPred: A Distribution-Free Probabilistic Inference Method for Regression and Forecasting [14.390842560217743]
We propose a novel approach called DistPred for regression and forecasting tasks.
We transform proper scoring rules that measure the discrepancy between the predicted distribution and the target distribution into a differentiable discrete form.
This allows the model to draw numerous samples in a single forward pass to estimate the predictive distribution of the response variable.
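As a hedged illustration of the recipe (the specific rule DistPred adopts may differ), the sample-based CRPS/energy-score below is a proper scoring rule that is differentiable in the K samples a model emits in one forward pass:

    import torch

    def sample_crps(samples, y):
        """Sample-based CRPS estimator: E|S - y| - 0.5 * E|S - S'|.
        samples: (K, B) draws from the predicted distribution for a
        batch of B targets y: (B,). Lower is better; differentiable,
        so it can be minimized directly as a training loss."""
        term1 = (samples - y.unsqueeze(0)).abs().mean(dim=0)
        term2 = (samples.unsqueeze(0) - samples.unsqueeze(1)).abs().mean(dim=(0, 1))
        return (term1 - 0.5 * term2).mean()

    # Sanity check: samples concentrated on y score better than wide ones.
    y = torch.zeros(4)
    tight, wide = torch.randn(64, 4) * 0.1, torch.randn(64, 4) * 2.0
    print(sample_crps(tight, y) < sample_crps(wide, y))  # tensor(True)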
arXiv Detail & Related papers (2024-06-17T10:33:00Z)
- Rényi Neural Processes [14.11793373584558]
We propose Rényi Neural Processes (RNP) to ameliorate the impacts of prior misspecification.
We scale the density ratio $\frac{p}{q}$ by the power of $(1-\alpha)$ in the divergence gradients with respect to the posterior.
Our experiments show consistent log-likelihood improvements over state-of-the-art NP family models.
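For reference, the standard variational Rényi bound (Li & Turner, 2016) makes the role of the $(1-\alpha)$-scaled ratio explicit; RNP's precise objective may differ in details:

    \mathcal{L}_\alpha(q) = \frac{1}{1-\alpha}\,\log \mathbb{E}_{q(z)}\!\left[\left(\frac{p(x,z)}{q(z)}\right)^{1-\alpha}\right],
    \qquad
    \nabla_\phi \mathcal{L}_\alpha \propto \mathbb{E}_{q}\!\left[\left(\frac{p(x,z)}{q(z)}\right)^{1-\alpha} \nabla_\phi \log \frac{p(x,z)}{q(z)}\right].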
arXiv Detail & Related papers (2024-05-25T00:14:55Z)
- Variational Bayes image restoration with compressive autoencoders [4.879530644978008]
Regularization of inverse problems is of paramount importance in computational imaging.
In this work, we first propose to use compressive autoencoders instead of state-of-the-art generative models.
As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm.
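A minimal sketch of the general recipe, fitting a Gaussian q(z) over the latent of a fixed decoder for a linear inverse problem y = A(x) + noise; the decoder, operator, and hyperparameters here are hypothetical placeholders rather than the VBLE specifics:

    import torch

    def latent_vb(decoder, A, y, z_dim, steps=200, sigma_n=0.05):
        """Fit q(z) = N(mu, diag(s^2)) so decoded samples match the
        observation y, maximizing a simple ELBO: Gaussian data fit
        plus KL to a standard-normal prior on z."""
        mu = torch.zeros(z_dim, requires_grad=True)
        log_s = torch.zeros(z_dim, requires_grad=True)
        opt = torch.optim.Adam([mu, log_s], lr=1e-2)
        for _ in range(steps):
            z = mu + log_s.exp() * torch.randn(z_dim)  # reparameterized draw
            fit = ((A(decoder(z)) - y) ** 2).sum() / (2 * sigma_n ** 2)
            kl = 0.5 * (mu ** 2 + (2 * log_s).exp() - 2 * log_s - 1).sum()
            opt.zero_grad(); (fit + kl).backward(); opt.step()
        return mu.detach(), log_s.exp().detach()

    # Toy usage: linear decoder, masking (inpainting-style) operator.
    dec = torch.nn.Linear(8, 32)
    mask = torch.zeros(32); mask[:16] = 1
    mu, s = latent_vb(dec, lambda x: x * mask, torch.randn(32) * mask, 8)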
arXiv Detail & Related papers (2023-11-29T15:49:31Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
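The Laplace ingredient can be sketched generically: refine the encoder's output toward a mode of log p(x, z) by a few gradient steps, then read a full-covariance Gaussian off the curvature at that mode. The toy below illustrates the recipe; it is not VLAE's exact training procedure.

    import torch

    def laplace_posterior(log_joint, z0, steps=20, lr=0.1):
        """Full-covariance Gaussian via a Laplace approximation.
        log_joint: z -> log p(x, z) for a fixed x; z0: encoder output
        used as the starting point. Returns the mode and covariance
        (negative inverse Hessian at the mode)."""
        z = z0.clone().requires_grad_(True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):  # gradient ascent to a mode
            opt.zero_grad(); (-log_joint(z)).backward(); opt.step()
        z = z.detach()
        H = torch.autograd.functional.hessian(log_joint, z)
        return z, torch.linalg.inv(-H)

    # Toy joint: a correlated Gaussian, for which Laplace is exact.
    P = torch.tensor([[2.0, 0.6], [0.6, 1.0]])  # precision matrix
    mode, cov = laplace_posterior(lambda z: -0.5 * z @ P @ z, torch.ones(2))
    print(torch.allclose(cov, torch.linalg.inv(P), atol=1e-3))  # True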
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Post-Processing Temporal Action Detection [134.26292288193298]
Temporal Action Detection (TAD) methods typically take a pre-processing step that converts a varying-length input video into a fixed-length snippet representation sequence.
This pre-processing temporally downsamples the video, reducing the inference resolution and hampering detection performance at the original temporal resolution.
We introduce a novel model-agnostic post-processing method without model redesign and retraining.
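To see the resolution issue concretely, a naive baseline just interpolates snippet-level scores back to frame level; this sketch illustrates the gap such post-processing methods aim to close, and is not the paper's method:

    import numpy as np

    def to_frame_resolution(snippet_scores, num_frames):
        """Linearly interpolate a fixed-length snippet score sequence
        back to the original temporal resolution, a crude recovery of
        the detail lost by pre-processing downsampling."""
        snippet_t = np.linspace(0, num_frames - 1, len(snippet_scores))
        return np.interp(np.arange(num_frames), snippet_t, snippet_scores)

    scores = np.array([0.1, 0.2, 0.9, 0.8, 0.2])  # 5 snippets
    print(to_frame_resolution(scores, 20).shape)  # (20,)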
arXiv Detail & Related papers (2022-11-27T19:50:37Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
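"Sampling-free" means the first two moments are propagated analytically rather than by Monte Carlo. For Gaussian multiplicative activation noise followed by a linear layer they have a closed form, as in this sketch (inputs assumed independent; names hypothetical):

    import numpy as np

    def noisy_linear_moments(m, v, W, b, noise_var):
        """Propagate mean m and variance v through multiplicative noise
        eps ~ N(1, noise_var) and a linear layer y = W x + b, without
        sampling: E[a*eps] = m, Var[a*eps] = v(1+nv) + m^2 nv, then
        mean_y = W m + b and var_y = (W*W) var (independent inputs)."""
        v_noisy = v * (1 + noise_var) + m ** 2 * noise_var
        return W @ m + b, (W ** 2) @ v_noisy

    # Agreement with Monte Carlo on a toy layer:
    rng = np.random.default_rng(0)
    m, v = np.array([1.0, -2.0]), np.array([0.5, 0.1])
    W, b = rng.standard_normal((3, 2)), np.zeros(3)
    a = m + np.sqrt(v) * rng.standard_normal((100_000, 2))
    y = (a * rng.normal(1.0, np.sqrt(0.2), a.shape)) @ W.T + b
    mean_y, var_y = noisy_linear_moments(m, v, W, b, 0.2)
    print(np.allclose(y.mean(0), mean_y, atol=0.05),
          np.allclose(y.var(0), var_y, rtol=0.05))  # True True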
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
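One way to make "smoothness-inducing" concrete is to penalize the divergence between the distributions at adjacent time stamps. Below is a generic KL-based penalty for per-time-stamp diagonal Gaussians, a sketch rather than SISVAE's exact objective:

    import torch

    def smoothness_penalty(mu, log_var):
        """Sum_t KL( N(mu_t, var_t) || N(mu_{t-1}, var_{t-1}) ) over a
        (T, D) sequence of diagonal Gaussians; small values mean the
        estimated distribution drifts smoothly over time."""
        m1, m0 = mu[1:], mu[:-1]
        lv1, lv0 = log_var[1:], log_var[:-1]
        kl = 0.5 * (lv0 - lv1 + (lv1 - lv0).exp()
                    + (m1 - m0) ** 2 / lv0.exp() - 1)
        return kl.sum()

    mu = torch.randn(50, 4).cumsum(0) * 0.1  # slowly drifting means
    print(smoothness_penalty(mu, torch.zeros(50, 4)) >= 0)  # tensor(True)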
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Variational Variance: Simple, Reliable, Calibrated Heteroscedastic Noise Variance Parameterization [3.553493344868413]
We propose critiques, in the form of posterior predictive checks (PPCs), to test predictive mean and variance calibration and the predictive distribution's ability to generate sensible data.
We find that our solution, treating heteroscedastic variance variationally, sufficiently regularizes variance to pass these PPCs.
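A textbook way to treat variance variationally is to place a Gamma posterior on the precision; marginalizing it yields a Student-t likelihood whose heavy tails keep predicted variances from collapsing. This standard construction, sketched below, may differ from the paper's exact parameterization:

    import torch

    def student_t_nll(y, mu, alpha, beta):
        """NLL of y with precision ~ Gamma(alpha, beta) marginalized out:
        y | mu ~ Student-t(df = 2*alpha, loc = mu, scale^2 = beta/alpha).
        alpha and beta would come from a network head in practice."""
        nu, s2 = 2 * alpha, beta / alpha
        log_p = (torch.lgamma((nu + 1) / 2) - torch.lgamma(nu / 2)
                 - 0.5 * torch.log(nu * torch.pi * s2)
                 - (nu + 1) / 2 * torch.log1p((y - mu) ** 2 / (nu * s2)))
        return -log_p.mean()

    y = torch.randn(128)
    a = torch.full((128,), 3.0)
    print(student_t_nll(y, torch.zeros(128), a, a))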
arXiv Detail & Related papers (2020-06-08T19:58:35Z)