Relay Variational Inference: A Method for Accelerated Encoderless VI
- URL: http://arxiv.org/abs/2110.13422v1
- Date: Tue, 26 Oct 2021 05:48:00 GMT
- Title: Relay Variational Inference: A Method for Accelerated Encoderless VI
- Authors: Amir Zadeh, Santiago Benoit, Louis-Philippe Morency
- Abstract summary: Relay VI is a framework that dramatically improves the convergence and performance of encoderless VI.
We study the effectiveness of RVI in terms of convergence speed, loss, representation power and missing data imputation.
- Score: 47.72653430712088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational Inference (VI) offers a method for approximating intractable
likelihoods. In neural VI, inference of approximate posteriors is commonly done
using an encoder. Alternatively, encoderless VI offers a framework for learning
generative models from data without the suboptimalities caused by
amortization via an encoder (e.g., in the presence of missing or uncertain data).
However, in the absence of an encoder, such methods often converge slowly
because of the gradient steps required to learn the approximate
posterior parameters. In this paper, we introduce Relay VI (RVI), a framework
that dramatically improves both the convergence and performance of encoderless
VI. In our experiments over multiple datasets, we study the effectiveness of
RVI in terms of convergence speed, loss, representation power and missing data
imputation. We find RVI to be a unique tool, often superior in both performance
and convergence speed to previously proposed encoderless as well as amortized
VI models (e.g. VAE).
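The trade-off the abstract describes can be made concrete with a toy sketch: in encoderless VI, each datapoint gets its own free variational parameters, updated by (slow) gradient steps rather than predicted in one shot by an encoder. The conjugate Gaussian model and update rule below are illustrative assumptions for exposition, not the paper's RVI method.

```python
import numpy as np

# Toy encoderless VI sketch (an illustrative assumption, not RVI itself):
# model   p(z) = N(0, 1),  p(x | z) = N(z, 1)
# posterior q_i(z) = N(mu_i, 1) with one free mean per datapoint,
# optimized directly by gradient ascent on the ELBO -- no encoder network.

def elbo_grad_mu(x, mu):
    # d/dmu [ E_q log p(x|z) + E_q log p(z) - E_q log q(z) ]
    # with unit variances this reduces to (x - mu) - mu
    return (x - mu) - mu

rng = np.random.default_rng(0)
x = rng.normal(size=5)           # observed data
mu = np.zeros_like(x)            # per-datapoint variational means

lr = 0.1
for _ in range(500):             # many slow per-sample gradient steps
    mu += lr * elbo_grad_mu(x, mu)

# for this conjugate model the exact posterior mean is x / 2
print(np.allclose(mu, x / 2, atol=1e-6))
```

The hundreds of inner gradient steps per datapoint are exactly the convergence cost that amortized VI avoids with an encoder, and that RVI targets without one.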
Related papers
- ASPIRE: Iterative Amortized Posterior Inference for Bayesian Inverse Problems [0.974963895316339]
New advances in machine learning and variational inference (VI) have lowered the computational barrier by learning from examples.
Two VI paradigms have emerged that represent different tradeoffs: amortized and non-amortized.
We present a solution that enables iterative improvement of amortized posteriors using the same network architectures and training data.
arXiv Detail & Related papers (2024-05-08T20:03:12Z)
- Efficient Training of Probabilistic Neural Networks for Survival Analysis [0.6437284704257459]
Variational Inference (VI) is a commonly used technique for approximate Bayesian inference and uncertainty estimation in deep learning models.
It comes at a computational cost, as it doubles the number of trainable parameters to represent uncertainty.
We investigate how to train deep probabilistic survival models in large datasets without introducing additional overhead in model complexity.
arXiv Detail & Related papers (2024-04-09T16:10:39Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have notable drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and Temporal Relatedness [78.98998551326812]
We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of features from different sensors.
We propose a model, termed DynImp, that handles missingness at different time points using nearest neighbors along the feature axis.
We show that the method can exploit the multi-modality features from related sensors and also learn from history time-series dynamics to reconstruct the data under extreme missingness.
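DynImp's exact procedure is not specified here; as a minimal illustration of the idea of filling gaps from nearest neighbors along the feature axis, the following is a hypothetical simplification (the function name and parameters are assumptions, not DynImp's API).

```python
import numpy as np

# Minimal nearest-neighbor imputation sketch (an illustrative simplification,
# not DynImp's actual algorithm): each row is a time point, each column a
# sensor feature; NaNs mark missing values.

def knn_impute(X, k=2):
    X = X.astype(float).copy()
    for t in range(X.shape[0]):
        missing = np.isnan(X[t])
        if not missing.any():
            continue
        donors = ~np.isnan(X).any(axis=1)           # fully observed rows
        # distance to row t, computed only on the features observed at time t
        d = np.linalg.norm(X[donors][:, ~missing] - X[t, ~missing], axis=1)
        nn = X[donors][np.argsort(d)[:k]]           # k nearest complete rows
        X[t, missing] = nn[:, missing].mean(axis=0) # fill gap with their mean
    return X

X = np.array([[1.0, 2.0,    3.0],
              [1.1, np.nan, 3.1],
              [5.0, 6.0,    7.0],
              [1.2, 2.2,    3.2]])
# row 1's gap is filled from its two nearest complete rows (rows 0 and 3)
print(knn_impute(X))
```

A multi-modal variant in DynImp's spirit would additionally weight donors by how related their sensors are and by agreement with the series' own history.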
arXiv Detail & Related papers (2022-09-26T21:59:14Z)
- Amortized Variational Inference: A Systematic Review [0.0]
The core principle of Variational Inference (VI) is to convert the statistical inference problem of computing complex posterior probability densities into a tractable optimization problem.
The traditional VI algorithm is not scalable to large data sets and is unable to readily infer out-of-bounds data points.
Recent developments in the field, such as black-box VI and amortized VI, have helped address these issues.
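The "tractable optimization problem" referenced above is, in standard VI, maximization of the evidence lower bound (ELBO); in the usual notation:

```latex
\log p(x) \;\ge\; \underbrace{\mathbb{E}_{q(z)}\!\left[\log p(x \mid z)\right]
  \;-\; \mathrm{KL}\!\left(q(z)\,\big\|\,p(z)\right)}_{\text{ELBO}(q)}
```

Maximizing the ELBO over a chosen family of distributions $q$ tightens this bound and drives $q(z)$ toward the intractable posterior $p(z \mid x)$.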
arXiv Detail & Related papers (2022-09-22T09:45:10Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases defined by mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Missing Value Imputation on Multidimensional Time Series [16.709162372224355]
We present DeepMVI, a deep learning method for missing value imputation in multidimensional time-series datasets.
DeepMVI combines fine-grained and coarse-grained patterns along a time series, and trends from related series across categorical dimensions.
Experiments show that DeepMVI is significantly more accurate, reducing error by more than 50% in more than half the cases.
arXiv Detail & Related papers (2021-03-02T09:55:05Z)
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method.
Our method yields a rigorous lower bound on the evidence with lower-variance gradient estimates.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
- Meta-Learning Divergences of Variational Inference [49.164944557174294]
Variational inference (VI) plays an essential role in approximate Bayesian inference.
We propose a meta-learning algorithm to learn the divergence metric suited for the task of interest.
We demonstrate our approach outperforms standard VI on Gaussian mixture distribution approximation.
arXiv Detail & Related papers (2020-07-06T17:43:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.