Amortized backward variational inference in nonlinear state-space models
- URL: http://arxiv.org/abs/2206.00319v1
- Date: Wed, 1 Jun 2022 08:35:54 GMT
- Title: Amortized backward variational inference in nonlinear state-space models
- Authors: Mathis Chagneux, Élisabeth Gassiat (LMO), Pierre Gloaguen (MIA
Paris-Saclay), Sylvain Le Corff (IP Paris, TSP, SAMOVAR)
- Abstract summary: We consider the problem of state estimation in general state-space models using variational inference.
We establish for the first time that, under mixing assumptions, the variational approximation of expectations of additive state functionals induces an error which grows at most linearly in the number of observations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of state estimation in general state-space models
using variational inference. For a generic variational family defined using the
same backward decomposition as the actual joint smoothing distribution, we
establish for the first time that, under mixing assumptions, the variational
approximation of expectations of additive state functionals induces an error
which grows at most linearly in the number of observations. This guarantee is
consistent with the known upper bounds for the approximation of smoothing
distributions using standard Monte Carlo methods. Moreover, we propose an
amortized inference framework where a neural network shared over all time
steps outputs the parameters of the variational kernels. We also study
empirically parametrizations which allow analytical marginalization of the
variational distributions, and therefore lead to efficient smoothing
algorithms. Significant improvements are made over state-of-the-art variational
solutions, especially when the generative model depends on a strongly nonlinear
and noninjective mixing function.
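For context, here is a minimal sketch of the backward decomposition the abstract refers to (the notation is ours, not necessarily the paper's). With states $x_{0:T}$ and observations $y_{0:T}$, the joint smoothing distribution factorizes backward in time as

\[
p(x_{0:T} \mid y_{0:T}) = p(x_T \mid y_{0:T}) \prod_{t=0}^{T-1} p(x_t \mid x_{t+1}, y_{0:t}),
\]

and the variational family mimics this structure with learnable backward kernels,

\[
q(x_{0:T}) = q_T(x_T) \prod_{t=0}^{T-1} q_t(x_t \mid x_{t+1}).
\]

For an additive functional $h(x_{0:T}) = \sum_{t=0}^{T-1} h_t(x_t, x_{t+1})$, the stated guarantee is that, under mixing assumptions, the approximation error $\lvert \mathbb{E}_q[h(x_{0:T})] - \mathbb{E}[h(x_{0:T}) \mid y_{0:T}] \rvert$ grows at most linearly in the number of observations $T$.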
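To make the amortization concrete, below is a hypothetical sketch (in PyTorch, not the authors' code) of such a backward variational family: a single network shared over all time steps outputs the mean and log-variance of diagonal Gaussian kernels q_t(x_t | x_{t+1}). For simplicity it conditions on the current observation only, whereas the exact backward kernel conditions on y_{0:t}; all class names, layer sizes, and the diagonal Gaussian choice are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' implementation) of an amortized
# backward variational family for a nonlinear state-space model.
import torch
import torch.nn as nn


class BackwardVariationalModel(nn.Module):
    """Shared network outputting the parameters of the backward Gaussian
    kernels q_t(x_t | x_{t+1}) and of the final marginal q_T(x_T)."""

    def __init__(self, state_dim: int, obs_dim: int, hidden: int = 64):
        super().__init__()
        # One network shared over all time steps: it maps the observation y_t
        # and the next state x_{t+1} to the mean and log-variance of q_t.
        self.kernel_net = nn.Sequential(
            nn.Linear(obs_dim + state_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 2 * state_dim),
        )
        # A second small network parameterizes the last marginal q_T(x_T | y_T).
        self.marginal_net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 2 * state_dim),
        )
        self.state_dim = state_dim

    def kernel_params(self, y_t, x_next):
        out = self.kernel_net(torch.cat([y_t, x_next], dim=-1))
        mean, log_var = out.chunk(2, dim=-1)
        return mean, log_var

    def sample_backward(self, y):
        """Draw one trajectory x_0, ..., x_T: sample q_T first, then the
        backward kernels q_t(x_t | x_{t+1}) for t = T-1, ..., 0."""
        T = y.shape[0] - 1
        mean_T, log_var_T = self.marginal_net(y[T]).chunk(2, dim=-1)
        x_next = mean_T + torch.exp(0.5 * log_var_T) * torch.randn(self.state_dim)
        traj = [x_next]
        for t in reversed(range(T)):
            mean, log_var = self.kernel_params(y[t], x_next)
            x_t = mean + torch.exp(0.5 * log_var) * torch.randn(self.state_dim)
            traj.append(x_t)
            x_next = x_t
        traj.reverse()  # reorder as x_0, ..., x_T
        return torch.stack(traj)


# Usage: draw a smoothed state trajectory for 100 two-dimensional observations.
model = BackwardVariationalModel(state_dim=2, obs_dim=2)
y = torch.randn(100, 2)            # stand-in for real observations
states = model.sample_backward(y)  # shape (100, 2)
```

Training would maximize an ELBO over such sampled trajectories; the parametrizations studied in the paper, which allow analytical marginalization of the variational distributions, would replace the Monte Carlo sampling shown here with closed-form Gaussian recursions.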
Related papers
- Generalized Laplace Approximation [23.185126261153236]
We introduce a unified theoretical framework to attribute Bayesian inconsistency to model misspecification and inadequate priors.
We propose the generalized Laplace approximation, which involves a simple adjustment to the Hessian matrix of the regularized loss function.
We assess the performance and properties of the generalized Laplace approximation on state-of-the-art neural networks and real-world datasets.
arXiv Detail & Related papers (2024-05-22T11:11:42Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Robust scalable initialization for Bayesian variational inference with
multi-modal Laplace approximations [0.0]
Variational mixtures with full-covariance structures suffer from quadratic growth in the number of variational parameters with the number of model parameters.
We propose a method for constructing an initial Gaussian model approximation that can be used to warm-start variational inference.
arXiv Detail & Related papers (2023-07-12T19:30:04Z) - Variational Nonlinear Kalman Filtering with Unknown Process Noise
Covariance [24.23243651301339]
This paper presents a solution for joint nonlinear state estimation and model parameter identification based on the approximate Bayesian inference principle.
The performance of the proposed method is verified on radar target tracking applications by both simulated and real-world data.
arXiv Detail & Related papers (2023-05-06T03:34:39Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - A Variational Inference Approach to Inverse Problems with Gamma
Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z) - Loss function based second-order Jensen inequality and its application
to particle variational inference [112.58907653042317]
Particle variational inference (PVI) uses an ensemble of models as an empirical approximation for the posterior distribution.
PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models.
We derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of models.
arXiv Detail & Related papers (2021-06-09T12:13:51Z) - Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central to preventing overfitting in practice.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z) - Moment-Based Variational Inference for Stochastic Differential Equations [31.494103873662343]
We construct the variational process as a controlled version of the prior process.
We approximate the posterior by a set of moment functions.
In combination with moment closure, the smoothing problem is reduced to a deterministic optimal control problem.
arXiv Detail & Related papers (2021-03-01T13:20:38Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the role of the noise it induces in its success is still unclear.
We show that heavy-tailed behaviour commonly arises in the parameters due to multiplicative noise.
A detailed analysis describes how key factors, including step size and the data, affect this behaviour, with similar results observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z) - Gaussian Variational State Estimation for Nonlinear State-Space Models [0.3222802562733786]
We consider the problem of state estimation, in the context of both filtering and smoothing, for nonlinear state-space models.
We develop an assumed Gaussian solution based on variational inference, which offers the key advantage of a flexible, but principled, mechanism for approximating the required distributions.
arXiv Detail & Related papers (2020-02-07T04:46:14Z)