Fast Estimation of Bayesian State Space Models Using Amortized
Simulation-Based Inference
- URL: http://arxiv.org/abs/2210.07154v1
- Date: Thu, 13 Oct 2022 16:37:05 GMT
- Authors: Ramis Khabibullin and Sergei Seleznev
- Abstract summary: This paper presents a fast algorithm for estimating hidden states of Bayesian state space models.
After pretraining, finding the posterior distribution for any dataset takes from hundredths to tenths of a second.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a fast algorithm for estimating hidden states of Bayesian
state space models. The algorithm is a variation of amortized simulation-based
inference algorithms, where a large number of artificial datasets are generated
at the first stage, and then a flexible model is trained to predict the
variables of interest. In contrast to those proposed earlier, the procedure
described in this paper makes it possible to train estimators for hidden states
by concentrating only on certain characteristics of the marginal posterior
distributions and introducing inductive bias. Illustrations using the examples
of the stochastic volatility model, nonlinear dynamic stochastic general
equilibrium model, and seasonal adjustment procedure with breaks in seasonality
show that the algorithm has sufficient accuracy for practical use. Moreover,
after pretraining, which takes several hours, finding the posterior
distribution for any dataset takes from hundredths to tenths of a second.
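The two-stage procedure the abstract describes (simulate many artificial datasets, then train a flexible model to predict a characteristic of the marginal posteriors) can be sketched in miniature. The toy linear-Gaussian model, the least-squares predictor, and all parameter values below are illustrative assumptions, not the paper's actual stochastic volatility or DSGE setups:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state space model (a hypothetical stand-in for the
# paper's examples):
#   h_t = phi * h_{t-1} + eta_t,  eta_t ~ N(0, sigma_h^2)
#   y_t = h_t + eps_t,            eps_t ~ N(0, sigma_y^2)
def simulate(n_datasets, T, phi=0.9, sigma_h=0.5, sigma_y=0.5):
    h = np.zeros((n_datasets, T))
    h[:, 0] = rng.normal(0, sigma_h / np.sqrt(1 - phi**2), n_datasets)
    for t in range(1, T):
        h[:, t] = phi * h[:, t - 1] + rng.normal(0, sigma_h, n_datasets)
    y = h + rng.normal(0, sigma_y, (n_datasets, T))
    return y, h

# Stage 1: generate a large number of artificial datasets.
T = 20
y_train, h_train = simulate(50_000, T)

# Stage 2: train a flexible model to predict one characteristic of the
# marginal posterior -- here the posterior mean of the final hidden state.
# Plain least squares stands in for the paper's flexible estimator.
X = np.c_[np.ones(len(y_train)), y_train]          # add intercept column
beta, *_ = np.linalg.lstsq(X, h_train[:, -1], rcond=None)

# Amortization: once trained, applying the estimator to any new dataset
# is a single matrix-vector product.
y_test, h_test = simulate(2_000, T)
pred = np.c_[np.ones(len(y_test)), y_test] @ beta

rmse = np.sqrt(np.mean((pred - h_test[:, -1]) ** 2))
prior_sd = 0.5 / np.sqrt(1 - 0.9**2)   # stationary sd of h_t
print(rmse < prior_sd)                 # the trained estimator beats the prior
```

In this linear-Gaussian toy the exact posterior mean is itself linear in the observations, so least squares recovers it; the paper's point is that the same pretrain-once, predict-instantly pattern extends to nonlinear models via flexible function approximators.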
Related papers
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z) - A variational neural Bayes framework for inference on intractable posterior distributions [1.0801976288811024]
Posterior distributions of model parameters are efficiently obtained by feeding observed data into a trained neural network.
We show theoretically that our posteriors converge to the true posteriors in Kullback-Leibler divergence.
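The amortized pattern this entry describes, feeding observed data into a trained model to read off the posterior, can be checked on a conjugate toy problem where the exact posterior is known in closed form. The model, the one-parameter linear "network", and all values below are assumptions for illustration, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(1)

# Conjugate toy model (assumed for illustration):
#   theta ~ N(0, tau^2),  y_i | theta ~ N(theta, sigma^2),  i = 1..n
# The exact posterior mean is the shrinkage estimator
#   E[theta | y] = (n / sigma^2) / (n / sigma^2 + 1 / tau^2) * ybar
tau, sigma, n = 1.0, 1.0, 5

# Simulate (theta, data) pairs and fit a map from the sample mean to
# theta; a single least-squares slope stands in for the neural network.
theta = rng.normal(0, tau, 100_000)
ybar = theta + rng.normal(0, sigma / np.sqrt(n), 100_000)
w = (ybar @ theta) / (ybar @ ybar)     # least-squares slope

# The learned map should converge to the analytic posterior-mean weight.
shrink = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)
print(abs(w - shrink) < 0.01)
```

Minimizing the expected squared error over simulated (parameter, data) pairs targets the posterior mean by construction, which is why the fitted slope matches the analytic shrinkage weight; richer networks extend the same argument to intractable posteriors.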
arXiv Detail & Related papers (2024-04-16T20:40:15Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - Online Identification of Stochastic Continuous-Time Wiener Models Using
Sampled Data [4.037738063437126]
We develop an online estimation algorithm based on an output-error predictor for the identification of continuous-time Wiener models.
The method is robust with respect to the assumptions on the spectrum of the disturbance process.
arXiv Detail & Related papers (2024-03-09T12:33:09Z) - Refining Amortized Posterior Approximations using Gradient-Based Summary
Statistics [0.9176056742068814]
We present an iterative framework to improve the amortized approximations of posterior distributions in the context of inverse problems.
We validate our method in a controlled setting by applying it to a stylized problem, and observe improved posterior approximations with each iteration.
arXiv Detail & Related papers (2023-05-15T15:47:19Z) - Regularized Vector Quantization for Tokenized Image Synthesis [126.96880843754066]
Quantizing images into discrete representations has been a fundamental problem in unified generative modeling.
Deterministic quantization suffers from severe codebook collapse and misalignment with the inference stage, while stochastic quantization suffers from low codebook utilization and a perturbed reconstruction objective.
This paper presents a regularized vector quantization framework that mitigates the above issues effectively by applying regularization from two perspectives.
arXiv Detail & Related papers (2023-03-11T15:20:54Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Bayesian Imaging With Data-Driven Priors Encoded by Neural Networks:
Theory, Methods, and Algorithms [2.266704469122763]
This paper proposes a new methodology for performing Bayesian inference in imaging inverse problems where the prior knowledge is available in the form of training data.
We establish the existence and well-posedness of the associated posterior moments under easily verifiable conditions.
A model accuracy analysis suggests that the Bayesian probabilities reported by the data-driven models are also remarkably accurate under a frequentist definition.
arXiv Detail & Related papers (2021-03-18T11:34:08Z) - Improving Maximum Likelihood Training for Text Generation with Density
Ratio Estimation [51.091890311312085]
We propose a new training scheme for auto-regressive sequence generative models, which is effective and stable when operating at large sample space encountered in text generation.
Our method stably outperforms Maximum Likelihood Estimation and other state-of-the-art sequence generative models in terms of both quality and diversity.
arXiv Detail & Related papers (2020-07-12T15:31:24Z) - Accurate Characterization of Non-Uniformly Sampled Time Series using
Stochastic Differential Equations [0.0]
Non-uniform sampling arises when an experimenter does not have full control over the sampling characteristics of the process under investigation.
We introduce new initial estimates for the numerical optimization of the likelihood.
We show the increased accuracy achieved by the new estimator in simulation experiments.
arXiv Detail & Related papers (2020-07-02T13:03:09Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but its role in that success is still unclear.
We show that multiplicative noise commonly arises in the parameter dynamics due to minibatch variance, leading to heavy-tailed behavior.
A detailed analysis of key factors, including step size and data, shows that state-of-the-art neural network models exhibit similar results.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of these summaries (including all information) and is not responsible for any consequences of their use.