Expectation propagation for the smoothing distribution in dynamic probit
- URL: http://arxiv.org/abs/2309.01641v1
- Date: Mon, 4 Sep 2023 14:49:45 GMT
- Title: Expectation propagation for the smoothing distribution in dynamic probit
- Authors: Niccolò Anceschi, Augusto Fasano, Giovanni Rebaudo
- Abstract summary: The smoothing distribution of dynamic probit models with Gaussian state dynamics was recently proved to belong to the unified skew-normal family.
We derive an efficient EP routine to perform inference for such a distribution.
We show that the proposed approximation leads to accuracy gains over available approximate algorithms in a financial illustration.
- Score: 1.6114012813668932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The smoothing distribution of dynamic probit models with Gaussian state
dynamics was recently proved to belong to the unified skew-normal family.
Although this is computationally tractable in small-to-moderate settings, it
may become computationally impractical in higher dimensions. In this work,
adapting a recent more general class of expectation propagation (EP)
algorithms, we derive an efficient EP routine to perform inference for such a
distribution. We show that the proposed approximation leads to accuracy gains
over available approximate algorithms in a financial illustration.
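To make the tractability trade-off concrete, the sketch below evaluates the closed-form marginal likelihood of a static Bayesian probit model, a known byproduct of the unified skew-normal results, which reduces to an $n$-dimensional Gaussian CDF $\Phi_n$. This is an illustration, not the paper's dynamic smoothing routine: the toy model, prior, and dimensions are all assumptions. The point is that the $\Phi_n$ evaluation is exactly what becomes impractical as $n$ grows, which is what the EP routine is designed to avoid.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy static probit: y_i ~ Bernoulli(Phi(x_i' beta)), beta ~ N(xi, Omega).
n, p = 5, 2                       # small n: the closed form is tractable
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, size=n)
xi, Omega = np.zeros(p), np.eye(p)

# Closed form reported in the unified skew-normal literature:
# p(y) = Phi_n(D xi; D Omega D' + I_n), with D = diag(2y - 1) X.
D = (2 * y - 1)[:, None] * X
cov = D @ Omega @ D.T + np.eye(n)
p_y = multivariate_normal(mean=np.zeros(n), cov=cov).cdf(D @ xi)
print(f"exact marginal likelihood: {p_y:.4f}")
# The n-dimensional Gaussian CDF is the bottleneck: its cost grows quickly
# with n, which is what makes EP attractive in higher dimensions.
```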
Related papers
- Efficient expectation propagation for posterior approximation in high-dimensional probit models [1.433758865948252]
We focus on the expectation propagation (EP) approximation of the posterior distribution in Bayesian probit regression.
We show how to leverage results on the extended multivariate skew-normal distribution to derive an efficient implementation of the EP routine.
This makes EP computationally feasible even in challenging high-dimensional settings, as shown in a detailed simulation study.
arXiv Detail & Related papers (2023-09-04T14:07:19Z)
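The computational core of EP in probit models is the site update: moment matching a Gaussian cavity against a single probit factor, whose mean and variance are available in closed form through Gaussian/skew-normal identities. A minimal sketch of that one update follows; the notation and interface are mine, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def probit_site_moments(m, v, y):
    """Mean/variance of the tilted density p(t) ∝ N(t; m, v) Φ(y t),
    with y ∈ {-1, +1}. Closed form via Gaussian/skew-normal identities."""
    s = np.sqrt(1.0 + v)
    z = y * m / s
    ratio = norm.pdf(z) / norm.cdf(z)       # "inverse Mills" ratio
    mean = m + y * v * ratio / s
    var = v - (v**2 / (1.0 + v)) * ratio * (z + ratio)
    return mean, var

# Example: cavity N(0, 4) combined with a positive observation.
print(probit_site_moments(0.0, 4.0, +1))   # mean shifts up, variance shrinks
```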
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
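For context, a minimal sketch of plain annealed importance sampling with a fixed geometric path and uniform temperature schedule; the paper's contribution, adapting the schedule for constant-rate progress under $\alpha$-divergences, is not reproduced here. The toy base/target densities and tuning constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic AIS between a N(0,1) base and an unnormalized target,
# using a geometric path and Gaussian random-walk MH transitions.
log_p0 = lambda x: -0.5 * x**2                     # base (unnormalized)
log_p1 = lambda x: -0.5 * (x - 3.0)**2 / 0.25      # target (unnormalized)

def ais(n_samples=2000, betas=np.linspace(0, 1, 50), n_mcmc=3, step=0.5):
    x = rng.standard_normal(n_samples)             # exact draws from the base
    logw = np.zeros(n_samples)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # importance-weight increment from annealing b0 -> b1
        logw += (b1 - b0) * (log_p1(x) - log_p0(x))
        for _ in range(n_mcmc):                    # rejuvenate at level b1
            prop = x + step * rng.standard_normal(n_samples)
            log_acc = (1 - b1) * (log_p0(prop) - log_p0(x)) \
                    + b1 * (log_p1(prop) - log_p1(x))
            x = np.where(np.log(rng.random(n_samples)) < log_acc, prop, x)
    return x, logw

x, logw = ais()
w = np.exp(logw - logw.max())
print("weighted mean ≈", np.sum(w * x) / np.sum(w))   # ≈ 3.0
```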
- Generalizing Gaussian Smoothing for Random Search [23.381986209234164]
Gaussian smoothing (GS) is a derivative-free optimization algorithm that estimates the gradient of an objective using perturbations of the current parameters.
We propose choosing the perturbation distribution so as to minimize the mean squared error (MSE) of the gradient estimate, yielding distributions with provably smaller MSE.
arXiv Detail & Related papers (2022-11-27T04:42:05Z)
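A sketch of the baseline being improved upon: the vanilla Gaussian-smoothing gradient estimate with standard normal perturbations. The paper replaces this perturbation distribution with ones of provably smaller MSE; those are not implemented here, and the toy objective is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def gs_grad(f, x, sigma=0.1, n=100):
    """Vanilla Gaussian-smoothing gradient estimate:
       g ≈ E[f(x + sigma * u) * u] / sigma, with u ~ N(0, I)."""
    u = rng.standard_normal((n, x.size))
    fx = np.array([f(x + sigma * ui) for ui in u])
    return (fx[:, None] * u).mean(axis=0) / sigma

f = lambda x: np.sum(x**2)            # true gradient at x is 2x
x = np.array([1.0, -2.0])
print(gs_grad(f, x, n=5000))          # ≈ [2, -4], up to MC noise
```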
- Ensemble Multi-Quantiles: Adaptively Flexible Distribution Prediction for Uncertainty Quantification [4.728311759896569]
We propose a novel, succinct, and effective approach for distribution prediction to quantify uncertainty in machine learning.
It incorporates adaptively flexible distribution prediction of $\mathbb{P}(\mathbf{y}|\mathbf{X}=x)$ in regression tasks.
On extensive regression tasks from UCI datasets, we show that EMQ achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-26T11:45:32Z)
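A hedged sketch of the generic multi-quantile idea underlying this line of work, using scikit-learn's pinball-loss gradient boosting to predict several conditional quantiles of $\mathbb{P}(\mathbf{y}|\mathbf{X}=x)$. This is not the paper's EMQ ensemble; the synthetic data and quantile levels are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Toy heteroscedastic regression data.
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1 + 0.05 * X[:, 0])

# One boosted model per quantile level (pinball loss).
levels = [0.05, 0.25, 0.5, 0.75, 0.95]
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                       n_estimators=200).fit(X, y)
          for q in levels}

# The fitted quantiles together sketch the conditional distribution.
x_new = np.array([[8.0]])
for q in levels:
    print(f"q={q:.2f}: {models[q].predict(x_new)[0]: .3f}")
```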
- Distributional Gradient Boosting Machines [77.34726150561087]
Our framework is based on XGBoost and LightGBM.
We show that our framework achieves state-of-the-art forecast accuracy.
arXiv Detail & Related papers (2022-04-02T06:32:19Z)
- Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
arXiv Detail & Related papers (2020-08-19T12:35:55Z)
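A toy illustration of the distinction: for a model that is nonlinear in its parameter, the naive Laplace predictive pushes weight samples through the original model, while the GLM predictive pushes them through the model linearized at the MAP. Everything below (the one-parameter "network", the finite-difference Laplace fit) is an assumed toy setup, not the paper's BNN machinery.

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda a: 1 / (1 + np.exp(-a))

# Toy 1-parameter model: logit f(x, w) = tanh(w * x), Bernoulli likelihood.
X = rng.uniform(-2, 2, 40)
y = rng.random(40) < sigmoid(np.tanh(1.5 * X))     # data from w_true = 1.5

def neg_log_post(w, prior_var=1.0):
    p = sigmoid(np.tanh(w * X))
    return -np.sum(y * np.log(p) + (~y) * np.log(1 - p)) + 0.5 * w**2 / prior_var

# Crude MAP by grid search, Laplace variance by finite differences.
grid = np.linspace(-3, 3, 2001)
w_map = grid[np.argmin([neg_log_post(w) for w in grid])]
h = 1e-3
hess = (neg_log_post(w_map + h) - 2 * neg_log_post(w_map)
        + neg_log_post(w_map - h)) / h**2
ws = w_map + np.sqrt(1.0 / hess) * rng.standard_normal(5000)

x_star = 1.0
# Naive Laplace predictive: sampled weights through the nonlinear model.
p_naive = sigmoid(np.tanh(ws * x_star)).mean()
# GLM predictive: linearize f around w_map, then average.
J = x_star / np.cosh(w_map * x_star) ** 2          # df/dw at w_map
p_glm = sigmoid(np.tanh(w_map * x_star) + J * (ws - w_map)).mean()
print(f"naive: {p_naive:.3f}   GLM predictive: {p_glm:.3f}")
```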
- Tensor Networks contraction and the Belief Propagation algorithm [0.0]
Belief propagation is a well-studied message-passing algorithm that runs over graphical models.
We show how it can be adapted to the world of PEPS tensor networks and used as an approximate contraction scheme.
arXiv Detail & Related papers (2020-08-10T22:03:25Z)
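For reference, a minimal sum-product belief propagation sketch on a chain of binary variables, where BP is exact; the adaptation to PEPS contraction in the paper is not shown. The potentials and sizes are arbitrary assumptions.

```python
import numpy as np

# Sum-product BP on a chain of binary variables (exact on trees/chains).
# Unary potentials phi[i] on x_i; pairwise psi[i] couples x_i and x_{i+1}.
n = 5
rng = np.random.default_rng(5)
phi = rng.uniform(0.5, 1.5, size=(n, 2))
psi = [np.array([[1.2, 0.8], [0.8, 1.2]])] * (n - 1)   # smoothing coupling

# Forward and backward messages along the chain.
fwd = [np.ones(2) for _ in range(n)]
bwd = [np.ones(2) for _ in range(n)]
for i in range(1, n):
    fwd[i] = psi[i - 1].T @ (phi[i - 1] * fwd[i - 1])
    fwd[i] /= fwd[i].sum()                             # normalize for stability
for i in range(n - 2, -1, -1):
    bwd[i] = psi[i] @ (phi[i + 1] * bwd[i + 1])
    bwd[i] /= bwd[i].sum()

# Single-site marginals: product of local potential and incoming messages.
marginals = np.array([phi[i] * fwd[i] * bwd[i] for i in range(n)])
marginals /= marginals.sum(axis=1, keepdims=True)
print(marginals)
```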
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
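A hedged sketch of the surrogate idea in its simplest form: fit an ordinary (not deep) Gaussian process to a simulator discrepancy and read off a cheap approximate posterior. The simulator, discrepancy, and exp(-discrepancy) pseudo-posterior below are illustrative assumptions in the spirit of GP-based likelihood-free inference, not the paper's DGP model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)

# "Simulator": draws data given theta; we only observe a discrepancy.
y_obs = 2.0
def discrepancy(theta, n_rep=20):
    sims = rng.normal(theta, 1.0, size=n_rep)
    return (sims.mean() - y_obs) ** 2

# Fit a GP surrogate to the discrepancy from a handful of simulator calls.
thetas = np.linspace(-5, 5, 15)[:, None]
d = np.array([discrepancy(t[0]) for t in thetas])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(thetas, d)

# Cheap approximate posterior ∝ exp(-surrogate discrepancy) on a fine grid.
grid = np.linspace(-5, 5, 500)[:, None]
post = np.exp(-gp.predict(grid))
post /= post.sum()
print("posterior mean ≈", float((grid[:, 0] * post).sum()))   # ≈ 2.0
```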
- A maximum-entropy approach to off-policy evaluation in average-reward MDPs [54.967872716145656]
This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs).
We provide the first finite-sample OPE error bound, extending existing results beyond the episodic and discounted cases.
We show that this results in an exponential-family distribution whose sufficient statistics are the features, paralleling maximum-entropy approaches in supervised learning.
arXiv Detail & Related papers (2020-06-17T18:13:37Z)
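To unpack the exponential-family remark, here is a small sketch that fits $p_\theta(s) \propto \exp(\theta^\top \phi(s))$ on a finite state space by matching feature expectations (gradient ascent on the maximum-entropy dual). This illustrates the distributional form only, not the paper's OPE estimator or its error bound; the state space and features are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_states, n_feat = 20, 3
phi = rng.standard_normal((n_states, n_feat))      # feature map phi(s)
# Target feature means, taken here from an assumed empirical sample.
target = phi[rng.integers(0, n_states, 100)].mean(axis=0)

theta = np.zeros(n_feat)
for _ in range(500):                               # ascent on the concave dual
    p = np.exp(phi @ theta)
    p /= p.sum()                                   # p_theta(s) on the grid
    theta += 0.5 * (target - phi.T @ p)            # gradient = moment mismatch

print(np.abs(phi.T @ p - target).max())            # ≈ 0: moments matched
```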
- Mean-Field Approximation to Gaussian-Softmax Integral with Application to Uncertainty Estimation [23.38076756988258]
We propose a new single-model based approach to quantify uncertainty in deep neural networks.
We use a mean-field approximation formula to compute an analytically intractable integral.
Empirically, the proposed approach performs competitively when compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-06-13T07:32:38Z)
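A sketch of the kind of closed-form approximation at play, compared against Monte Carlo. The probit-inspired rescaling used below is a common mean-field heuristic for Gaussian-softmax integrals; whether it coincides with the paper's exact formula is an assumption.

```python
import numpy as np

rng = np.random.default_rng(8)
mu = np.array([2.0, 0.5, -1.0])       # logit means
s2 = np.array([1.0, 4.0, 0.25])       # logit variances (diagonal Gaussian)

# Monte Carlo "ground truth" for E[softmax(z)], z ~ N(mu, diag(s2)).
z = mu + np.sqrt(s2) * rng.standard_normal((200_000, 3))
e = np.exp(z - z.max(axis=1, keepdims=True))
mc = (e / e.sum(axis=1, keepdims=True)).mean(axis=0)

# Probit-style mean-field heuristic: rescale each logit by its variance.
scaled = mu / np.sqrt(1.0 + (np.pi / 8.0) * s2)
approx = np.exp(scaled) / np.exp(scaled).sum()

print("MC:    ", np.round(mc, 3))
print("approx:", np.round(approx, 3))
```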
- Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
These models are universal approximators of continuous distributions; because of this guaranteed expressivity, they can capture multimodal target distributions without compromising the efficiency of sample generation.
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
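The elementary one-dimensional building block behind Gaussianization is easy to state: map data through its empirical CDF and then through the standard normal quantile function. A sketch follows; the trainable, multivariate flow in the paper stacks such steps with rotations, which is not shown here.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(9)

# One-dimensional Gaussianization: empirical CDF followed by the
# standard normal quantile function.
x = rng.exponential(scale=2.0, size=5000)          # non-Gaussian data
ranks = (np.argsort(np.argsort(x)) + 0.5) / x.size # empirical CDF in (0, 1)
z = norm.ppf(ranks)                                # Gaussianized data

print(kstest(z, "norm"))                           # close to standard normal
```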
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.