Variational bagging: a robust approach for Bayesian uncertainty quantification
- URL: http://arxiv.org/abs/2511.20594v1
- Date: Tue, 25 Nov 2025 18:24:17 GMT
- Title: Variational bagging: a robust approach for Bayesian uncertainty quantification
- Authors: Shitao Fan, Ilsang Ohn, David Dunson, Lizhen Lin
- Abstract summary: We introduce a variational bagging approach that integrates a bagging procedure with variational Bayes. We establish strong theoretical guarantees, including posterior contraction rates for general models. We illustrate our variational bagging method in numerical studies through applications to parametric models, finite mixture models, deep neural networks, and variational autoencoders (VAEs).
- Score: 3.1932150827796675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational Bayes methods are popular due to their computational efficiency and adaptability to diverse applications. In specifying the variational family, mean-field classes are commonly used, which enables efficient algorithms such as coordinate ascent variational inference (CAVI) but fails to capture parameter dependence and typically underestimates uncertainty. In this work, we introduce a variational bagging approach that integrates a bagging procedure with variational Bayes, resulting in a bagged variational posterior for improved inference. We establish strong theoretical guarantees, including posterior contraction rates for general models and a Bernstein-von Mises (BVM) type theorem that ensures valid uncertainty quantification. Notably, our results show that even when using a mean-field variational family, our approach can recover off-diagonal elements of the limiting covariance structure and provide proper uncertainty quantification. In addition, variational bagging is robust to model misspecification, with covariance structures matching those of the target covariance. We illustrate our variational bagging method in numerical studies through applications to parametric models, finite mixture models, deep neural networks, and variational autoencoders (VAEs).
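The core idea of the abstract, pooling mean-field variational fits over bootstrap resamples so that the pooled posterior regains off-diagonal covariance, can be sketched in a few lines. This is a minimal illustration of bagging applied to a factorized (diagonal-covariance) posterior approximation, not the paper's exact procedure; the toy data, the `mean_field_vi` helper, and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a correlated 2-D Gaussian, so the exact posterior over the mean
# has off-diagonal covariance that a single mean-field fit cannot represent.
n, d = 200, 2
true_cov = np.array([[1.0, 0.8], [0.8, 1.0]])
x = rng.multivariate_normal(np.zeros(d), true_cov, size=n)

def mean_field_vi(data):
    # Hypothetical mean-field Gaussian fit to the posterior of the mean:
    # factorized by construction, so its covariance is diagonal.
    m = data.mean(axis=0)
    v = data.var(axis=0) / len(data)  # per-coordinate posterior variance
    return m, v

B = 500  # number of bootstrap replicates
draws = []
for _ in range(B):
    boot = x[rng.integers(0, n, size=n)]     # bagging: resample the data
    m, v = mean_field_vi(boot)               # mean-field VI on the resample
    draws.append(rng.normal(m, np.sqrt(v)))  # one draw from this fit

bagged = np.vstack(draws)  # samples from the bagged variational posterior
corr = np.corrcoef(bagged.T)[0, 1]
print(corr)  # clearly positive: off-diagonal dependence recovered
```

Each individual fit has zero posterior correlation between the two coordinates; the between-resample variability of the fitted means is what restores the cross-covariance in the pooled sample.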
Related papers
- Mean-field Variational Bayes for Sparse Probit Regression [0.9023847175654603]
We consider Bayesian variable selection for binary outcomes under a probit link, with a spike-and-slab prior on the regression coefficients. Motivated by the computational challenges encountered by Markov chain Monte Carlo samplers in high-dimensional regimes, we develop a mean-field variational Bayes approximation.
arXiv Detail & Related papers (2026-01-29T14:16:31Z) - Fast Variational Bayes for Large Spatial Data [0.0]
We introduce spVarBayes, a suite of fast variational Bayesian approaches for large-scale geospatial data analysis using NNGP. We replace auto-differentiation with a combination of calculus of variations, closed-form gradient updates, and linear response corrections for improved variance estimation. Simulation experiments demonstrate that we achieve accuracy comparable to spNNGP at reduced computational cost, and considerably outperform existing variational inference methods in both accuracy and speed.
arXiv Detail & Related papers (2025-07-16T13:59:27Z) - Variational Bayesian surrogate modelling with application to robust design optimisation [0.9626666671366836]
Surrogate models provide a quick-to-evaluate approximation to complex computational models.
We consider Bayesian inference for constructing statistical surrogates with input uncertainties and dimensionality reduction.
We demonstrate the approach on robust structural optimisation problems where cost functions depend on a weighted sum of the mean and standard deviation of model outputs.
arXiv Detail & Related papers (2024-04-23T09:22:35Z) - Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
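The EnKF analysis step underlying such schemes can be illustrated in isolation. The sketch below is a textbook stochastic EnKF update with a sample Kalman gain, not the cited paper's GPSSM-specific algorithm; the observation operator, ensemble size, and noise level are made-up values for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, y_obs, H, obs_var):
    # One stochastic-EnKF analysis step: shift each ensemble member toward
    # a perturbed copy of the observation using the sample Kalman gain.
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)      # state anomalies
    Y = X @ H.T                               # predicted-observation anomalies
    P_xy = X.T @ Y / (N - 1)                  # sample cross-covariance
    P_yy = Y.T @ Y / (N - 1) + obs_var * np.eye(H.shape[0])
    K = P_xy @ np.linalg.inv(P_yy)            # sample Kalman gain
    perturbed = y_obs + np.sqrt(obs_var) * rng.standard_normal((N, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Prior ensemble centred far from the truth; observing only the first
# coordinate pulls the ensemble toward the observation y = 0.
ens = rng.standard_normal((256, 2)) + 5.0
H = np.array([[1.0, 0.0]])
updated = enkf_update(ens, y_obs=np.array([0.0]), H=H, obs_var=0.1)
print(updated[:, 0].mean())  # pulled from ~5 toward the observation 0
```

Because the gain is estimated from ensemble statistics alone, no gradients or explicit parameterization of the posterior are needed, which is the property the entry above exploits inside variational inference.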
arXiv Detail & Related papers (2023-12-10T15:22:30Z) - Robust scalable initialization for Bayesian variational inference with multi-modal Laplace approximations [0.0]
Variational mixtures with full-covariance structures suffer from quadratic growth in the number of variational parameters as the number of model parameters increases.
We propose a method for constructing an initial Gaussian model approximation that can be used to warm-start variational inference.
arXiv Detail & Related papers (2023-07-12T19:30:04Z) - The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss to assess downstream uncertainty.
We show that the change in the evaluation due to regularization consistently estimates the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z) - Learning Counterfactually Invariant Predictors [11.682403472580162]
We propose a model-agnostic framework, called Counterfactually Invariant Prediction (CIP)
Our experimental results demonstrate the effectiveness of CIP in enforcing counterfactual invariance across various simulated and real-world datasets.
arXiv Detail & Related papers (2022-07-20T09:23:35Z) - A Variational Approach to Bayesian Phylogenetic Inference [7.251627034538359]
We present a variational framework for Bayesian phylogenetic analysis.
We train the variational approximation via stochastic gradient ascent and adopt gradient estimators for continuous and discrete variational parameters.
Experiments on a benchmark of challenging real data phylogenetic inference problems demonstrate the effectiveness and efficiency of our methods.
arXiv Detail & Related papers (2022-04-16T08:23:48Z) - A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z) - Adaptive variational Bayes: Optimality, computation and applications [2.4022340214033915]
We propose a novel adaptive variational Bayes framework, which can operate on a collection of models.
We show that the adaptive variational Bayes attains optimal contraction rates adaptively under very general conditions.
arXiv Detail & Related papers (2021-09-07T17:18:28Z) - Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian Nonparametrics [85.31247588089686]
We show that variational Bayesian methods can yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models.
We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
arXiv Detail & Related papers (2021-07-08T03:40:18Z) - Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.