Debiased Bayesian Inference for High-dimensional Regression Models
- URL: http://arxiv.org/abs/2512.09257v1
- Date: Wed, 10 Dec 2025 02:24:37 GMT
- Title: Debiased Bayesian Inference for High-dimensional Regression Models
- Authors: Qihui Chen, Zheng Fang, Ruixuan Liu
- Abstract summary: We introduce a novel debiasing approach that corrects the bias for the entire posterior distribution. We establish a new Bernstein-von Mises theorem that guarantees the frequentist validity of the debiased posterior.
- Score: 8.361498779640419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been significant progress in Bayesian inference based on sparsity-inducing (e.g., spike-and-slab and horseshoe-type) priors for high-dimensional regression models. The resulting posteriors, however, in general do not possess desirable frequentist properties, and the credible sets thus cannot serve as valid confidence sets even asymptotically. We introduce a novel debiasing approach that corrects the bias for the entire Bayesian posterior distribution. We establish a new Bernstein-von Mises theorem that guarantees the frequentist validity of the debiased posterior. We demonstrate the practical performance of our proposal through Monte Carlo simulations and two empirical applications in economics.
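The abstract does not spell out the form of the correction, but a minimal sketch in the spirit of debiased-Lasso one-step corrections, applied draw by draw to the posterior, might look like the following. All names, and the use of a nodewise-Lasso-style precision estimate `Theta_hat`, are illustrative assumptions rather than the paper's actual procedure.

```python
import numpy as np

def debias_posterior_draws(draws, X, y, Theta_hat):
    """Apply a debiased-Lasso-style one-step correction to each draw.

    draws     : (S, p) array of posterior draws of the coefficients
    X, y      : design matrix (n, p) and response (n,)
    Theta_hat : (p, p) approximate inverse of the Gram matrix X'X / n,
                e.g. from nodewise Lasso regressions (an assumption here)
    """
    n = X.shape[0]
    corrected = np.empty_like(draws)
    for s, beta in enumerate(draws):
        # One-step correction: beta + Theta_hat X'(y - X beta) / n
        corrected[s] = beta + Theta_hat @ X.T @ (y - X @ beta) / n
    return corrected
```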
Related papers
- Bayesian Semiparametric Causal Inference: Targeted Doubly Robust Estimation of Treatment Effects [1.2833734915643464]
We propose a semiparametric Bayesian methodology for estimating the average treatment effect (ATE) within the potential outcomes framework.
Our method introduces a Bayesian debiasing procedure that corrects for bias arising from nuisance estimation.
Extensive simulations confirm the theoretical results, demonstrating accurate point estimation and credible intervals with nominal coverage.
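As context for the debiasing target, the classical doubly robust (AIPW) point estimate of the ATE can be computed as below; this is the standard frequentist construction shown only as a reference point, and the function and argument names are illustrative.

```python
import numpy as np

def aipw_ate(y, a, mu1, mu0, e):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    y        : (n,) observed outcomes
    a        : (n,) binary treatment indicators
    mu1, mu0 : (n,) fitted outcome predictions E[Y | A=1, X], E[Y | A=0, X]
    e        : (n,) fitted propensity scores P(A = 1 | X)
    """
    # Consistent if either the outcome model or the propensity model
    # is correctly specified (double robustness).
    psi = mu1 - mu0 + a * (y - mu1) / e - (1 - a) * (y - mu0) / (1 - e)
    return psi.mean()
```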
arXiv Detail & Related papers (2025-11-19T22:15:04Z) - Predictively Oriented Posteriors [4.135680181585462]
We advocate a new statistical principle that combines the most desirable aspects of both parameter inference and density estimation.
The resulting predictively oriented (PrO) posteriors converge to the predictively optimal model average at rate $n^{-1/2}$.
We show that PrO posteriors can be sampled from by evolving particles based on mean-field Langevin dynamics.
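A minimal sketch of particle-based sampling via (unadjusted) Langevin dynamics is shown below. The mean-field aspect, where each particle's drift may depend on the whole particle cloud, is folded into the assumed `grad_fn`; none of the names come from the paper.

```python
import numpy as np

def evolve_particles(grad_fn, particles, step=1e-3, n_steps=1000, seed=None):
    """Unadjusted Langevin evolution of a particle cloud (sketch).

    grad_fn(particles) -> array of the same shape as `particles`; in true
    mean-field Langevin dynamics each particle's drift may depend on the
    whole cloud, which this signature allows but does not specify.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        noise = rng.standard_normal(particles.shape)
        particles = particles + step * grad_fn(particles) \
            + np.sqrt(2.0 * step) * noise
    return particles
```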
arXiv Detail & Related papers (2025-10-02T11:33:26Z) - Generalization Certificates for Adversarially Robust Bayesian Linear Regression [16.3368950151084]
Adversarial robustness of machine learning models is critical to ensuring reliable performance under data perturbations.
Recent progress has focused on point estimators; this paper considers distributional predictors.
Experiments on real and synthetic datasets demonstrate the superior robustness of the derived adversarially robust posterior over the Bayes posterior.
arXiv Detail & Related papers (2025-02-20T06:25:30Z) - In-Context Parametric Inference: Point or Distribution Estimators? [66.22308335324239]
Our experiments indicate that amortized point estimators generally outperform posterior inference, though the latter remains competitive in some low-dimensional problems.
arXiv Detail & Related papers (2025-02-17T10:00:24Z) - Reproducible Parameter Inference Using Bagged Posteriors [9.975422461924705]
Under model misspecification, it is known that Bayesian posteriors often do not properly quantify uncertainty about true or pseudo-true parameters.
We consider the probability that two confidence sets constructed from independent data sets have nonempty overlap.
We show that credible sets from the standard posterior can strongly violate this bound, particularly in high-dimensional settings.
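A minimal sketch of a bagged ("BayesBag"-style) posterior, assuming a generic `draw_posterior` routine and numpy-indexable data, pools posterior draws across bootstrap resamples.

```python
import numpy as np

def bayesbag(draw_posterior, data, n_boot=50, seed=None):
    """Pool posterior draws across bootstrap resamples (sketch).

    draw_posterior(subset) -> (S, p) array of posterior draws fitted to
    one resampled data set; `data` must support numpy fancy indexing.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    pooled = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        pooled.append(draw_posterior(data[idx]))
    # The bagged posterior is the mixture of the per-resample posteriors.
    return np.concatenate(pooled, axis=0)
```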
arXiv Detail & Related papers (2023-11-03T16:28:16Z) - Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
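A rough illustration of a differentiable relaxation of coverage calibration error is sketched below. The sigmoid surrogate and the rank-based formulation are assumptions for illustration, not the paper's exact relaxation (which is designed for end-to-end backpropagation through the neural posterior).

```python
import numpy as np

def relaxed_coverage_penalty(ranks, alphas, temperature=0.1):
    """Soft surrogate for coverage calibration error (illustrative).

    ranks  : (m,) posterior ranks of the true parameters in [0, 1], one
             per simulated data set (computed from the neural posterior)
    alphas : credible levels at which coverage is checked
    """
    penalty = 0.0
    for a in alphas:
        # Relax the hard indicator 1{rank <= a} with a sigmoid so the
        # term stays differentiable under an autodiff framework.
        soft_cover = 1.0 / (1.0 + np.exp(-(a - ranks) / temperature))
        penalty += (soft_cover.mean() - a) ** 2
    return penalty
```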
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
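For reference, the marginal likelihood being tuned is the standard GP regression quantity; a minimal numpy implementation via a Cholesky factorization:

```python
import numpy as np

def gp_log_marginal_likelihood(K, y, noise_var):
    """Log marginal likelihood of zero-mean GP regression.

    K         : (n, n) kernel matrix at the training inputs
    y         : (n,) training targets
    noise_var : observation-noise variance (a tunable hyperparameter)
    """
    n = len(y)
    A = K + noise_var * np.eye(n)
    L = np.linalg.cholesky(A)                            # A = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # A^{-1} y
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()                   # -0.5 log|A|
            - 0.5 * n * np.log(2.0 * np.pi))
```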
arXiv Detail & Related papers (2022-10-14T08:09:33Z) - Posterior concentration and fast convergence rates for generalized Bayesian learning [4.186575888568896]
We study the learning rate of generalized Bayes estimators in a general setting.
We prove that under the multi-scale Bernstein's condition, the generalized posterior distribution concentrates around the set of optimal hypotheses.
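The abstract does not display the estimator, but generalized Bayes posteriors are typically of the Gibbs form below, with a loss function $\ell$ and a learning rate $\eta > 0$; standard Bayes is recovered when $\ell$ is the negative log-likelihood and $\eta = 1$.

```latex
\pi_{n,\eta}(\theta \mid Z_{1:n}) \;\propto\;
  \exp\Big(-\eta \sum_{i=1}^{n} \ell(\theta, Z_i)\Big)\, \pi(\theta),
  \qquad \eta > 0.
```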
arXiv Detail & Related papers (2021-11-19T14:25:21Z) - On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds for RF regression under both constant and adaptive step-size SGD settings.
We observe the double descent phenomenon both theoretically and empirically.
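A minimal sketch of random-features regression trained with constant-step SGD; random Fourier features with a cosine nonlinearity are one common choice, and everything here is illustrative rather than the paper's exact setting.

```python
import numpy as np

def fit_rf_sgd(X, y, n_features=512, step=0.1, n_epochs=10, seed=None):
    """Random-features regression trained with constant-step SGD (sketch)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((n_features, d))       # frozen random weights
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)  # frozen random phases
    theta = np.zeros(n_features)                   # trainable output layer
    for _ in range(n_epochs):
        for i in rng.permutation(len(y)):
            phi = np.cos(W @ X[i] + b)             # random Fourier features
            theta -= step * (phi @ theta - y[i]) * phi  # squared-loss SGD
    return W, b, theta
```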
arXiv Detail & Related papers (2021-10-13T17:47:39Z) - Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
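A minimal sketch of the kind of "a bit Bayesian" predictive the paper studies: a Laplace-approximated Gaussian posterior over the last layer only, pushed through MacKay's probit approximation so that predictive probabilities shrink toward 1/2 as the logit variance grows (names are illustrative).

```python
import numpy as np

def laplace_probit_predictive(phi, w_map, cov):
    """'A bit Bayesian' binary predictive via last-layer Laplace (sketch).

    phi   : (d,) features from the fixed ReLU feature extractor
    w_map : (d,) MAP weights of the last layer
    cov   : (d, d) posterior covariance from the Laplace approximation
    """
    mean = phi @ w_map
    var = phi @ cov @ phi                           # predictive logit variance
    kappa = 1.0 / np.sqrt(1.0 + np.pi * var / 8.0)  # probit approximation
    # As var grows far from the data, the probability shrinks toward 0.5.
    return 1.0 / (1.0 + np.exp(-kappa * mean))
```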
arXiv Detail & Related papers (2020-02-24T08:52:06Z) - Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
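The mechanism in question is simple to state in code: the ensemble's predictive distribution is the average of the members' predictive distributions, i.e., an equally weighted approximation to Bayesian model averaging.

```python
import numpy as np

def ensemble_predictive(members, x):
    """Deep-ensemble predictive as approximate Bayesian marginalization.

    members : list of callables, each mapping x to class probabilities
    """
    probs = np.stack([m(x) for m in members])  # (n_members, n_classes)
    return probs.mean(axis=0)                  # equal-weight model average
```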
arXiv Detail & Related papers (2020-02-20T15:13:27Z) - Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
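A heavily simplified sketch of one posterior-sampling (Thompson) step under distributional robustness; the `robust_objective`, which would evaluate a worst case over an uncertainty set of distributions, is left abstract, and nothing here should be read as the DRBQO algorithm itself.

```python
import numpy as np

def thompson_robust_step(sample_f, candidates, robust_objective):
    """One posterior-sampling step under distributional robustness (sketch).

    sample_f         : draws one random function f from the GP posterior
    candidates       : sequence of candidate decisions x
    robust_objective : maps (f, x) to a worst-case expected value of
                       f(x, w) over an uncertainty set of distributions
    """
    f = sample_f()
    scores = [robust_objective(f, x) for x in candidates]
    return candidates[int(np.argmax(scores))]
```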
arXiv Detail & Related papers (2020-01-19T12:00:33Z)