Handling bounded response in high dimensions: a Horseshoe prior Bayesian Beta regression approach
- URL: http://arxiv.org/abs/2505.22211v1
- Date: Wed, 28 May 2025 10:39:05 GMT
- Title: Handling bounded response in high dimensions: a Horseshoe prior Bayesian Beta regression approach
- Authors: The Tien Mai
- Abstract summary: We propose a novel Bayesian approach for a high-dimensional sparse Beta regression framework that employs a tempered posterior. Our method is implemented in the R package "betaregbayes", available on GitHub.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Bounded continuous responses -- such as proportions -- arise frequently in diverse scientific fields including climatology, biostatistics, and finance. Beta regression is a widely adopted framework for modeling such data, due to the flexibility of the Beta distribution over the unit interval. While Bayesian extensions of Beta regression have shown promise, existing methods are limited to low-dimensional settings and lack theoretical guarantees. In this work, we propose a novel Bayesian approach for a high-dimensional sparse Beta regression framework that employs a tempered posterior. Our method incorporates the Horseshoe prior for effective shrinkage and variable selection. Most notably, we propose a novel Gibbs sampling algorithm using Pólya-Gamma augmentation for efficient inference in the Beta regression model. We also provide the first theoretical results establishing posterior consistency and convergence rates for Bayesian Beta regression. Through extensive simulation studies in both low- and high-dimensional scenarios, we demonstrate that our approach outperforms existing alternatives, offering improved estimation accuracy and model interpretability. Our method is implemented in the R package "betaregbayes", available on GitHub.
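The following is a hedged sketch of the standard ingredients behind this abstract: the mean-precision Beta regression likelihood, the Horseshoe shrinkage hierarchy, and a tempered (fractional) posterior. The notation, link function, and tempering exponent are illustrative assumptions, not details confirmed by the paper.

```latex
% Mean-precision Beta regression with logit link, Horseshoe prior, and
% tempered posterior (standard forms; paper-specific choices assumed):
\begin{align*}
  y_i \mid \mu_i,\phi &\sim \mathrm{Beta}\big(\mu_i\phi,\;(1-\mu_i)\phi\big),
    \qquad \operatorname{logit}(\mu_i) = x_i^{\top}\beta, \\
  \beta_j \mid \lambda_j,\tau &\sim \mathcal{N}\big(0,\;\lambda_j^{2}\tau^{2}\big),
    \qquad \lambda_j \sim \mathcal{C}^{+}(0,1), \quad \tau \sim \mathcal{C}^{+}(0,1), \\
  \pi_{\alpha}(\beta \mid \mathcal{D}) &\propto L_n(\beta)^{\alpha}\,\pi(\beta),
    \qquad \alpha \in (0,1].
\end{align*}
```

Putting the same pieces together numerically, here is a minimal base-R sketch that evaluates this tempered log-posterior. It is not the betaregbayes API (whose function names are unknown here), and the half-Cauchy normalizing constants are dropped since only the unnormalized log-posterior matters for MCMC:

```r
# Hedged sketch: tempered Beta-regression log-posterior under a Horseshoe prior.
# Not the betaregbayes interface; all names here are illustrative.
tempered_log_posterior <- function(beta, lambda, tau, X, y, phi, alpha = 0.9) {
  mu <- plogis(X %*% beta)                              # logit link, elementwise
  loglik <- sum(dbeta(y, mu * phi, (1 - mu) * phi, log = TRUE))
  logprior <- sum(dnorm(beta, 0, lambda * tau, log = TRUE)) +  # beta_j | lambda_j, tau
    sum(dcauchy(lambda, 0, 1, log = TRUE)) +            # half-Cauchy, up to a constant
    dcauchy(tau, 0, 1, log = TRUE)
  alpha * loglik + logprior                             # tempering exponent alpha
}

set.seed(1)
n <- 50; p <- 5
X <- matrix(rnorm(n * p), n, p)
beta_true <- c(1.5, -1, 0, 0, 0)                        # sparse truth
mu <- plogis(X %*% beta_true); phi <- 20
y <- rbeta(n, mu * phi, (1 - mu) * phi)
tempered_log_posterior(beta_true, lambda = rep(1, p), tau = 0.5, X, y, phi)
```

The paper's contribution is a Pólya-Gamma-augmented Gibbs sampler for this kind of target; the sketch above only shows the density such a sampler would explore.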
Related papers
- Lasso Penalization for High-Dimensional Beta Regression Models: Computation, Analysis, and Inference [3.330229314824914]
We develop an estimation framework for high-dimensional Beta regression based on a lasso-penalized negative log-likelihood, with non-asymptotic guarantees for the resulting predictors. A gradient method is devised for optimizing the penalized negative log-likelihood function (a standard form of this objective is sketched after this list). Our theoretical analysis is corroborated via simulation, and a real data example concerning the prediction of county-level incarceration is presented.
arXiv Detail & Related papers (2025-07-26T23:19:17Z)
- High-dimensional Bayesian Tobit regression for censored response with Horseshoe prior [0.0]
We propose a novel framework for high-dimensional Tobit regression that addresses both censoring and sparsity (the standard Tobit model is sketched after this list). We establish posterior consistency and derive concentration rates under sparsity, providing the first theoretical results for Bayesian Tobit models in high dimensions.
arXiv Detail & Related papers (2025-05-13T07:05:27Z)
- Bayesian Circular Regression with von Mises Quasi-Processes [57.88921637944379]
In this work we explore a family of expressive and interpretable distributions over circle-valued random functions. For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Gibbs sampling. We present experiments applying this model to the prediction of wind directions and the percentage of the running gait cycle as a function of joint angles.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS); a standard form of the spike-and-slab prior is sketched after this list.
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Fast post-process Bayesian inference with Variational Sparse Bayesian Quadrature [13.36200518068162]
We propose the framework of post-process Bayesian inference as a means to obtain a quick posterior approximation from existing target density evaluations. Within this framework, we introduce Variational Sparse Bayesian Quadrature (VSBQ), a method for post-process approximate inference for models with black-box and potentially noisy likelihoods. We validate our method on challenging synthetic scenarios and real-world applications from computational neuroscience.
arXiv Detail & Related papers (2023-03-09T13:58:35Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- A flexible empirical Bayes approach to multiple linear regression and connections with penalized regression [8.663322701649454]
We introduce a new empirical Bayes approach for large-scale multiple linear regression.
Our approach combines two key ideas: the use of flexible "adaptive shrinkage" priors and variational approximations.
We show that the posterior mean from our method solves a penalized regression problem.
arXiv Detail & Related papers (2022-08-23T12:42:57Z)
- Density Estimation with Autoregressive Bayesian Predictives [1.5771347525430772]
In the context of density estimation, the standard Bayesian approach is to target the posterior predictive.
We develop a novel parameterization of the bandwidth using an autoregressive neural network that maps the data into a latent space.
arXiv Detail & Related papers (2022-06-13T20:43:39Z)
- Quasi Black-Box Variational Inference with Natural Gradients for Bayesian Learning [84.90242084523565]
We develop an optimization algorithm suitable for Bayesian learning in complex models.
Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations (the generic natural-gradient update is sketched after this list).
arXiv Detail & Related papers (2022-05-23T18:54:27Z)
- Human Pose Regression with Residual Log-likelihood Estimation [48.30425850653223]
We propose a novel regression paradigm with Residual Log-likelihood Estimation (RLE) to capture the underlying output distribution.
RLE learns the change of the distribution instead of the unreferenced underlying distribution to facilitate the training process.
Compared to the conventional regression paradigm, regression with RLE brings a 12.4 mAP improvement on MSCOCO without any test-time overhead.
arXiv Detail & Related papers (2021-07-23T15:06:31Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in NLP model updates.
We formulate regression-free model updates as a constrained optimization problem (a generic form is sketched after this list).
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
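For the lasso-penalized Beta regression entry above, the objective being optimized typically has the standard form below; the precise penalty weighting and parameterization in that paper are assumptions here.

```latex
% Lasso-penalized Beta regression objective (standard form; details assumed):
\[
\hat{\beta} \in \arg\min_{\beta}\Big\{-\ell_n(\beta) + \lambda\,\|\beta\|_1\Big\},
\qquad
\ell_n(\beta) = \sum_{i=1}^{n}\log f_{\mathrm{Beta}}\big(y_i;\,\mu_i(\beta)\phi,\,(1-\mu_i(\beta))\phi\big).
\]
```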
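For the high-dimensional Bayesian Tobit entry, the standard type-I Tobit model with left-censoring at zero reads as follows; the paper's censoring point and prior hyperparameters are assumptions here, with the Horseshoe hierarchy placed on β as in the sketch after the main abstract.

```latex
% Type-I Tobit model, left-censored at zero (censoring point assumed):
\[
y_i^{*} = x_i^{\top}\beta + \varepsilon_i, \qquad
\varepsilon_i \sim \mathcal{N}(0,\sigma^2), \qquad
y_i = \max\{y_i^{*},\,0\}.
\]
```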
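For the KBASS entry, a spike-and-slab prior in its generic form places a point mass at zero and a Gaussian slab on each weight; KBASS's exact hierarchy is an assumption here.

```latex
% Generic spike-and-slab prior (KBASS specifics assumed):
\[
w_j \mid s_j \sim (1-s_j)\,\delta_0 + s_j\,\mathcal{N}(0,\sigma_0^2),
\qquad s_j \sim \mathrm{Bernoulli}(\pi_0).
\]
```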
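For the quasi black-box variational inference entry, the textbook natural-gradient step preconditions the ordinary ELBO gradient by the inverse Fisher information of the variational family; the paper's exact update is an assumption here.

```latex
% Generic natural-gradient ascent on variational parameters (paper's update assumed):
\[
\lambda_{t+1} = \lambda_t + \rho_t\,F(\lambda_t)^{-1}\,\nabla_{\lambda}\mathcal{L}(\lambda_t),
\qquad
F(\lambda) = \mathbb{E}_{q_\lambda}\big[\nabla_\lambda \log q_\lambda(z)\,\nabla_\lambda \log q_\lambda(z)^{\top}\big].
\]
```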
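For the NLP regression-bugs entry, a generic constrained formulation of a regression-free update minimizes the new model's loss subject to a cap on prediction flips against the old model on examples the old model got right; the paper's exact constraint is an assumption here.

```latex
% Generic constrained formulation (paper's exact constraint assumed):
\[
\min_{f_{\mathrm{new}}}\; \mathcal{L}(f_{\mathrm{new}})
\quad \text{s.t.} \quad
\Pr\big[f_{\mathrm{new}}(x) \neq f_{\mathrm{old}}(x),\; f_{\mathrm{old}}(x) = y\big] \le \epsilon.
\]
```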
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.