High-dimensional Bayesian Tobit regression for censored response with Horseshoe prior
- URL: http://arxiv.org/abs/2505.08288v1
- Date: Tue, 13 May 2025 07:05:27 GMT
- Title: High-dimensional Bayesian Tobit regression for censored response with Horseshoe prior
- Authors: The Tien Mai
- Abstract summary: We propose a novel framework for high-dimensional Tobit regression that addresses both censoring and sparsity. We establish posterior consistency and derive concentration rates under sparsity, providing the first theoretical results for Bayesian Tobit models in high dimensions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Censored response variables--where outcomes are only partially observed due to known bounds--arise in numerous scientific domains and present serious challenges for regression analysis. The Tobit model, a classical solution for handling left-censoring, has been widely used in economics and beyond. However, with the increasing prevalence of high-dimensional data, where the number of covariates exceeds the sample size, traditional Tobit methods become inadequate. While frequentist approaches for high-dimensional Tobit regression have recently been developed, notably through Lasso-based estimators, the Bayesian literature remains sparse and lacks theoretical guarantees. In this work, we propose a novel Bayesian framework for high-dimensional Tobit regression that addresses both censoring and sparsity. Our method leverages the Horseshoe prior to induce shrinkage and employs a data augmentation strategy to facilitate efficient posterior computation via Gibbs sampling. We establish posterior consistency and derive concentration rates under sparsity, providing the first theoretical results for Bayesian Tobit models in high dimensions. Numerical experiments show that our approach compares favorably with the recent Lasso-Tobit method. Our method is implemented in the R package tobitbayes, which can be found on GitHub.
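To make the data-augmentation idea concrete, below is a minimal illustrative sketch of a Gibbs sampler for a Tobit model left-censored at 0, with a Horseshoe prior handled through the standard inverse-gamma auxiliary-variable representation of the half-Cauchy scales (Makalic and Schmidt, 2016). This is a hedged sketch under those assumptions; the function and variable names (e.g. `tobit_horseshoe_gibbs`) are hypothetical and this is not the implementation in the tobitbayes package.

```r
# Illustrative Gibbs sampler: Tobit regression (left-censored at 0) with a
# Horseshoe prior. Sketch only -- not the 'tobitbayes' implementation.

# Inverse-gamma draw via 1/Gamma: density proportional to x^{-shape-1} exp(-rate/x)
rinvgamma <- function(n, shape, rate) 1 / rgamma(n, shape = shape, rate = rate)

tobit_horseshoe_gibbs <- function(y, X, n_iter = 2000) {
  n <- nrow(X); p <- ncol(X)
  cens <- (y <= 0)                       # censored observations
  beta <- rep(0, p); sigma2 <- 1
  lambda2 <- rep(1, p); nu <- rep(1, p)  # local scales + auxiliaries
  tau2 <- 1; xi <- 1                     # global scale + auxiliary
  ystar <- y                             # latent uncensored response
  draws <- matrix(NA_real_, n_iter, p)

  for (t in 1:n_iter) {
    ## 1. Data augmentation: impute latent response for censored cases from
    ##    N(x'beta, sigma2) truncated to (-Inf, 0] (inverse-CDF method)
    mu <- as.vector(X %*% beta)
    u <- runif(sum(cens))
    ystar[cens] <- mu[cens] + sqrt(sigma2) *
      qnorm(u * pnorm(0, mean = mu[cens], sd = sqrt(sigma2)))

    ## 2. beta | rest ~ N(A^{-1} X'ystar, sigma2 * A^{-1}),
    ##    A = X'X + (tau2 * Lambda)^{-1}, Lambda = diag(lambda2)
    A <- crossprod(X) + diag(1 / (tau2 * lambda2), p)
    R <- chol(A)
    m <- backsolve(R, forwardsolve(t(R), crossprod(X, ystar)))
    beta <- as.vector(m + sqrt(sigma2) * backsolve(R, rnorm(p)))

    ## 3. Error variance (Jeffreys prior on sigma2)
    resid <- ystar - as.vector(X %*% beta)
    sigma2 <- rinvgamma(1, (n + p) / 2,
                        (sum(resid^2) + sum(beta^2 / (tau2 * lambda2))) / 2)

    ## 4. Horseshoe local/global scales via inverse-gamma auxiliaries
    lambda2 <- rinvgamma(p, 1, 1 / nu + beta^2 / (2 * tau2 * sigma2))
    nu      <- rinvgamma(p, 1, 1 + 1 / lambda2)
    tau2    <- rinvgamma(1, (p + 1) / 2,
                         1 / xi + sum(beta^2 / lambda2) / (2 * sigma2))
    xi      <- rinvgamma(1, 1, 1 + 1 / tau2)

    draws[t, ] <- beta
  }
  draws
}
```

In practice one would discard a burn-in before summarizing the draws, and in truly high-dimensional settings (p much larger than n) the Cholesky-based draw of beta would typically be replaced by a fast Gaussian-scale-mixture sampler such as that of Bhattacharya et al. (2016).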
Related papers
- Handling bounded response in high dimensions: a Horseshoe prior Bayesian Beta regression approach [0.0]
We propose a novel Bayesian framework for high-dimensional sparse Beta regression that employs a tempered posterior. Our method is implemented in the R package betaregbayes, available on GitHub.
arXiv Detail & Related papers (2025-05-28T10:39:05Z) - A sparse PAC-Bayesian approach for high-dimensional quantile prediction [0.0]
This paper presents a novel probabilistic machine learning approach for high-dimensional quantile prediction.
It uses a pseudo-Bayesian framework with a scaled Student-t prior and Langevin Monte Carlo for efficient computation; a minimal sketch of such a sampler appears after this related-papers list.
Its effectiveness is validated through simulations and real-world data, where it performs competitively against established frequentist and Bayesian techniques.
arXiv Detail & Related papers (2024-09-03T08:01:01Z) - Fast post-process Bayesian inference with Variational Sparse Bayesian Quadrature [13.36200518068162]
We propose the framework of post-process Bayesian inference as a means to obtain a quick posterior approximation from existing target density evaluations. Within this framework, we introduce Variational Sparse Bayesian Quadrature (VSBQ), a method for post-process approximate inference for models with black-box and potentially noisy likelihoods. We validate our method on challenging synthetic scenarios and real-world applications from computational neuroscience.
arXiv Detail & Related papers (2023-03-09T13:58:35Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Density Estimation with Autoregressive Bayesian Predictives [1.5771347525430772]
In the context of density estimation, the standard Bayesian approach is to target the posterior predictive.
We develop a novel parameterization of the bandwidth using an autoregressive neural network that maps the data into a latent space.
arXiv Detail & Related papers (2022-06-13T20:43:39Z) - Decoupling Shrinkage and Selection for the Bayesian Quantile Regression [0.0]
This paper extends the idea of decoupling shrinkage and sparsity for continuous priors to Bayesian Quantile Regression (BQR).
In the first step, we shrink the quantile regression posterior through state-of-the-art continuous priors, and in the second step, we sparsify the posterior through an efficient variant of the adaptive lasso.
Our procedure can be used to communicate to policymakers which variables drive downside risk to the macro economy.
arXiv Detail & Related papers (2021-07-18T17:22:33Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Spike and slab variational Bayes for high dimensional logistic regression [5.371337604556311]
Variational Bayes (VB) is a popular scalable alternative to Markov chain Monte Carlo for Bayesian inference.
We provide non-asymptotic theoretical guarantees for the VB posterior in both $\ell_2$ and prediction loss for a sparse truth.
We confirm the improved performance of our VB algorithm over common sparse VB approaches in a numerical study.
arXiv Detail & Related papers (2020-10-22T12:49:58Z) - Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate a simple method, which we call prediction-time batch normalization, that significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z) - Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z) - Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z) - Censored Quantile Regression Forest [81.9098291337097]
We develop a new estimating equation that adapts to censoring and leads to the quantile score whenever the data do not exhibit censoring.
The proposed procedure, named censored quantile regression forest, allows us to estimate quantiles of time-to-event outcomes without any parametric modeling assumption.
arXiv Detail & Related papers (2020-01-08T23:20:23Z)
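Referring back to the sparse PAC-Bayesian quantile prediction entry above, the Langevin Monte Carlo step it mentions can be sketched as an unadjusted Langevin algorithm targeting a pseudo-posterior built from the check (pinball) loss with an independent scaled Student-t prior on each coefficient. This is an illustrative sketch under assumed defaults (step size, temperature, prior scale, degrees of freedom); the function name `lmc_quantile` and all settings are hypothetical, not the paper's implementation.

```r
# Unadjusted Langevin algorithm for a quantile-regression pseudo-posterior
# pi(beta) proportional to exp(-lambda * sum_i rho_tau(y_i - x_i'beta)) * p(beta),
# with a scaled Student-t prior p on each coefficient. Illustrative sketch only.

lmc_quantile <- function(y, X, tau = 0.5, lambda = 1, df = 3, scale = 1,
                         h = 1e-4, n_iter = 5000) {
  p <- ncol(X)
  beta <- rep(0, p)
  draws <- matrix(NA_real_, n_iter, p)
  for (t in 1:n_iter) {
    r <- y - as.vector(X %*% beta)
    # (sub)gradient of the summed check loss, rho_tau(u) = u * (tau - 1{u < 0})
    g_loss <- -as.vector(crossprod(X, tau - (r < 0)))
    # gradient of the log Student-t prior density (location 0), per coordinate
    g_prior <- -(df + 1) * beta / (df * scale^2 + beta^2)
    # Langevin step on the log pseudo-posterior: -lambda * loss + log prior
    grad <- -lambda * g_loss + g_prior
    beta <- beta + h * grad + sqrt(2 * h) * rnorm(p)
    draws[t, ] <- beta
  }
  draws
}
```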