Decoupling Shrinkage and Selection for the Bayesian Quantile Regression
- URL: http://arxiv.org/abs/2107.08498v1
- Date: Sun, 18 Jul 2021 17:22:33 GMT
- Title: Decoupling Shrinkage and Selection for the Bayesian Quantile Regression
- Authors: David Kohns and Tibor Szendrei
- Abstract summary: This paper extends the idea of decoupling shrinkage and sparsity for continuous priors to Bayesian Quantile Regression (BQR).
In the first step, we shrink the quantile regression posterior through state-of-the-art continuous priors; in the second step, we sparsify the posterior through an efficient variant of the adaptive lasso.
Our procedure can be used to communicate to policymakers which variables drive downside risk to the macro economy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper extends the idea of decoupling shrinkage and sparsity for
continuous priors to Bayesian Quantile Regression (BQR). The procedure follows
two steps: in the first step, we shrink the quantile regression posterior
through state-of-the-art continuous priors, and in the second step, we sparsify
the posterior through an efficient variant of the adaptive lasso, the signal
adaptive variable selection (SAVS) algorithm. We propose a new variant of
SAVS which automates the choice of penalisation through quantile-specific
loss functions that are valid in high dimensions. We show in large-scale
simulations that, compared to the un-sparsified regression posterior, our
selection procedure decreases bias irrespective of the true underlying degree
of sparsity in the data. We apply our two-step approach to a high-dimensional
growth-at-risk (GaR) exercise. The prediction accuracy of the un-sparsified
posterior is retained while yielding interpretable quantile-specific variable
selection results. Our procedure can be used to communicate to policymakers
which variables drive downside risk to the macro economy.
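The sparsification step can be illustrated with the standard SAVS soft-thresholding rule of Ray and Bhattacharya, which the paper builds on; the quantile-specific automated penalty proposed in the paper is not reproduced here. A minimal sketch, where the function name `savs` and the toy data are illustrative:

```python
import numpy as np

def savs(beta, X):
    """Signal adaptive variable selection: soft-threshold a posterior(-mean)
    coefficient vector with the adaptive penalty mu_j = 1/beta_j^2, so that
    weak signals are zeroed out while strong signals are barely shrunk."""
    col_norm2 = np.sum(X ** 2, axis=0)        # ||X_j||^2 for each column
    mu = 1.0 / np.maximum(beta ** 2, 1e-12)   # adaptive-lasso-style penalty
    shrunk = np.maximum(np.abs(beta) * col_norm2 - mu, 0.0)
    return np.sign(beta) * shrunk / col_norm2

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
beta_post = np.array([1.5, -0.8, 0.02, 0.0, 0.6])  # stand-in posterior mean
beta_sparse = savs(beta_post, X)
print(beta_sparse)  # near-zero entries are set exactly to zero
```

In the two-step approach, this rule would be applied to the (shrunken) quantile regression posterior, yielding exact zeros for variables with negligible signal while retaining the fit of the continuous-prior posterior.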
Related papers
- Refined Risk Bounds for Unbounded Losses via Transductive Priors [58.967816314671296]
We revisit the sequential variants of linear regression with the squared loss, classification problems with hinge loss, and logistic regression.
Our key tools are based on the exponential weights algorithm with carefully chosen transductive priors.
arXiv Detail & Related papers (2024-10-29T00:01:04Z)
- Linear Regression Using Quantum Annealing with Continuous Variables [0.0]
The boson system facilitates the optimization of linear regression without resorting to discrete approximations.
The major benefit of our new approach is that it can ensure accuracy without increasing the number of qubits as long as the adiabatic condition is satisfied.
arXiv Detail & Related papers (2024-10-11T06:49:09Z)
- Retire: Robust Expectile Regression in High Dimensions [3.9391041278203978]
Penalized quantile and expectile regression methods offer useful tools to detect heteroscedasticity in high-dimensional data.
We propose and study (penalized) robust expectile regression (retire).
We show that the proposed procedure can be efficiently solved by a semismooth Newton coordinate descent algorithm.
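An expectile is the asymmetric-least-squares analogue of a quantile. For a plain sample it can be computed by iterating weighted means, which makes the concept concrete; a minimal sketch (the function name `expectile` is illustrative and not from the retire package):

```python
import numpy as np

def expectile(y, tau, tol=1e-10, max_iter=200):
    """tau-expectile of a sample: the minimiser of the asymmetric squared
    loss sum_i |tau - 1{y_i <= e}| * (y_i - e)^2, found by iterating
    weighted means (weight tau above the current value, 1-tau below)."""
    e = y.mean()
    for _ in range(max_iter):
        w = np.where(y > e, tau, 1.0 - tau)
        e_new = np.sum(w * y) / np.sum(w)
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

y = np.random.default_rng(2).standard_normal(100_000)
print(expectile(y, 0.5))  # equals the sample mean
print(expectile(y, 0.9))  # lies above the mean
```

Because the loss is squared rather than piecewise linear, expectiles are everywhere differentiable in the parameter, which is what allows methods like the semismooth Newton coordinate descent mentioned above to be efficient in high dimensions.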
arXiv Detail & Related papers (2022-12-11T18:03:12Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Improved conformalized quantile regression [0.0]
Conformalized quantile regression is a procedure that inherits the advantages of conformal prediction and quantile regression.
We propose to cluster the explanatory variables weighted by their permutation importance with an optimized k-means and apply k conformal steps.
To show that this improved version outperforms the classic version of conformalized quantile regression and is more adaptive to heteroscedasticity, we extensively compare the prediction intervals of both methods on open datasets.
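The split-conformal step behind conformalized quantile regression can be sketched in a few lines. This is a toy version: a real implementation fits conditional quantile regressors, which are replaced here by unconditional empirical quantiles so the example stays self-contained; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy heteroscedastic data
n = 2000
x = rng.uniform(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.1 + 0.5 * x)

# split into a proper training set and a calibration set
idx = rng.permutation(n)
train, cal = idx[:1000], idx[1000:]

alpha = 0.1
# stand-in "quantile model": unconditional empirical quantiles of the
# training responses (real CQR fits conditional quantile regressors here)
q_lo = np.quantile(y[train], alpha / 2)
q_hi = np.quantile(y[train], 1 - alpha / 2)

# conformity scores: how far calibration points fall outside [q_lo, q_hi]
scores = np.maximum(q_lo - y[cal], y[cal] - q_hi)
k = int(np.ceil((1 - alpha) * (len(cal) + 1)))
Q = np.sort(scores)[k - 1]  # finite-sample conformal correction

lo, hi = q_lo - Q, q_hi + Q
coverage = np.mean((y >= lo) & (y <= hi))
print(f"interval [{lo:.2f}, {hi:.2f}], empirical coverage {coverage:.3f}")
```

The conformal correction Q widens (or narrows) the quantile band so that the interval attains roughly the nominal 1 - alpha coverage on exchangeable data, regardless of how well the underlying quantile model is specified.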
arXiv Detail & Related papers (2022-07-06T16:54:36Z)
- High-dimensional regression with potential prior information on variable importance [0.0]
We propose a simple scheme involving fitting a sequence of models indicated by the ordering.
We show that the computational cost for fitting all models when ridge regression is used is no more than for a single fit of ridge regression.
We describe a strategy for Lasso regression that makes use of previous fits to greatly speed up fitting the entire sequence of models.
arXiv Detail & Related papers (2021-09-23T10:34:37Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that attenuating step-size is required for exact convergence with the fact that constant step-size learns faster in time up to an error.
Rather than fixing the minibatch or the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
- Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered Weighted $L_1$ (OWL) regularized regression is a new regression analysis for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order of the primal solution with the unknown order structure.
arXiv Detail & Related papers (2020-06-29T23:35:53Z)
- Optimal Feature Manipulation Attacks Against Linear Regression [64.54500628124511]
In this paper, we investigate how to manipulate the coefficients obtained via linear regression by adding carefully designed poisoning data points to the dataset or modifying the original data points.
Given the energy budget, we first provide the closed-form solution of the optimal poisoning data point when our target is modifying one designated regression coefficient.
We then extend the analysis to the more challenging scenario where the attacker aims to change one particular regression coefficient while keeping the changes to the other coefficients as small as possible.
arXiv Detail & Related papers (2020-02-29T04:26:59Z)
- Censored Quantile Regression Forest [81.9098291337097]
We develop a new estimating equation that adapts to censoring and reduces to the quantile score when the data do not exhibit censoring.
The proposed procedure, named censored quantile regression forest, allows us to estimate quantiles of time-to-event data without any parametric modeling assumption.
arXiv Detail & Related papers (2020-01-08T23:20:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site makes no guarantees about the quality of the information presented and is not responsible for any consequences arising from its use.