Robust Boosting for Regression Problems
- URL: http://arxiv.org/abs/2002.02054v2
- Date: Fri, 7 Aug 2020 21:09:17 GMT
- Title: Robust Boosting for Regression Problems
- Authors: Xiaomeng Ju, Matías Salibián-Barrera
- Abstract summary: Gradient boosting algorithms construct a regression predictor using a linear combination of ``base learners''.
The robust boosting algorithm is based on a two-stage approach, similar to what is done for robust linear regression.
When no atypical observations are present, the robust boosting approach works as well as standard gradient boosting with a squared loss.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gradient boosting algorithms construct a regression predictor using a linear
combination of ``base learners''. Boosting also offers an approach to obtaining
robust non-parametric regression estimators that are scalable to applications
with many explanatory variables. The robust boosting algorithm is based on a
two-stage approach, similar to what is done for robust linear regression: it
first minimizes a robust residual scale estimator, and then improves it by
optimizing a bounded loss function. Unlike previous robust boosting proposals
this approach does not require computing an ad-hoc residual scale estimator in
each boosting iteration. Since the loss functions involved in this robust
boosting algorithm are typically non-convex, a reliable initialization step is
required, such as an L1 regression tree, which is also fast to compute. A
robust variable importance measure can also be calculated via a permutation
procedure. Thorough simulation studies and several data analyses show that,
when no atypical observations are present, the robust boosting approach works
as well as the standard gradient boosting with a squared loss. Furthermore,
when the data contain outliers, the robust boosting estimator outperforms the
alternatives in terms of prediction error and variable selection accuracy.
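The two-stage idea described in the abstract can be illustrated with a short sketch. The code below is a simplified illustration, not the authors' implementation: it initializes with an L1 (least absolute deviation) regression tree, takes Stage-1 boosting steps while re-estimating a robust (MAD) residual scale, and then, with the scale held fixed, runs Stage-2 gradient boosting on Tukey's bisquare loss of the scaled residuals. The function names, tree depth, step size, iteration counts, and the use of MAD in place of the paper's M-scale are all assumptions; it requires NumPy and a recent scikit-learn.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def tukey_gradient(resid, scale, c=4.685):
    """Derivative of Tukey's bisquare loss rho((y - f)/scale) w.r.t. the fit f."""
    u = resid / scale
    inside = np.abs(u) < c
    return np.where(inside, -(u / scale) * (1.0 - (u / c) ** 2) ** 2, 0.0)


def mad_scale(resid):
    """MAD residual scale (a stand-in for the paper's robust M-scale)."""
    return max(1.4826 * np.median(np.abs(resid - np.median(resid))), 1e-12)


def robust_boost(X, y, n_stage1=50, n_stage2=200, step=0.1, max_depth=3):
    # Initialization: an L1 (least absolute deviation) regression tree,
    # which is fast to compute and resistant to outliers.
    init = DecisionTreeRegressor(criterion="absolute_error", max_depth=max_depth)
    init.fit(X, y)
    f = init.predict(X)
    trees = [init]

    # Stage 1 (sketch): boosting steps that aim to drive down a robust
    # residual scale, re-estimated after every step.
    scale = mad_scale(y - f)
    for _ in range(n_stage1):
        neg_grad = -tukey_gradient(y - f, scale)
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, neg_grad)
        f = f + step * tree.predict(X)
        trees.append(tree)
        scale = mad_scale(y - f)

    # Stage 2: hold the scale fixed at its Stage-1 value and run gradient
    # boosting on the bounded (Tukey bisquare) loss of the scaled residuals.
    for _ in range(n_stage2):
        neg_grad = -tukey_gradient(y - f, scale)
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, neg_grad)
        f = f + step * tree.predict(X)
        trees.append(tree)

    def predict(X_new):
        pred = trees[0].predict(X_new)       # unscaled initial fit
        for t in trees[1:]:
            pred = pred + step * t.predict(X_new)
        return pred

    return predict
```

With predictors in an n x p NumPy array X and responses y, predict = robust_boost(X, y) returns a function that can be applied to new data. The robust, permutation-based variable importance mentioned in the abstract can then be sketched as follows, here using the median absolute prediction error as the robust error measure; this too is an assumption rather than the paper's exact procedure.

```python
def robust_permutation_importance(predict, X, y, rng=None):
    """Increase in median absolute prediction error after permuting each column."""
    rng = np.random.default_rng(0) if rng is None else rng
    base = np.median(np.abs(y - predict(X)))
    importance = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])                # break the link between feature j and y
        importance[j] = np.median(np.abs(y - predict(Xp))) - base
    return importance
```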
Related papers
- Stagewise Boosting Distributional Regression [0.0]
We propose a stagewise boosting-type algorithm for distributional regression.
We extend it with a novel regularization method, correlation filtering, to provide additional stability.
Besides the advantage of being able to process large datasets, the nature of the approximations can lead to better results.
arXiv Detail & Related papers (2024-05-28T15:40:39Z)
- Distributed High-Dimensional Quantile Regression: Estimation Efficiency and Support Recovery [0.0]
We focus on distributed estimation and support recovery for high-dimensional linear quantile regression.
We transform the original quantile regression into the least-squares optimization.
An efficient algorithm is developed, which enjoys high computation and communication efficiency.
arXiv Detail & Related papers (2024-05-13T08:32:22Z)
- Retire: Robust Expectile Regression in High Dimensions [3.9391041278203978]
Penalized quantile and expectile regression methods offer useful tools to detect heteroscedasticity in high-dimensional data.
We propose and study (penalized) robust expectile regression (retire).
We show that the proposed procedure can be efficiently solved by a semismooth Newton coordinate descent algorithm.
arXiv Detail & Related papers (2022-12-11T18:03:12Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of the gradients (a minimal sketch of the clipped-gradient idea appears after this list).
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Robust Regression Revisited: Acceleration and Improved Estimation Rates [25.54653340884806]
We study fast algorithms for statistical regression problems under the strong contamination model.
The goal is to approximately optimize a generalized linear model (GLM) given adversarially corrupted samples.
We present nearly-linear time algorithms for robust regression problems with improved runtime or estimation guarantees.
arXiv Detail & Related papers (2021-06-22T17:21:56Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered Weighted $L_1$ (OWL) regularized regression is a new regression analysis for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order of the primal solution with the unknown order structure.
arXiv Detail & Related papers (2020-06-29T23:35:53Z)
- Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
- Variance Reduction with Sparse Gradients [82.41780420431205]
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients.
We introduce a new sparsity operator: The random-top-k operator.
Our algorithm consistently outperforms SpiderBoost on various tasks including image classification, natural language processing, and sparse matrix factorization.
arXiv Detail & Related papers (2020-01-27T08:23:58Z)
- Statistical Inference for Model Parameters in Stochastic Gradient Descent [45.29532403359099]
Stochastic gradient descent (SGD) has been widely used in statistical estimation for large-scale data due to its computational and memory efficiency.
We investigate the problem of statistical inference of true model parameters based on SGD when the population loss function is strongly convex and satisfies certain conditions.
arXiv Detail & Related papers (2016-10-27T07:04:21Z)
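As referenced in the "Heavy-tailed Streaming Statistical Estimation" entry above, a minimal sketch of the clipped-gradient idea is shown below for the simplest case, streaming mean estimation. The clipping level, step-size schedule, and example distribution are illustrative assumptions, not the paper's choices.

```python
import numpy as np


def clipped_sgd_mean(stream, clip=5.0, step0=1.0):
    """Estimate the mean of a heavy-tailed distribution from a stream of samples."""
    theta = 0.0
    for t, x in enumerate(stream, start=1):
        grad = theta - x                          # gradient of 0.5 * (theta - x)^2
        grad = float(np.clip(grad, -clip, clip))  # clipping limits heavy-tailed noise
        theta -= (step0 / t) * grad               # Robbins-Monro step size
    return theta


# Example: heavy-tailed Student-t samples (df = 2.1) shifted to have mean 2.0
rng = np.random.default_rng(0)
print(clipped_sgd_mean(2.0 + rng.standard_t(df=2.1, size=10_000)))
```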
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.