TRBoost: A Generic Gradient Boosting Machine based on Trust-region
Method
- URL: http://arxiv.org/abs/2209.13791v4
- Date: Tue, 11 Apr 2023 07:32:38 GMT
- Title: TRBoost: A Generic Gradient Boosting Machine based on Trust-region
Method
- Authors: Jiaqi Luo, Zihao Wei, Junkai Man, Shixin Xu
- Abstract summary: This study proposes a new generic Gradient Boosting Machine called Trust-region Boosting (TRBoost).
In each iteration, TRBoost uses a constrained quadratic model to approximate the objective and applies the trust-region algorithm to solve it and obtain a new learner.
The convergence analysis and numerical experiments conducted in this study confirm that TRBoost is as general as first-order GBMs and yields competitive results.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gradient Boosting Machines (GBMs) have demonstrated remarkable success in
solving diverse problems by utilizing Taylor expansions in functional space.
However, achieving a balance between performance and generality has posed a
challenge for GBMs. In particular, gradient descent-based GBMs employ the
first-order Taylor expansion to ensure applicability to all loss functions,
while Newton's method-based GBMs use positive Hessian information to achieve
superior performance at the expense of generality. To address this issue, this
study proposes a new generic Gradient Boosting Machine called Trust-region
Boosting (TRBoost). In each iteration, TRBoost uses a constrained quadratic
model to approximate the objective and applies the Trust-region algorithm to
solve it and obtain a new learner. Unlike Newton's method-based GBMs, TRBoost
does not require the Hessian to be positive definite, thereby allowing it to be
applied to arbitrary loss functions while still maintaining competitive
performance similar to second-order algorithms. The convergence analysis and
numerical experiments conducted in this study confirm that TRBoost is as
general as first-order GBMs and yields competitive results compared to
second-order GBMs. Overall, TRBoost is a promising approach that balances
performance and generality, making it a valuable addition to the toolkit of
machine learning practitioners.
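To make the abstract's mechanism concrete, here is a minimal Python sketch of a trust-region leaf update of the kind described above: each leaf value comes from a constrained quadratic subproblem over aggregated gradient and Hessian statistics, and the radius is adapted from how well the model predicted the actual loss reduction. The function names, the absolute-value trust region, and the radius-update thresholds are illustrative assumptions, not TRBoost's exact formulation.

```python
import numpy as np

def trust_region_leaf_value(g_sum, h_sum, delta):
    """Solve the scalar trust-region subproblem for one leaf:
        minimize  g_sum * w + 0.5 * h_sum * w**2   subject to  |w| <= delta.
    Unlike the pure Newton step -g_sum / h_sum, this stays well defined even
    when the aggregated Hessian h_sum is zero or negative."""
    if h_sum > 0:
        newton_step = -g_sum / h_sum
        if abs(newton_step) <= delta:
            return newton_step                    # interior (Newton) solution
    # Non-positive curvature, or Newton step outside the region: the quadratic
    # model is minimized on the boundary, opposite the gradient direction.
    # (The degenerate g_sum == 0 case is glossed over in this sketch.)
    return -np.sign(g_sum) * delta

def update_radius(delta, actual_reduction, predicted_reduction,
                  shrink=0.5, grow=2.0, eta_bad=0.25, eta_good=0.75):
    """Classical trust-region radius update driven by the agreement ratio
    rho = actual / predicted reduction; the thresholds are generic textbook
    defaults, not the paper's tuned values."""
    rho = actual_reduction / max(predicted_reduction, 1e-12)
    if rho < eta_bad:       # quadratic model was a poor fit: shrink the region
        return shrink * delta
    if rho > eta_good:      # model fit well: allow larger steps next iteration
        return grow * delta
    return delta
```

The boundary case is the point of contrast with Newton-based GBMs: when the aggregated Hessian is zero or negative, the Newton step is undefined or increases the loss, whereas the trust-region subproblem still returns a bounded, loss-decreasing step, which is why no positive-definiteness assumption is needed.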
Related papers
- How to Boost Any Loss Function [63.573324901948716]
We show that loss functions can be efficiently optimized with boosting.
We show that boosting can achieve a feat not yet known to be possible in the classical $0^{th}$ order setting.
arXiv Detail & Related papers (2024-07-02T14:08:23Z)
- Boosting Gradient Ascent for Continuous DR-submodular Maximization [18.040022136474114]
Projected Gradient Ascent (PGA) is the most commonly used optimization scheme in machine learning and operations research.
We present a boosting technique which can efficiently improve the approximation guarantee of the standard PGA to optimal with only small modifications on the objective function.
The resulting boosted variants of PGA beat the standard PGA in several aspects, such as approximation ratio and efficiency.
arXiv Detail & Related papers (2024-01-16T12:49:10Z)
- Benchmarking state-of-the-art gradient boosting algorithms for classification [0.0]
This work explores the use of gradient boosting in the context of classification.
Four popular implementations, including the original GBM algorithm and selected state-of-the-art gradient boosting frameworks, have been compared.
An attempt was made to indicate a gradient boosting variant showing the right balance between effectiveness, reliability and ease of use.
arXiv Detail & Related papers (2023-05-26T17:06:15Z)
- GBHT: Gradient Boosting Histogram Transform for Density Estimation [73.94900378709023]
We propose a density estimation algorithm called Gradient Boosting Histogram Transform (GBHT).
We make the first attempt to theoretically explain why boosting can enhance the performance of its base learners for density estimation problems.
arXiv Detail & Related papers (2021-06-10T13:40:28Z)
- Gradient Boosted Binary Histogram Ensemble for Large-scale Regression [60.16351608335641]
We propose a gradient boosting algorithm for large-scale regression problems called Gradient Boosted Binary Histogram Ensemble (GBBHE), based on binary histogram partition and ensemble learning.
In the experiments, compared with other state-of-the-art algorithms such as gradient boosted regression tree (GBRT), our GBBHE algorithm shows promising performance with less running time on large-scale datasets.
arXiv Detail & Related papers (2021-06-03T17:05:40Z)
- SnapBoost: A Heterogeneous Boosting Machine [9.532706363790053]
We study a Heterogeneous Newton Boosting Machine (HNBM) in which the base hypothesis class may vary across boosting iterations.
We describe how SnapBoost is implemented, with a focus on the training complexity.
We present experimental results, using OpenML and Kaggle datasets, that show that SnapBoost is able to achieve better generalization loss than competing boosting frameworks.
arXiv Detail & Related papers (2020-06-17T09:38:45Z)
- Soft Gradient Boosting Machine [72.54062017726154]
We propose the soft Gradient Boosting Machine (sGBM) by wiring multiple differentiable base learners together.
Experimental results showed that sGBM enjoys much higher time efficiency with better accuracy, given the same base learner, in both on-line and off-line settings.
arXiv Detail & Related papers (2020-06-07T06:43:23Z)
- Fully-Corrective Gradient Boosting with Squared Hinge: Fast Learning Rates and Early Stopping [29.485528641599018]
We propose an efficient boosting method with theoretical generalization guarantees for binary classification.
We derive a fast learning rate of the order $\mathcal{O}((m/\log m)^{-1/4})$ for the proposed boosting method.
Both derived learning rates are the best ones among the existing generalization results of boosting-type methods for classification.
arXiv Detail & Related papers (2020-04-01T00:39:24Z)
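As a side note on the squared-hinge objective named in the entry above, the following short sketch gives its per-example gradient and second derivative in the (g, h) form a Newton-type boosting step consumes. This is the textbook squared hinge, not necessarily the paper's exact fully-corrective procedure.

```python
import numpy as np

def squared_hinge_grad_hess(y, f):
    """Gradient and second derivative of L(y, f) = max(0, 1 - y * f) ** 2
    for labels y in {-1, +1} and current boosted scores f."""
    margin = 1.0 - y * f
    active = margin > 0.0                              # samples violating the margin
    grad = np.where(active, -2.0 * y * margin, 0.0)    # dL/df
    hess = np.where(active, 2.0, 0.0)                  # d^2L/df^2; zero off the margin
    return grad, hess

# Example: the third sample sits outside the margin, so its g and h are zero.
y = np.array([1.0, -1.0, 1.0])
f = np.array([0.2, 0.5, 2.0])
g, h = squared_hinge_grad_hess(y, f)   # g = [-1.6, 3.0, 0.0], h = [2.0, 2.0, 0.0]
```

Because the second derivative vanishes for correctly classified samples outside the margin, the Hessian is only positive semidefinite; this is exactly the kind of degenerate curvature that the trust-region update sketched earlier remains well defined for.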
- Variance Reduction with Sparse Gradients [82.41780420431205]
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients.
We introduce a new sparsity operator: the random-top-k operator.
Our algorithm consistently outperforms SpiderBoost on various tasks including image classification, natural language processing, and sparse matrix factorization.
arXiv Detail & Related papers (2020-01-27T08:23:58Z)
- On the Dual Formulation of Boosting Algorithms [92.74617630106559]
We show that the Lagrange dual problems of AdaBoost, LogitBoost, and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems.
By looking at the dual problems of these boosting algorithms, we show that the success of boosting can be understood in terms of maintaining a better margin distribution.
arXiv Detail & Related papers (2009-01-23T02:14:42Z)