Selective Cascade of Residual ExtraTrees
- URL: http://arxiv.org/abs/2009.14138v1
- Date: Tue, 29 Sep 2020 16:31:37 GMT
- Title: Selective Cascade of Residual ExtraTrees
- Authors: Qimin Liu and Fang Liu
- Abstract summary: We propose a novel tree-based ensemble method named Selective Cascade of Residual ExtraTrees (SCORE).
SCORE draws inspiration from representation learning, incorporates regularized regression with variable selection features, and utilizes boosting to improve prediction and reduce generalization errors.
Our computer experiments show that SCORE provides prediction performance comparable or superior to ExtraTrees, random forests, gradient boosting machines, and neural networks.
- Score: 3.6575928994425735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel tree-based ensemble method named Selective Cascade of
Residual ExtraTrees (SCORE). SCORE draws inspiration from representation
learning, incorporates regularized regression with variable selection features,
and utilizes boosting to improve prediction and reduce generalization errors.
We also develop a variable importance measure to increase the explainability of
SCORE. Our computer experiments show that SCORE provides prediction performance
comparable or superior to ExtraTrees, random forests, gradient boosting
machines, and neural networks, and that the proposed variable importance
measure is comparable to the benchmark methods studied. Finally, the predictive
performance of SCORE remains stable across hyperparameter values, suggesting
potential robustness to hyperparameter specification.
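To make the cascade idea concrete, here is a minimal sketch, assuming a squared-error setting: each stage fits an ExtraTrees regressor to the residuals of the previous stages, and a Lasso over the stage outputs stands in for the "selective", variable-selecting combination step. The function names, shrinkage factor, and Lasso combiner are illustrative assumptions, not the authors' exact algorithm.

```python
# Hedged sketch of a cascade of residual ExtraTrees; not the paper's SCORE.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import LassoCV

def fit_residual_cascade(X, y, n_stages=3, shrinkage=0.5, seed=0):
    """Fit each stage's ExtraTrees to the residuals of the previous stages."""
    stages, residual = [], np.asarray(y, dtype=float).copy()
    for s in range(n_stages):
        forest = ExtraTreesRegressor(n_estimators=100, random_state=seed + s)
        forest.fit(X, residual)
        stages.append(forest)
        residual = residual - shrinkage * forest.predict(X)
    # "Selective" combination: regularized regression over stage outputs,
    # so uninformative stages are shrunk toward zero weight.
    P = np.column_stack([f.predict(X) for f in stages])
    combiner = LassoCV(cv=5).fit(P, y)
    return stages, combiner

def predict_cascade(stages, combiner, X):
    P = np.column_stack([f.predict(X) for f in stages])
    return combiner.predict(P)
```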
Related papers
- Improving Tree Probability Estimation with Stochastic Optimization and Variance Reduction [11.417249588622926]
Subsplit Bayesian networks (SBNs) provide a powerful probabilistic graphical model for tree probability estimation.
The expectation-maximization (EM) method currently used for learning SBN parameters does not scale up to large data sets.
We introduce several computationally efficient methods for training SBNs and show that variance reduction could be the key to better performance.
arXiv Detail & Related papers (2024-09-09T02:22:52Z)
- Forecasting with Hyper-Trees [50.72190208487953]
Hyper-Trees are designed to learn the parameters of time series models.
By relating the parameters of a target time series model to features, Hyper-Trees also address the issue of parameter non-stationarity.
In this novel approach, the trees first generate informative representations from the input features, which a shallow network then maps to the target model parameters.
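A minimal, inference-only sketch of this data flow, under loud assumptions: the "representation" is taken from forest leaf assignments, the shallow map is an untrained linear head, and the target model is an AR(1) forecaster. The paper trains the whole pipeline end to end; nothing below is its actual algorithm.

```python
# Hedged sketch of the tree-representation -> shallow-map -> model-parameters
# data flow; all shapes, the AR(1) target model, and the untrained head are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                     # exogenous features
y_prev, y_true = rng.normal(size=200), rng.normal(size=200)

trees = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y_true)
leaves = trees.apply(X)                           # (n, n_trees) leaf indices
rep = (leaves - leaves.mean(0)) / (leaves.std(0) + 1e-9)  # crude representation

W = rng.normal(size=(rep.shape[1], 2)) * 0.01     # shallow "network" (untrained)
phi, c = (rep @ W).T                              # per-sample AR(1) parameters
forecast = c + phi * y_prev                       # parameters vary with features,
                                                  # addressing non-stationarity
```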
arXiv Detail & Related papers (2024-05-13T15:22:15Z)
- Prediction-Oriented Bayesian Active Learning [51.426960808684655]
Expected predictive information gain (EPIG) is an acquisition function that measures information gain in the space of predictions rather than parameters.
EPIG leads to stronger predictive performance compared with BALD across a range of datasets and models.
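As a point of reference, a hedged rendering of the EPIG objective (notation assumed): it scores a candidate input by the expected information its label carries about the prediction at a random target input, in contrast to BALD's parameter-space gain.

```latex
% Assumed notation: x is the candidate input, x_* a target input drawn from
% the target distribution p_*, and y, y_* the corresponding predictions.
\mathrm{EPIG}(x) = \mathbb{E}_{p_*(x_*)}\!\left[\, \mathrm{I}\!\left(y;\; y_* \mid x,\; x_*\right) \right],
\qquad \text{vs.} \qquad \mathrm{BALD}(x) = \mathrm{I}\!\left(y;\; \theta \mid x\right)
```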
arXiv Detail & Related papers (2023-04-17T10:59:57Z)
- Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks [59.142826407441106]
We study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability.
We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, for both of which we develop consistent excess risk bounds.
arXiv Detail & Related papers (2022-09-19T18:48:00Z)
- Orthogonal Stochastic Configuration Networks with Adaptive Construction Parameter for Data Analytics [6.940097162264939]
The randomness of SCNs makes them more likely to generate approximately linearly correlated hidden nodes that are redundant and of low quality.
In light of a fundamental principle in machine learning, namely that a model with fewer parameters generalizes better, this paper proposes an orthogonal SCN, termed OSCN, to filter out low-quality hidden nodes and reduce the network structure.
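A minimal sketch of the orthogonality-based filtering idea, assuming a tanh activation and a norm threshold (both illustrative): candidate random nodes are accepted only if their component orthogonal to the already-accepted nodes is large enough, so nearly linearly correlated nodes are discarded.

```python
# Hedged sketch of orthogonality-based hidden-node selection; the threshold
# and activation are assumptions, not the paper's exact construction rule.
import numpy as np

def select_orthogonal_nodes(X, n_candidates=200, tol=0.3, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    basis = []                                  # orthonormal accepted nodes
    for _ in range(n_candidates):
        w, b = rng.normal(size=X.shape[1]), rng.normal()
        h = np.tanh(X @ w + b)                  # candidate hidden-node output
        h = h / np.linalg.norm(h)
        for q in basis:                         # Gram-Schmidt projection
            h = h - (q @ h) * q
        if np.linalg.norm(h) > tol:             # keep only novel directions
            basis.append(h / np.linalg.norm(h))
    return np.column_stack(basis)               # (n_samples, n_kept_nodes)
```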
arXiv Detail & Related papers (2022-05-26T07:07:26Z)
- On Uncertainty Estimation by Tree-based Surrogate Models in Sequential Model-based Optimization [13.52611859628841]
We revisit various ensembles of randomized trees to investigate their behavior from the perspective of prediction uncertainty estimation.
We propose a new way of constructing an ensemble of randomized trees, referred to as BwO forest, where bagging with oversampling is employed to construct bootstrapped samples.
Experimental results demonstrate the validity and good performance of BwO forest over existing tree-based models in various circumstances.
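A minimal sketch of bagging with oversampling, assuming an oversampling ratio of 2 and single extremely randomized trees as base learners (both assumptions; the paper's exact construction may differ): each tree sees a bootstrap sample larger than the original data, and the ensemble spread serves as the uncertainty estimate.

```python
# Hedged sketch of a "bagging with oversampling" forest; ratio and base
# learner are illustrative assumptions.
import numpy as np
from sklearn.tree import ExtraTreeRegressor

def fit_bwo_forest(X, y, n_trees=100, oversample_ratio=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    m = int(oversample_ratio * len(X))          # oversampled bootstrap size
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=m)   # sample with replacement, m > n
        tree = ExtraTreeRegressor(random_state=int(rng.integers(1 << 31)))
        tree.fit(X[idx], y[idx])
        forest.append(tree)
    return forest

def predict_with_uncertainty(forest, X):
    preds = np.stack([t.predict(X) for t in forest])
    return preds.mean(axis=0), preds.std(axis=0)  # mean and spread as uncertainty
```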
arXiv Detail & Related papers (2022-02-22T04:50:37Z)
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models [132.90062129639705]
We propose a novel training strategy that encourages all parameters to be trained sufficiently.
A parameter with low sensitivity is redundant, and we improve its fitting by increasing its learning rate.
In contrast, a parameter with high sensitivity is well-trained and we regularize it by decreasing its learning rate to prevent further overfitting.
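A minimal sketch of the sensitivity-guided idea, assuming |theta * grad| as the sensitivity proxy and a simple inverse scaling with clipping (all illustrative; the paper's exact sensitivity measure and schedule differ): redundant low-sensitivity parameters take larger steps, well-trained high-sensitivity parameters take smaller ones.

```python
# Hedged sketch of per-parameter learning rates scaled by a first-order
# sensitivity proxy; the proxy and scaling rule are assumptions.
import numpy as np

def sensitivity_scaled_step(theta, grad, base_lr=1e-3, eps=1e-12):
    sens = np.abs(theta * grad)                    # first-order sensitivity proxy
    sens = sens / (sens.mean() + eps)              # normalize around 1
    lr = base_lr / (sens + eps)                    # low sensitivity -> larger step
    lr = np.clip(lr, 0.1 * base_lr, 10 * base_lr)  # keep steps bounded
    return theta - lr * grad
```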
arXiv Detail & Related papers (2022-02-06T00:22:28Z)
- Cluster Regularization via a Hierarchical Feature Regression [0.0]
This paper proposes a novel cluster-based regularization: the hierarchical feature regression (HFR).
It mobilizes insights from the domains of machine learning and graph theory to estimate parameters along a supervised hierarchical representation of the predictor set.
An application to the prediction of economic growth is used to illustrate the HFR's effectiveness in an empirical setting.
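A minimal sketch of cluster-based regularization in this spirit, assuming correlation-based hierarchical clustering and a ridge fit on cluster averages (a simplification, not the HFR's hierarchical shrinkage estimator):

```python
# Hedged sketch: group correlated predictors hierarchically, then regress on
# cluster averages; this is not the paper's estimator.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import Ridge

def fit_cluster_regularized(X, y, n_clusters=5, alpha=1.0):
    X = np.asarray(X, dtype=float)
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))   # feature distances
    # Condensed upper-triangle form expected by scipy's linkage.
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    # Replace each cluster of predictors by its average column.
    Xc = np.column_stack([X[:, labels == c].mean(axis=1)
                          for c in np.unique(labels)])
    return Ridge(alpha=alpha).fit(Xc, y), labels
```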
arXiv Detail & Related papers (2021-07-10T13:03:01Z)
- An Asymptotically Optimal Multi-Armed Bandit Algorithm and Hyperparameter Optimization [48.5614138038673]
We propose an efficient and robust bandit-based algorithm called Sub-Sampling (SS) for hyperparameter search evaluation.
We also develop a novel hyperparameter optimization algorithm called BOSS.
Empirical studies validate our theoretical arguments of SS and demonstrate the superior performance of BOSS on a number of applications.
arXiv Detail & Related papers (2020-07-11T03:15:21Z)
- Multivariate Boosted Trees and Applications to Forecasting and Control [0.0]
Gradient boosted trees are non-parametric regressors that exploit sequential model fitting and gradient descent to minimize a specific loss function.
In this paper, we present a computationally efficient algorithm for fitting multivariate boosted trees.
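A minimal sketch of the generic recursion, assuming a squared loss so the negative gradient is simply the residual, and multi-output regression trees as base learners (the paper's specific multivariate algorithm differs):

```python
# Hedged sketch of multivariate gradient boosting: each round fits one
# multi-output tree to the negative gradient of a squared loss.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_multivariate_boosting(X, Y, n_rounds=50, lr=0.1, seed=0):
    Y = np.asarray(Y, dtype=float)
    F = np.zeros_like(Y)                       # current ensemble prediction
    trees = []
    for r in range(n_rounds):
        neg_grad = Y - F                       # -dL/dF for squared loss
        tree = DecisionTreeRegressor(max_depth=3, random_state=seed + r)
        tree.fit(X, neg_grad)                  # multi-output regression tree
        F += lr * tree.predict(X)
        trees.append(tree)
    return trees

def predict_boosted(trees, X, lr=0.1):
    return lr * sum(t.predict(X) for t in trees)
```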
arXiv Detail & Related papers (2020-03-08T19:26:59Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks; learning a separate parameter for every interaction of every order, however, incurs exponential computational and memory cost.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
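A minimal sketch of implicit tensorized interaction weights, assuming rank-R factors for second-order interactions as in factorization machines (the hypothetical cp_interaction_score below is illustrative; the paper's framework covers higher orders and learned feature maps):

```python
# Hedged sketch: second-order interaction weights represented implicitly by
# low-rank factors V rather than an explicit d x d matrix.
import numpy as np

def cp_interaction_score(x, w0, w, V):
    """x: (d,) features; w0: scalar bias; w: (d,) linear weights; V: (d, R) factors."""
    # Pairwise term sum_{i<j} <v_i, v_j> x_i x_j computed in O(dR) via
    # 0.5 * (||V^T x||^2 - sum_i x_i^2 ||v_i||^2).
    s = V.T @ x
    pair = 0.5 * (s @ s - np.sum((x ** 2)[:, None] * V ** 2))
    return w0 + w @ x + pair

# Example usage with illustrative dimensions.
rng = np.random.default_rng(0)
d, R = 8, 3
print(cp_interaction_score(rng.normal(size=d), 0.1,
                           rng.normal(size=d), rng.normal(size=(d, R))))
```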
arXiv Detail & Related papers (2020-01-27T22:38:40Z)