Better Trees: An empirical study on hyperparameter tuning of
classification decision tree induction algorithms
- URL: http://arxiv.org/abs/1812.02207v3
- Date: Thu, 21 Dec 2023 21:16:41 GMT
- Title: Better Trees: An empirical study on hyperparameter tuning of
classification decision tree induction algorithms
- Authors: Rafael Gomes Mantovani, Tomáš Horváth, André L. D. Rossi,
Ricardo Cerri, Sylvio Barbon Junior, Joaquin Vanschoren, André Carlos Ponce
de Leon Ferreira de Carvalho
- Abstract summary: Decision Tree (DT) induction algorithms present high predictive performance and interpretable classification models.
This paper investigates the effects of hyperparameter tuning for the two DT induction algorithms most often used, CART and C4.5.
Experiments were carried out with different tuning strategies to induce models and to evaluate HPs' relevance using 94 classification datasets from OpenML.
- Score: 5.4611430411491115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning algorithms often contain many hyperparameters (HPs) whose
values affect the predictive performance of the induced models in intricate
ways. Due to the high number of possibilities for these HP configurations and
their complex interactions, it is common to use optimization techniques to find
settings that lead to high predictive performance. However, insights into
efficiently exploring this vast space of configurations and dealing with the
trade-off between predictive and runtime performance remain challenging.
Furthermore, there are cases where the default HP values already provide a
suitable configuration. Additionally, for many reasons, including model
validation and compliance with new legislation, there is an increasing interest
in interpretable models, such as those created by Decision Tree (DT) induction
algorithms.
This paper provides a comprehensive approach for investigating the effects of
hyperparameter tuning for the two DT induction algorithms most often used, CART
and C4.5. DT induction algorithms present high predictive performance and
interpretable classification models, though many HPs need to be adjusted.
Experiments were carried out with different tuning strategies to induce models
and to evaluate HPs' relevance using 94 classification datasets from OpenML.
The experimental results show that tuning a distinct HP profile for each
algorithm yields statistically significant improvements on most of the datasets
for CART, but on only one-third of them for C4.5. Although different algorithms
may present different tuning scenarios, the tuning techniques generally
required few evaluations to find accurate solutions. Furthermore, the best
technique for all the algorithms was IRACE. Finally, we found that tuning a
specific small subset of HPs is a good alternative for achieving optimal
predictive performance.
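
The sketch below illustrates the core idea of the study, not its exact
experimental protocol: tuning a small subset of CART hyperparameters and
comparing against the defaults. It assumes scikit-learn's DecisionTreeClassifier
as the CART implementation and uses random search with few evaluations; the
paper's best-performing tuner, IRACE, is an R package and is not shown here, and
the dataset, HP subset, and ranges are illustrative choices.

from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Any OpenML-style classification dataset would do; a bundled one keeps this self-contained.
X, y = load_breast_cancer(return_X_y=True)

# Small HP subset (illustrative ranges), echoing the finding that tuning a few HPs is often enough.
param_distributions = {
    "max_depth": randint(2, 31),
    "min_samples_split": randint(2, 41),
    "min_samples_leaf": randint(1, 21),
    "criterion": ["gini", "entropy"],
}

search = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=50,  # few evaluations; the paper reports tuners usually converge quickly
    cv=5,
    scoring="balanced_accuracy",
    random_state=0,
)
search.fit(X, y)

# Baseline with default HPs, evaluated the same way for a fair comparison.
default_score = cross_val_score(
    DecisionTreeClassifier(random_state=0), X, y,
    cv=5, scoring="balanced_accuracy",
).mean()

print(f"default HPs, CV balanced accuracy: {default_score:.3f}")
print(f"tuned HPs,   CV balanced accuracy: {search.best_score_:.3f}")
print("best HP configuration:", search.best_params_)
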
Related papers
- A Comparative Study of Hyperparameter Tuning Methods [0.0]
Tree-structured Parzen Estimator (TPE), Genetic Search, and Random Search are evaluated across regression and classification tasks.
Random Search excelled in regression tasks, while TPE was more effective for classification tasks.
arXiv Detail & Related papers (2024-08-29T10:35:07Z)
- Tune As You Scale: Hyperparameter Optimization For Compute Efficient Training [0.0]
We propose a practical method for robustly tuning large models.
CARBS performs local search around the performance-cost frontier.
Among our results, we effectively solve the entire ProcGen benchmark just by tuning a simple baseline.
arXiv Detail & Related papers (2023-06-13T18:22:24Z)
- Behavior of Hyper-Parameters for Selected Machine Learning Algorithms: An Empirical Investigation [3.441021278275805]
Hyper-Parameters (HPs) are an important part of machine learning (ML) model development and can greatly influence performance.
This paper studies their behavior for three algorithms: Extreme Gradient Boosting (XGB), Random Forest (RF), and Feedforward Neural Network (FFNN) with structured data.
Our empirical investigation examines the qualitative behavior of model performance as the HPs vary, quantifies the importance of each HP for different ML algorithms, and assesses the stability of performance near the optimal region.
arXiv Detail & Related papers (2022-11-15T22:14:52Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Hyperparameter Sensitivity in Deep Outlier Detection: Analysis and a Scalable Hyper-Ensemble Solution [21.130842136324528]
We conduct the first large-scale analysis on the HP sensitivity of deep OD methods.
We design an HP-robust and scalable deep hyper-ensemble model called ROBOD that assembles models with varying HP configurations.
arXiv Detail & Related papers (2022-06-15T16:46:00Z) - Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z) - Large-scale Optimization of Partial AUC in a Range of False Positive
Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of ranked range losses, which likewise lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Efficient and Differentiable Conformal Prediction with General Function
Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximate valid population coverage and near-optimal efficiency within class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Genealogical Population-Based Training for Hyperparameter Optimization [1.0514231683620516]
We experimentally demonstrate that our method cuts the required computational cost by a factor of 2 to 3.
Our method is search-algorithm agnostic, so the inner search routine can be any search algorithm such as TPE, GP, CMA, or random search.
arXiv Detail & Related papers (2021-09-30T08:49:41Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)