A novel gradient-based method for decision trees optimizing arbitrary differential loss functions
- URL: http://arxiv.org/abs/2503.17855v1
- Date: Sat, 22 Mar 2025 20:25:30 GMT
- Title: A novel gradient-based method for decision trees optimizing arbitrary differential loss functions
- Authors: Andrei V. Konstantinov, Lev V. Utkin
- Abstract summary: This work introduces a novel method for constructing gradient-based decision trees that optimize arbitrary differentiable loss functions. We demonstrate the method's applicability to classification, regression, and survival analysis tasks. The implementation of the method is publicly available, providing a practical tool for researchers and practitioners.
- Score: 2.4861619769660637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are many approaches for training decision trees. This work introduces a novel gradient-based method for constructing decision trees that optimize arbitrary differentiable loss functions, overcoming the limitations of heuristic splitting rules. Instead of relying on such rules, the proposed method refines predictions using the first and second derivatives of the loss function, enabling the optimization of complex tasks such as classification, regression, and survival analysis, including settings with censored data. Numerical experiments on both real and synthetic datasets compare the proposed method with traditional decision tree algorithms such as CART, Extremely Randomized Trees, and SurvTree. The implementation of the method is publicly available, providing a practical tool for researchers and practitioners. This work advances the field of decision tree-based modeling, offering a more flexible and accurate approach for handling structured data and complex tasks. By leveraging gradient-based optimization, the proposed method bridges the gap between traditional decision trees and modern machine learning techniques, paving the way for further innovations in interpretable and high-performing models.
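The abstract's central idea, refining predictions from the first and second derivatives of the loss rather than from a task-specific impurity measure, can be illustrated with a short sketch. The following is a minimal illustration in the spirit of second-order gradient boosting, not the authors' released implementation: the helper names (`grad_hess`, `split_gain`, `grow`), the L2 regularizer `lam`, and the squared-error derivatives standing in for an arbitrary twice-differentiable loss are all assumptions.

```python
import numpy as np

def grad_hess(y, pred):
    """Per-sample first and second derivatives of L = 0.5 * (pred - y)**2.
    Any twice-differentiable loss (logistic, a censored survival
    likelihood, ...) could be substituted here."""
    return pred - y, np.ones_like(y)

def leaf_value(g, h, lam=1.0):
    """Newton step: the constant minimizing the second-order
    expansion of the loss over the samples in a leaf."""
    return -g.sum() / (h.sum() + lam)

def split_gain(g, h, mask, lam=1.0):
    """Decrease in the quadratic loss approximation from splitting
    a node into the samples in `mask` and the rest."""
    def score(gs, hs):
        return gs.sum() ** 2 / (hs.sum() + lam)
    return 0.5 * (score(g[mask], h[mask]) + score(g[~mask], h[~mask]) - score(g, h))

def grow(X, g, h, depth=3):
    """Greedily grow a tree using only derivative statistics,
    never a task-specific impurity measure."""
    if depth == 0 or len(g) < 2:
        return ("leaf", leaf_value(g, h))
    best_gain, best = 0.0, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:  # thresholds keeping both sides non-empty
            mask = X[:, j] <= t
            gain = split_gain(g, h, mask)
            if gain > best_gain:
                best_gain, best = gain, (j, t, mask)
    if best is None:
        return ("leaf", leaf_value(g, h))
    j, t, mask = best
    return ("node", j, t,
            grow(X[mask], g[mask], h[mask], depth - 1),
            grow(X[~mask], g[~mask], h[~mask], depth - 1))

# Regression demo: with predictions initialized to zero, g = -y and h = 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X[:, 0] + 0.1 * rng.normal(size=100)
g, h = grad_hess(y, np.zeros_like(y))
tree = grow(X, g, h, depth=2)
```

Because splits and leaf values depend only on the derivative vectors g and h, swapping `grad_hess` for the derivatives of another loss changes the task (classification, survival analysis) without touching the tree-growing logic, which is the flexibility the abstract emphasizes.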
Related papers
- Learning Decision Trees as Amortized Structure Inference [59.65621207449269]
We propose a hybrid amortized structure inference approach to learn predictive decision tree ensembles given data.
We show that our approach, DT-GFN, outperforms state-of-the-art decision tree and deep learning methods on standard classification benchmarks.
arXiv Detail & Related papers (2025-03-10T07:05:07Z)
- Learning accurate and interpretable decision trees [27.203303726977616]
We develop approaches to design decision tree learning algorithms given repeated access to data from the same domain.
We study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression.
We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees.
arXiv Detail & Related papers (2024-05-24T20:10:10Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Neural Improvement Heuristics for Graph Combinatorial Optimization Problems [49.85111302670361]
We introduce a novel Neural Improvement (NI) model capable of handling graph-based problems where information is encoded in the nodes, edges, or both.
The presented model serves as a fundamental component for hill-climbing-based algorithms that guide the selection of neighborhood operations at each iteration.
arXiv Detail & Related papers (2022-06-01T10:35:29Z)
- Robust Optimal Classification Trees Against Adversarial Examples [5.254093731341154]
We propose a collection of methods to train decision trees that are optimally robust against user-specified attack models.
We show that the min-max optimization problem that arises in adversarial learning can be solved using a single minimization formulation.
We also present a method that determines the upper bound on adversarial accuracy for any model using bipartite matching.
arXiv Detail & Related papers (2021-09-08T18:10:49Z)
- Genetic Adversarial Training of Decision Trees [6.85316573653194]
We put forward a novel learning methodology for ensembles of decision trees, based on a genetic algorithm that trains each decision tree to maximize both its accuracy and its robustness to adversarial perturbations.
We implemented this genetic adversarial training algorithm in a tool called Meta-Silvae (MS) and experimentally evaluated it on reference datasets used in adversarial training.
arXiv Detail & Related papers (2020-12-21T14:05:57Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers (see the sketch after this list).
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
- Evolutionary algorithms for constructing an ensemble of decision trees [0.0]
We propose several methods for the induction of decision trees and their ensembles based on evolutionary algorithms.
The main difference of our approach is the use of a real-valued vector representation of the decision tree.
We test the predictive performance of these methods on several public UCI data sets.
arXiv Detail & Related papers (2020-02-03T13:38:50Z)
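To make the perturbed-optimizer entry above concrete, here is a minimal sketch of the general idea it describes: averaging an argmax over random perturbations turns a piecewise-constant map into a smooth one. The Gaussian noise, the sample count, and the name `perturbed_argmax` are illustrative assumptions, not the paper's API.

```python
import numpy as np

def perturbed_argmax(theta, eps=0.5, n_samples=10_000, seed=0):
    """Monte Carlo estimate of E_Z[one_hot(argmax(theta + eps * Z))].
    The hard argmax is piecewise constant in theta; averaging over
    Gaussian perturbations yields a mapping that is differentiable
    and never locally constant."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, theta.size))
    winners = np.argmax(theta + eps * z, axis=1)
    return np.bincount(winners, minlength=theta.size) / n_samples

# Close scores produce a soft distribution rather than a hard 0/1 choice.
print(perturbed_argmax(np.array([1.0, 1.2, 0.8])))
```

Shrinking eps recovers the hard argmax; a larger eps trades bias for smoother, more informative gradients.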
This list is automatically generated from the titles and abstracts of the papers on this site.