How Smart Guessing Strategies Can Yield Massive Scalability Improvements
for Sparse Decision Tree Optimization
- URL: http://arxiv.org/abs/2112.00798v1
- Date: Wed, 1 Dec 2021 19:39:28 GMT
- Title: How Smart Guessing Strategies Can Yield Massive Scalability Improvements
for Sparse Decision Tree Optimization
- Authors: Hayden McTavish, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques
Chen, Cynthia Rudin, Margo Seltzer
- Abstract summary: Current algorithms often require impractical amounts of time and memory to find optimal or near-optimal trees for some real-world datasets.
We address this problem via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm.
Our approach enables guesses about how to bin continuous features, the size of the tree, and lower bounds on the error for the optimal decision tree.
- Score: 18.294573939199438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse decision tree optimization has been one of the most fundamental
problems in AI since its inception and is a challenge at the core of
interpretable machine learning. Sparse decision tree optimization is
computationally hard, and despite steady effort since the 1960s, breakthroughs
have come only within the past few years, primarily on the problem of finding
optimal sparse decision trees. However, current
state-of-the-art algorithms often require impractical amounts of computation
time and memory to find optimal or near-optimal trees for some real-world
datasets, particularly those having several continuous-valued features. Given
that the search spaces of these decision tree optimization problems are
massive, can we practically hope to find a sparse decision tree that competes
in accuracy with a black box machine learning model? We address this problem
via smart guessing strategies that can be applied to any optimal
branch-and-bound-based decision tree algorithm. We show that by using these
guesses, we can reduce the run time by multiple orders of magnitude, while
providing bounds on how far the resulting trees can deviate from the black
box's accuracy and expressive power. Our approach enables guesses about how to
bin continuous features, the size of the tree, and lower bounds on the error
for the optimal decision tree. Our experiments show that in many cases we can
rapidly construct sparse decision trees that match the accuracy of black box
models. To summarize: when you are having trouble optimizing, just guess.
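The three guesses named in the abstract (feature binning, tree size, and error lower bounds) lend themselves to a short illustration. Below is a minimal sketch of the threshold-guessing idea, assuming scikit-learn and a gradient-boosted ensemble as the black-box reference model; the function names, parameter values, and choice of ensemble are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch of threshold guessing: train a black-box reference
# ensemble, harvest the split thresholds it actually uses, and binarize each
# continuous feature against only those thresholds before running an optimal
# branch-and-bound decision tree solver. Illustrative only, not the authors'
# exact implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def guess_thresholds(X, y, n_estimators=40, max_depth=1):
    """Collect the (feature, threshold) splits the reference ensemble uses."""
    ref = GradientBoostingClassifier(n_estimators=n_estimators,
                                     max_depth=max_depth).fit(X, y)
    guessed = {}  # feature index -> set of split thresholds
    for est in ref.estimators_.ravel():
        tree = est.tree_
        for node in range(tree.node_count):
            f = tree.feature[node]
            if f >= 0:  # internal node (leaves are marked with -2)
                guessed.setdefault(f, set()).add(tree.threshold[node])
    return {f: sorted(ts) for f, ts in guessed.items()}

def binarize(X, guessed):
    """Encode X as binary columns 1[x_f <= t], one per guessed threshold.

    The optimal tree search then runs over this much smaller binary feature
    space instead of considering every observed value as a candidate split.
    """
    cols = [(X[:, f] <= t).astype(np.uint8)
            for f in sorted(guessed) for t in guessed[f]]
    return np.column_stack(cols)
```

In the same spirit, the tree-size and lower-bound guesses would be read off the reference model (for example, capping depth at the ensemble's, or lower-bounding a subproblem's error by the reference model's error on it), shrinking the branch-and-bound search space further.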
Related papers
- Can a Single Tree Outperform an Entire Forest? [5.448070998907116]
The prevailing mindset is that a single decision tree underperforms classic random forests in testing accuracy.
This study challenges that mindset by significantly improving the testing accuracy of an oblique regression tree.
Our approach reformulates tree training as a differentiable unconstrained optimization task.
arXiv Detail & Related papers (2024-11-26T00:18:18Z)
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
- Quant-BnB: A Scalable Branch-and-Bound Method for Optimal Decision Trees with Continuous Features [5.663538370244174]
We present a new discrete optimization method based on branch-and-bound (BnB) to obtain optimal decision trees.
Our proposed algorithm Quant-BnB shows significant speedups compared to existing approaches for shallow optimal trees on various real datasets.
arXiv Detail & Related papers (2022-06-23T17:19:29Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, known as soft routing, rather than hard binary decisions; a minimal sketch of soft routing appears after this list.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to that of [1],[3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
- ENTMOOT: A Framework for Optimization over Ensemble Tree Models [57.98561336670884]
ENTMOOT is a framework for integrating tree models into larger optimization problems.
We show how ENTMOOT allows a simple integration of tree models into decision-making and black-box optimization.
arXiv Detail & Related papers (2020-03-10T14:34:07Z)
- Optimal Sparse Decision Trees [25.043477914272046]
This work introduces the first practical algorithm for optimal decision trees for binary variables.
The algorithm is a co-design of analytical bounds that reduce the search space and modern systems techniques.
Our experiments highlight advantages in scalability, speed, and proof of optimality.
arXiv Detail & Related papers (2019-04-29T17:56:34Z)
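Returning to the soft-routing idea mentioned in the deep-forest entry above: each internal node routes a sample left with probability sigmoid(w·x + b) instead of making a hard 0/1 decision, and the prediction is the probability-weighted average over all leaves. The sketch below assumes a single full binary tree with sigmoid gates; the shapes, weights, and depth are hypothetical placeholders, not that paper's architecture.

```python
# A minimal sketch of soft routing in one probabilistic binary tree. Every
# quantity is an illustrative assumption, not the deep-forest paper's model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(x, W, b, leaf_values):
    """Soft-routed prediction for a full binary tree of depth d.

    W, b        -- gate parameters of the 2^d - 1 internal nodes,
                   numbered breadth-first
    leaf_values -- values at the 2^d leaves
    """
    n_internal = len(b)
    # path_prob[i] = probability of reaching node i (breadth-first order)
    path_prob = np.ones(2 * n_internal + 1)
    for i in range(n_internal):
        p_left = sigmoid(W[i] @ x + b[i])
        path_prob[2 * i + 1] = path_prob[i] * p_left          # left child
        path_prob[2 * i + 2] = path_prob[i] * (1.0 - p_left)  # right child
    leaf_probs = path_prob[n_internal:]  # last 2^d entries are the leaves
    return float(leaf_probs @ leaf_values)

# Example: depth-2 tree (3 internal nodes, 4 leaves) on a 5-dimensional input.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
W = rng.normal(size=(3, 5))
b = rng.normal(size=3)
leaves = np.array([0.0, 1.0, 1.0, 0.0])
print(soft_tree_predict(x, W, b, leaves))
```

Because the prediction is differentiable in W and b, such a tree can be trained end to end by gradient descent, which is what makes the soft relaxation attractive compared with hard binary splits.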