How Smart Guessing Strategies Can Yield Massive Scalability Improvements
for Sparse Decision Tree Optimization
- URL: http://arxiv.org/abs/2112.00798v1
- Date: Wed, 1 Dec 2021 19:39:28 GMT
- Title: How Smart Guessing Strategies Can Yield Massive Scalability Improvements
for Sparse Decision Tree Optimization
- Authors: Hayden McTavish, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques
Chen, Cynthia Rudin, Margo Seltzer
- Abstract summary: Current algorithms often require impractical amounts of time and memory to find optimal or near-optimal trees for some real-world datasets.
We address this problem via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm.
Our approach enables guesses about how to bin continuous features, the size of the tree, and lower bounds on the error for the optimal decision tree.
- Score: 18.294573939199438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse decision tree optimization has been one of the most fundamental
problems in AI since its inception and is a challenge at the core of
interpretable machine learning. Sparse decision tree optimization is
computationally hard, and despite steady effort since the 1960s, breakthroughs
have only been made on the problem within the past few years, primarily on the
problem of finding optimal sparse decision trees. However, current
state-of-the-art algorithms often require impractical amounts of computation
time and memory to find optimal or near-optimal trees for some real-world
datasets, particularly those having several continuous-valued features. Given
that the search spaces of these decision tree optimization problems are
massive, can we practically hope to find a sparse decision tree that competes
in accuracy with a black box machine learning model? We address this problem
via smart guessing strategies that can be applied to any optimal
branch-and-bound-based decision tree algorithm. We show that by using these
guesses, we can reduce the run time by multiple orders of magnitude, while
providing bounds on how far the resulting trees can deviate from the black
box's accuracy and expressive power. Our approach enables guesses about how to
bin continuous features, the size of the tree, and lower bounds on the error
for the optimal decision tree. Our experiments show that in many cases we can
rapidly construct sparse decision trees that match the accuracy of black box
models. To summarize: when you are having trouble optimizing, just guess.
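
As a rough illustration of the threshold-guessing idea described in the abstract (a minimal sketch, not the authors' exact procedure), one can harvest the split thresholds that a black-box gradient-boosted ensemble actually uses and binarize the continuous features against only those cuts, shrinking the search space handed to an optimal branch-and-bound solver. The helper below is a hypothetical sketch using scikit-learn; the ensemble size, depth, and binarization scheme are illustrative assumptions.

```python
# Hypothetical sketch of "threshold guessing": keep only the split
# thresholds a black-box ensemble actually uses, so the optimal-tree
# search considers far fewer candidate splits per continuous feature.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def guess_thresholds(X, y, n_estimators=40, max_depth=1):
    """Fit a boosted ensemble and collect the (feature, threshold)
    pairs from its internal splits; these are the 'guessed' cuts."""
    gbm = GradientBoostingClassifier(
        n_estimators=n_estimators, max_depth=max_depth).fit(X, y)
    cuts = set()
    for tree in gbm.estimators_.ravel():
        t = tree.tree_
        for feat, thresh in zip(t.feature, t.threshold):
            if feat >= 0:  # negative feature index marks a leaf node
                cuts.add((int(feat), float(thresh)))
    return sorted(cuts)

def binarize(X, cuts):
    """Turn each guessed cut into one binary feature: X[:, f] <= t."""
    return np.column_stack([(X[:, f] <= t).astype(int) for f, t in cuts])

# Example: the binarized matrix can then be passed to any optimal
# branch-and-bound decision tree solver in place of the raw features.
# X_bin = binarize(X, guess_thresholds(X, y))
```

Intuitively, if the optimal tree is restricted to splits the reference black-box model already found useful, its accuracy can be related to that model's, which matches the abstract's claim of bounded deviation from the black box's accuracy and expressive power.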