ENTMOOT: A Framework for Optimization over Ensemble Tree Models
- URL: http://arxiv.org/abs/2003.04774v3
- Date: Tue, 18 May 2021 15:10:36 GMT
- Title: ENTMOOT: A Framework for Optimization over Ensemble Tree Models
- Authors: Alexander Thebelt, Jan Kronqvist, Miten Mistry, Robert M. Lee, Nathan
Sudermann-Merx, Ruth Misener
- Abstract summary: ENTMOOT is a framework for integrating tree models into larger optimization problems.
We show how ENTMOOT allows a simple integration of tree models into decision-making and black-box optimization.
- Score: 57.98561336670884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gradient boosted trees and other regression tree models perform well in a
wide range of real-world, industrial applications. These tree models (i) offer
insight into important prediction features, (ii) effectively manage sparse
data, and (iii) have excellent prediction capabilities. Despite their
advantages, they are generally unpopular for decision-making tasks and
black-box optimization due to their difficult-to-optimize structure and the
lack of a reliable uncertainty measure. ENTMOOT is our new framework
for integrating (already trained) tree models into larger optimization
problems. The contributions of ENTMOOT include: (i) explicitly introducing a
reliable uncertainty measure that is compatible with tree models, (ii) solving
the larger optimization problems that incorporate these uncertainty-aware tree
models, (iii) proving that the solutions are globally optimal, i.e. no better
solution exists. In particular, we show how the ENTMOOT approach allows a
simple integration of tree models into decision-making and black-box
optimization, where it proves to be a strong competitor to commonly-used
frameworks.
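The core difficulty the abstract refers to is that an ensemble of regression trees is a piecewise-constant function, so its optimum cannot be found by gradient methods. The sketch below (not ENTMOOT's actual implementation, which encodes trained ensembles as a mixed-integer program and adds a distance-based uncertainty term) illustrates the idea in the simplest possible setting: with a single feature and a hypothetical hand-built ensemble of stumps, the exact global minimum can be found by testing one point in each constant cell induced by the split thresholds.

```python
# Illustrative sketch only: a hypothetical ensemble of depth-1 trees (stumps)
# on one feature, each stored as (threshold, left_value, right_value).
# Predictions are summed, as in gradient boosting with a unit learning rate.
trees = [
    (2.0, 1.0, 3.0),
    (5.0, 4.0, -2.0),
    (7.0, 0.5, 2.5),
]

def predict(x):
    """Ensemble prediction: sum of each stump's leaf value at x."""
    return sum(left if x < t else right for t, left, right in trees)

def global_minimum(lo=0.0, hi=10.0):
    """Exact global minimization over [lo, hi].

    The ensemble is piecewise constant: its value can only change at a
    split threshold, so checking one representative point per interval
    between consecutive thresholds covers every possible output.
    """
    cuts = sorted({lo, hi} | {t for t, _, _ in trees})
    candidates = [(a + b) / 2 for a, b in zip(cuts, cuts[1:])]
    return min(candidates, key=predict)

x_star = global_minimum()
print(x_star, predict(x_star))  # midpoint of the minimizing cell, value 1.5
```

With many features this cell enumeration grows combinatorially, which is why ENTMOOT instead formulates the search as a mixed-integer optimization problem and lets a global solver certify that no better solution exists.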
Related papers
- Global Optimization: A Machine Learning Approach [7.052596485478637]
Bertsimas and Ozturk (2023) proposed OCTHaGOn as a way of solving black-box global optimization problems.
We provide extensions to this approach by approximating the original problem using other MIO-representable ML models.
We show improvements in solution feasibility and optimality in the majority of instances.
arXiv Detail & Related papers (2023-11-03T06:33:38Z) - Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO)
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z) - Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piecewise-constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - bsnsing: A decision tree induction method based on recursive optimal
boolean rule composition [2.28438857884398]
This paper proposes a new mixed-integer programming (MIP) formulation to optimize split rule selection in the decision tree induction process.
It develops an efficient search solver that is able to solve practical instances faster than commercial solvers.
arXiv Detail & Related papers (2022-05-30T17:13:57Z) - Robust Optimal Classification Trees Against Adversarial Examples [5.254093731341154]
We propose a collection of methods to train decision trees that are optimally robust against user-specified attack models.
We show that the min-max optimization problem that arises in adversarial learning can be solved using a single minimization formulation.
We also present a method that determines the upper bound on adversarial accuracy for any model using bipartite matching.
arXiv Detail & Related papers (2021-09-08T18:10:49Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z) - Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.