Decision Trees for Decision-Making under the Predict-then-Optimize
Framework
- URL: http://arxiv.org/abs/2003.00360v2
- Date: Thu, 18 Jun 2020 01:06:31 GMT
- Title: Decision Trees for Decision-Making under the Predict-then-Optimize
Framework
- Authors: Adam N. Elmachtoub, Jason Cheuk Nam Liang, Ryan McNellis
- Abstract summary: We consider the use of decision trees for decision-making problems under the predict-then-optimize framework.
A natural loss function in this framework is to measure the suboptimality of the decisions induced by the predicted input parameters.
We propose a tractable methodology called SPO Trees (SPOTs) for training decision trees under this loss.
- Score: 7.842152902652215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the use of decision trees for decision-making problems under the
predict-then-optimize framework. That is, we would like to first use a decision
tree to predict unknown input parameters of an optimization problem, and then
make decisions by solving the optimization problem using the predicted
parameters. A natural loss function in this framework is to measure the
suboptimality of the decisions induced by the predicted input parameters, as
opposed to measuring loss using input parameter prediction error. This natural
loss function is known in the literature as the Smart Predict-then-Optimize
(SPO) loss, and we propose a tractable methodology called SPO Trees (SPOTs) for
training decision trees under this loss. SPOTs benefit from the
interpretability of decision trees, providing an interpretable segmentation of
contextual features into groups with distinct optimal solutions to the
optimization problem of interest. We conduct several numerical experiments on
synthetic and real data including the prediction of travel times for shortest
path problems and predicting click probabilities for news article
recommendation. We demonstrate on these datasets that SPOTs simultaneously
provide higher quality decisions and significantly lower model complexity than
other machine learning approaches (e.g., CART) trained to minimize prediction
error.
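The SPO loss described above can be illustrated with a minimal sketch: predict a cost vector, solve the downstream linear problem with it, and charge the decision at the true costs. The feasible set, cost vectors, and path encoding below are toy assumptions for illustration, not the paper's experimental setup.

```python
def solve(c, feasible):
    # Return the feasible decision minimizing c . w (enumeration over a toy set S).
    return min(feasible, key=lambda w: sum(ci * wi for ci, wi in zip(c, w)))

def spo_loss(c_hat, c_true, feasible):
    # Excess true cost of the decision induced by the predicted cost vector,
    # relative to the decision an oracle with the true costs would make.
    cost = lambda c, w: sum(ci * wi for ci, wi in zip(c, w))
    w_hat = solve(c_hat, feasible)    # decision taken under the prediction
    w_star = solve(c_true, feasible)  # oracle decision under the true costs
    return cost(c_true, w_hat) - cost(c_true, w_star)

# Toy shortest-path-style instance: two paths encoded as edge-indicator vectors.
feasible = [(1, 1, 0), (0, 0, 1)]
c_true = (1.0, 1.0, 3.0)                            # true edge travel times
print(spo_loss((1.5, 1.4, 3.0), c_true, feasible))  # 0.0: prediction error, same decision
print(spo_loss((2.0, 2.0, 1.0), c_true, feasible))  # 1.0: prediction flips the decision
```

The two calls show why SPO differs from prediction error: the first prediction is inaccurate but induces the optimal path (zero SPO loss), while the second is off by less in some coordinates yet changes the decision and incurs its full suboptimality.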
Related papers
- Learning Solutions of Stochastic Optimization Problems with Bayesian Neural Networks [4.202961704179733]
In many real-world settings, some of these parameters are unknown or uncertain.
Recent research focuses on predicting the value of unknown parameters using available contextual features.
We propose a novel framework that models uncertainty with Bayesian Neural Networks (BNNs) and propagates this uncertainty into the mathematical solver.
arXiv Detail & Related papers (2024-06-05T09:11:46Z) - Learning accurate and interpretable decision trees [27.203303726977616]
We develop approaches to design decision tree learning algorithms given repeated access to data from the same domain.
We study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression.
We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees.
arXiv Detail & Related papers (2024-05-24T20:10:10Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
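An Ordered Weighted Averaging objective, as used in the paper above, applies weights to the *sorted* outcomes rather than to fixed positions, which makes it symmetric in its inputs and piecewise linear (hence nondifferentiable at ties). The weights and utilities below are made-up illustrations:

```python
def owa(values, weights):
    # Ordered Weighted Averaging: weights attach to ranks of the sorted values,
    # not to fixed coordinates, so the aggregation is symmetric in its inputs.
    assert len(values) == len(weights)
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Increasing weights toward the smaller (worse) outcomes emphasize the
# worst-off, giving a fairness-oriented objective over group utilities.
utilities = [4.0, 1.0, 3.0]
print(owa(utilities, [0.1, 0.3, 0.6]))  # 0.1*4 + 0.3*3 + 0.6*1 = 1.9
```

With uniform weights OWA reduces to the mean; putting all weight on the last rank recovers the max-min (Rawlsian) criterion.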
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Decision-focused predictions via pessimistic bilevel optimization: a computational study [0.7499722271664147]
Uncertainty in optimization parameters is an important and longstanding challenge.
We build predictive models that are trained with a *regret* measure on the decisions taken with them.
We show various computational techniques to achieve tractability.
arXiv Detail & Related papers (2023-12-29T15:05:00Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Regret Bounds and Experimental Design for Estimate-then-Optimize [9.340611077939828]
In practical applications, data is used to make decisions in two steps: estimation and optimization.
Errors in the estimation step can lead estimate-then-optimize to sub-optimal decisions.
We provide a novel bound on this regret for smooth and unconstrained optimization problems.
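The regret in question compares the true objective value of the decision optimized against the estimate with that of the decision optimized against the true parameter. A minimal sketch, using an assumed toy smooth unconstrained problem f(x; theta) = (x - theta)^2 whose optimizer is x*(theta) = theta:

```python
def regret(theta_hat, theta, decide, objective):
    # Regret of estimate-then-optimize: extra true cost incurred by
    # optimizing against the estimate instead of the true parameter.
    x_hat = decide(theta_hat)     # decision from the estimated parameter
    x_star = decide(theta)        # decision from the true parameter
    return objective(x_hat, theta) - objective(x_star, theta)

# For f(x; theta) = (x - theta)^2 the regret is exactly (theta_hat - theta)^2,
# i.e., quadratic in the estimation error.
f = lambda x, theta: (x - theta) ** 2
decide = lambda theta: theta
print(regret(1.5, 1.0, decide, f))  # 0.25
```

For constrained or non-smooth problems this clean link between estimation error and regret breaks down, which is why the paper's bounds target the smooth unconstrained case.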
arXiv Detail & Related papers (2022-10-27T16:13:48Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Robust Optimal Classification Trees Against Adversarial Examples [5.254093731341154]
We propose a collection of methods to train decision trees that are optimally robust against user-specified attack models.
We show that the min-max optimization problem that arises in adversarial learning can be solved using a single minimization formulation.
We also present a method that determines the upper bound on adversarial accuracy for any model using bipartite matching.
arXiv Detail & Related papers (2021-09-08T18:10:49Z) - Learning MDPs from Features: Predict-Then-Optimize for Sequential
Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z) - Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
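The decision-aware splitting idea above can be sketched in miniature: score each candidate split by the downstream cost of the single best action chosen in each resulting leaf, rather than by prediction error. The one-feature data, two-action cost vectors, and thresholds below are hypothetical, and this is a simplification of the paper's criteria:

```python
def decision_cost(costs):
    # Pick the one action with the lowest total observed cost in this group,
    # then charge the group at that action's observed costs.
    n_actions = len(costs[0])
    best = min(range(n_actions), key=lambda a: sum(c[a] for c in costs))
    return sum(c[best] for c in costs)

def best_split(samples, thresholds):
    # Choose the threshold whose induced leaves have the lowest total
    # downstream decision cost (not the lowest prediction error).
    def score(t):
        left = [c for x, c in samples if x <= t]
        right = [c for x, c in samples if x > t]
        if not left or not right:
            return float("inf")  # disallow degenerate empty-leaf splits
        return decision_cost(left) + decision_cost(right)
    return min(thresholds, key=score)

# Toy data: feature x, realized costs of actions (A, B); the best action
# switches from A to B as x grows.
samples = [(0.1, (1.0, 5.0)), (0.2, (1.0, 4.0)),
           (0.8, (5.0, 1.0)), (0.9, (4.0, 1.0))]
print(best_split(samples, [0.15, 0.5, 0.85]))  # 0.5 separates the two regimes
```

The threshold 0.5 wins because each resulting leaf can commit to a single near-optimal action; the other thresholds mix the two regimes in one leaf and force a compromise action.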
arXiv Detail & Related papers (2020-08-17T16:56:06Z) - Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.