Optimal Decision Lists using SAT
- URL: http://arxiv.org/abs/2010.09919v1
- Date: Mon, 19 Oct 2020 23:33:42 GMT
- Title: Optimal Decision Lists using SAT
- Authors: Jinqiang Yu, Alexey Ignatiev, Pierre Le Bodic, Peter J. Stuckey
- Abstract summary: We show for the first time how to construct optimal "perfect" decision lists which are perfectly accurate on the training data.
We also give a new method for determining optimal sparse decision lists, which trade off size and accuracy.
- Score: 39.32513817522866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision lists are one of the most easily explainable machine learning
models. Given the renewed emphasis on explainable machine learning decisions,
this machine learning model is increasingly attractive, combining small size
and clear explainability. In this paper, we show for the first time how to
construct optimal "perfect" decision lists which are perfectly accurate on the
training data, and minimal in size, making use of modern SAT solving
technology. We also give a new method for determining optimal sparse decision
lists, which trade off size and accuracy. We contrast the size and test
accuracy of optimal decision lists versus optimal decision sets, as well as
other state-of-the-art methods for determining optimal decision lists. We also
examine the size of average explanations generated by decision sets and
decision lists.
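To make the model concrete: a decision list is an ordered sequence of if-then rules, where the first rule whose condition fires determines the prediction, and a default class handles samples no rule matches. The rules and feature names below are invented for illustration and are not taken from the paper:

```python
def predict(decision_list, default, sample):
    """Return the class of the first matching rule, else the default."""
    for condition, label in decision_list:
        if condition(sample):
            return label
    return default

# Hypothetical rules over a dict of binary features.
rules = [
    (lambda s: s["fever"] and s["cough"], "flu"),
    (lambda s: s["fever"], "cold"),
]

print(predict(rules, "healthy", {"fever": True, "cough": False}))  # cold
```

A "perfect" decision list in the paper's sense classifies every training sample correctly with the fewest possible rules; the ordering matters, since earlier rules shadow later ones.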
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Decision-aid or Controller? Steering Human Decision Makers with
Algorithms [5.449173263947196]
We study a decision-aid algorithm that learns about the human decision maker and provides "personalized recommendations" to influence final decisions.
We discuss the potential applications of such algorithms and their social implications.
arXiv Detail & Related papers (2023-03-23T23:24:26Z) - Explainable Data-Driven Optimization: From Context to Decision and Back
Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z) - Learning To Cut By Looking Ahead: Cutting Plane Selection via Imitation
Learning [80.45697245527019]
We show that a greedy selection rule explicitly looking ahead to select cuts that yield the best bound improvement delivers strong decisions for cut selection.
We propose a new neural architecture (NeuralCut) for imitation learning on the lookahead expert.
arXiv Detail & Related papers (2022-06-27T16:07:27Z) - Speed, Quality, and the Optimal Timing of Complex Decisions: Field
Evidence [0.0]
Move-by-move data provide exceptionally detailed and precise information about decision times and decision quality.
Results reveal that faster decisions are associated with better performance.
arXiv Detail & Related papers (2022-01-26T08:29:05Z) - (Machine) Learning to Improve the Empirical Performance of Discrete
Algorithms [0.0]
We train machine learning methods to select the optimal algorithm for given data without human expert opinion.
Our framework recommends various pivot rules that improve overall total performance over just using a fixed default pivot rule.
For the all-pairs shortest path problem, the trained models made a large improvement, and our selection is on average 0.07 percent away from the optimal choice.
arXiv Detail & Related papers (2021-09-29T08:33:09Z) - A Scalable Two Stage Approach to Computing Optimal Decision Sets [29.946141040012545]
Rule-based models, such as decision trees, decision lists, and decision sets, are conventionally deemed to be the most interpretable.
Recent work uses propositional satisfiability (SAT) solving to generate minimum-size decision sets.
This paper proposes a novel approach to learn minimum-size decision sets by enumerating individual rules of the target decision set independently of each other, and then solving a set cover problem to select a subset of rules.
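The second stage described above is a classical set-cover problem: each candidate rule covers some subset of training examples, and the goal is to select few rules that together cover all examples. A greedy heuristic is sketched below as a stand-in for the exact stage; the rule names and coverage sets are illustrative assumptions, not data from the paper:

```python
def greedy_set_cover(universe, candidate_covers):
    """Greedily pick rules until every example in `universe` is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the rule covering the most still-uncovered examples.
        best = max(candidate_covers,
                   key=lambda r: len(uncovered & candidate_covers[r]))
        if not uncovered & candidate_covers[best]:
            break  # remaining examples cannot be covered by any rule
        chosen.append(best)
        uncovered -= candidate_covers[best]
    return chosen

covers = {"r1": {1, 2, 3}, "r2": {3, 4}, "r3": {4, 5, 6}}
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, covers))  # ['r1', 'r3']
```

The greedy heuristic gives a logarithmic approximation; for the *minimum-size* guarantee the cited paper targets, the cover step would instead be solved exactly.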
arXiv Detail & Related papers (2021-02-03T06:51:49Z) - Computing Optimal Decision Sets with SAT [39.32513817522866]
Most explainable machine learning models use decision rules.
In order to be easy for humans to understand, these rules must be concise.
This paper focuses on decision sets, a type of model with unordered rules, which explains each prediction with a single rule.
arXiv Detail & Related papers (2020-07-29T22:35:22Z) - Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
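The perturbation idea summarized above can be sketched as follows: a discrete argmax is piecewise constant, so its gradient is zero almost everywhere, but averaging the one-hot argmax of theta + eps * Z over random noise Z yields a smooth surrogate. The function name, Gaussian noise choice, and parameters below are illustrative assumptions:

```python
import random

def perturbed_argmax(theta, eps=0.5, n_samples=2000, seed=0):
    """Monte Carlo estimate of E[one_hot(argmax(theta + eps * Z))]."""
    rng = random.Random(seed)
    counts = [0] * len(theta)
    for _ in range(n_samples):
        noisy = [t + eps * rng.gauss(0.0, 1.0) for t in theta]
        counts[noisy.index(max(noisy))] += 1
    return [c / n_samples for c in counts]

probs = perturbed_argmax([1.0, 1.2, 0.3])
# probs is a probability-like vector that varies smoothly with theta
```

Smaller eps makes the surrogate closer to the hard argmax but less smooth; the cited paper develops the exact trade-off and the associated losses.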
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.