A Scalable Two Stage Approach to Computing Optimal Decision Sets
- URL: http://arxiv.org/abs/2102.01904v1
- Date: Wed, 3 Feb 2021 06:51:49 GMT
- Title: A Scalable Two Stage Approach to Computing Optimal Decision Sets
- Authors: Alexey Ignatiev, Edward Lam, Peter J. Stuckey, and Joao Marques-Silva
- Abstract summary: Rule-based models, such as decision trees, decision lists, and decision sets, are conventionally deemed to be the most interpretable.
Recent work uses propositional satisfiability (SAT) solving to generate minimum-size decision sets.
This paper proposes a novel approach to learn minimum-size decision sets by enumerating individual rules of the target decision set independently of each other, and then solving a set cover problem to select a subset of rules.
- Score: 29.946141040012545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) is ubiquitous in modern life. Since it is being
deployed in technologies that affect our privacy and safety, it is often
crucial to understand the reasoning behind its decisions, warranting the need
for explainable AI. Rule-based models, such as decision trees, decision lists,
and decision sets, are conventionally deemed to be the most interpretable.
Recent work uses propositional satisfiability (SAT) solving (and its
optimization variants) to generate minimum-size decision sets. Motivated by
limited practical scalability of these earlier methods, this paper proposes a
novel approach to learn minimum-size decision sets by enumerating individual
rules of the target decision set independently of each other, and then solving
a set cover problem to select a subset of rules. The approach makes use of
modern maximum satisfiability and integer linear programming technologies.
Experiments on a wide range of publicly available datasets demonstrate the
advantage of the new approach over the state of the art in SAT-based decision
set learning.
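The abstract outlines a two-stage pipeline: first enumerate candidate rules independently of one another, then solve a set cover problem to pick a subset of them, using MaxSAT and ILP technology. As a rough illustration of the selection stage only, the sketch below models rule selection as weighted set cover and solves it with a simple greedy heuristic instead of the exact MaxSAT/ILP encodings used in the paper; the `Rule` structure and the toy data are hypothetical.

```python
# Illustrative sketch (not the authors' code): stage two of the approach,
# selecting a small subset of independently enumerated rules that together
# cover every training example. The paper solves this exactly with
# MaxSAT/ILP; here a greedy weighted set-cover heuristic stands in.

from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    name: str            # hypothetical label, e.g. "f1 AND NOT f3 -> class 1"
    covered: frozenset   # indices of training examples the rule classifies correctly
    size: int            # number of literals, used as the rule's cost


def select_rules(rules, n_examples):
    """Greedily pick rules maximising newly covered examples per literal
    until every training example is covered by some selected rule."""
    uncovered = set(range(n_examples))
    chosen = []
    while uncovered:
        best = max(rules, key=lambda r: len(r.covered & uncovered) / r.size)
        if not best.covered & uncovered:
            raise ValueError("candidate rules do not cover all examples")
        chosen.append(best)
        uncovered -= best.covered
    return chosen


if __name__ == "__main__":
    # Toy instance: 6 training examples, 4 candidate rules from stage one.
    candidates = [
        Rule("r1", frozenset({0, 1, 2}), 2),
        Rule("r2", frozenset({2, 3}), 1),
        Rule("r3", frozenset({3, 4, 5}), 3),
        Rule("r4", frozenset({0, 5}), 2),
    ]
    print([r.name for r in select_rules(candidates, 6)])  # e.g. ['r2', 'r1', 'r3']
```

In the paper the selection is exact (minimum total size), so a MaxSAT or ILP solver would replace the greedy loop; the greedy version only approximates the optimum but shows how the enumerated rules feed into the cover problem.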
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient [51.37720227675476]
We introduce a new variant of the Decision-Estimation Coefficient, and use it to derive new lower bounds that improve upon prior work on three fronts.
We provide upper bounds on regret that scale with the same quantity, thereby closing all but one of the gaps between upper and lower bounds in Foster et al.
Our results apply to both the regret framework and PAC framework, and make use of several new analysis and algorithm design techniques that we anticipate will find broader use.
arXiv Detail & Related papers (2023-01-19T18:24:08Z)
- Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization [8.836422771217084]
Multi-objective reinforcement learning (MORL) algorithms tackle sequential decision problems where agents may have different preferences.
We introduce a novel algorithm that uses Generalized Policy Improvement (GPI) to define principled, formally-derived prioritization schemes.
We empirically show that our method outperforms state-of-the-art MORL algorithms in challenging multi-objective tasks.
arXiv Detail & Related papers (2023-01-18T20:54:40Z)
- Efficient Learning of Interpretable Classification Rules [34.27987659227838]
This paper contributes an interpretable learning framework, IMLI, based on maximum satisfiability (MaxSAT) for classification rules expressible in propositional logic.
In our experiments, IMLI achieves the best balance among prediction accuracy, interpretability, and scalability.
arXiv Detail & Related papers (2022-05-14T00:36:38Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment [79.5678820246642]
We show that certain action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
We generalize the recently proposed societal decision-making framework as a more granular formalism than the Markov decision process.
arXiv Detail & Related papers (2021-06-28T21:29:13Z)
- Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z)
- Goal Seeking Quadratic Unconstrained Binary Optimization [0.5439020425819]
We present two variants of goal-seeking QUBO that minimize the deviation from the goal through a tabu-search based greedy one-flip heuristic.
arXiv Detail & Related papers (2021-03-24T03:03:13Z)
- Optimal Decision Lists using SAT [39.32513817522866]
We show for the first time how to construct optimal "perfect" decision lists which are perfectly accurate on the training data.
We also give a new method for determining optimal sparse decision lists, which trade off size and accuracy.
arXiv Detail & Related papers (2020-10-19T23:33:42Z)
- Computing Optimal Decision Sets with SAT [39.32513817522866]
Most explainable machine learning models use decision rules.
In order to be easy for humans to understand, these rules must be concise.
This paper focuses on decision sets, a type of model with unordered rules, which explains each prediction with a single rule.
arXiv Detail & Related papers (2020-07-29T22:35:22Z)