Computing Optimal Decision Sets with SAT
- URL: http://arxiv.org/abs/2007.15140v1
- Date: Wed, 29 Jul 2020 22:35:22 GMT
- Title: Computing Optimal Decision Sets with SAT
- Authors: Jinqiang Yu, Alexey Ignatiev, Peter J. Stuckey, Pierre Le Bodic
- Abstract summary: Arguably, the most explainable machine learning models use decision rules.
In order to be easy for humans to understand, these rules must be concise.
This paper focuses on decision sets, a type of model with unordered rules, which explains each prediction with a single rule.
- Score: 39.32513817522866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning is increasingly used to help make decisions, there is a
demand for these decisions to be explainable. Arguably, the most explainable
machine learning models use decision rules. This paper focuses on decision
sets, a type of model with unordered rules, which explains each prediction with
a single rule. In order to be easy for humans to understand, these rules must
be concise. Earlier work on generating optimal decision sets first minimizes
the number of rules, and then minimizes the number of literals, but the
resulting rules can often be very large. Here we consider a better measure,
namely the total size of the decision set in terms of literals. So we are not
driven to a small set of rules which require a large number of literals. We
provide the first approach to determine minimum-size decision sets that achieve
minimum empirical risk and then investigate sparse alternatives where we trade
accuracy for size. By finding optimal solutions we show we can build decision
set classifiers that are almost as accurate as the best heuristic methods, but
far more concise, and hence more explainable.
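For intuition only, the following small Python sketch (hypothetical rules and feature names, not the paper's SAT encoding) illustrates what a decision set is and the size measure discussed above: rules are unordered, a prediction is explained by the single rule that fires, and size is counted as the total number of literals rather than the number of rules.

```python
# Hypothetical decision set: a set of unordered (condition, class) rules.
# A condition is a set of literals; each literal is a (feature, value) pair.
# This only illustrates the objects involved, not the paper's SAT model.
decision_set = [
    ({("fever", True), ("cough", True)}, "flu"),   # rule with 2 literals
    ({("fever", False)}, "healthy"),               # rule with 1 literal
]

def classify(example, rules, default="healthy"):
    """Return the class of a rule whose literals all hold on the example.
    In a decision set the rules are unordered and (ideally) at most one
    rule fires, so that single rule explains the prediction."""
    for condition, label in rules:
        if all(example.get(feature) == value for feature, value in condition):
            return label
    return default

# Two size measures: earlier work first minimizes the number of rules and
# only then the number of literals; this paper minimizes total literals.
num_rules = len(decision_set)                                 # -> 2
total_literals = sum(len(cond) for cond, _ in decision_set)   # -> 3

print(classify({"fever": True, "cough": True}, decision_set))  # flu
print(num_rules, total_literals)
```

The sparse variants mentioned in the abstract then trade this literal count off against the number of training errors.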
Related papers
- Minimax-Bayes Reinforcement Learning [2.7456483236562437]
This paper studies (sometimes approximate) minimax-Bayes solutions for various reinforcement learning problems.
We find that while the worst-case prior depends on the setting, the corresponding minimax policies are more robust than those that assume a standard (i.e. uniform) prior.
arXiv Detail & Related papers (2023-02-21T17:10:21Z)
- Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic-based machine learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z)
- A Nested Genetic Algorithm for Explaining Classification Data Sets with Decision Rules [3.8271082752302137]
We aim to automatically extract a set of decision rules (rule set) that best explains a classification data set.
First, a large set of decision rules is extracted from a set of decision trees trained on the data set.
The rule set should be concise and accurate, with maximum coverage and a minimum number of inconsistencies.
arXiv Detail & Related papers (2022-08-23T11:42:15Z)
- Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration [118.26862029820447]
We introduce a new notion, "decision calibration", that requires the predicted distribution and true distribution to be "indistinguishable" to a set of downstream decision-makers.
Decision calibration improves decision-making on skin lesion and ImageNet classification with modern neural networks.
arXiv Detail & Related papers (2021-07-12T20:17:28Z)
- Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment [79.5678820246642]
We show that certain action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
We generalize the recently proposed societal decision-making framework as a more granular formalism than the Markov decision process.
arXiv Detail & Related papers (2021-06-28T21:29:13Z)
- A Scalable Two Stage Approach to Computing Optimal Decision Sets [29.946141040012545]
Rule-based models, such as decision trees, decision lists, and decision sets, are conventionally deemed to be the most interpretable.
Recent work uses propositional satisfiability (SAT) solving to generate minimum-size decision sets.
This paper proposes a novel approach to learning minimum-size decision sets by enumerating individual rules of the target decision set independently of each other, and then solving a set cover problem to select a subset of rules (a greedy sketch of this selection step is given after this list).
arXiv Detail & Related papers (2021-02-03T06:51:49Z)
- Optimal Decision Lists using SAT [39.32513817522866]
We show for the first time how to construct optimal "perfect" decision lists which are perfectly accurate on the training data.
We also give a new method for determining optimal sparse decision lists, which trade off size and accuracy.
arXiv Detail & Related papers (2020-10-19T23:33:42Z)
- Diverse Rule Sets [20.170305081348328]
Rule-based systems are experiencing a renaissance owing to their intuitive if-then representation.
We propose a novel approach to inferring diverse rule sets by optimizing for small overlap among decision rules.
We then devise an efficient randomized algorithm, which samples rules that are highly discriminative and have small overlap.
arXiv Detail & Related papers (2020-06-17T14:15:25Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
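As a rough picture of the set-cover selection step described in the "A Scalable Two Stage Approach" entry above, here is a hypothetical greedy sketch; that paper selects an optimal subset of the enumerated rules, so greedy is used here purely for illustration, and the rule names and example indices are made up.

```python
# Hypothetical second stage: choose a subset of pre-enumerated rules that
# covers every training example (set cover). The paper solves this optimally;
# a greedy pass is shown only to convey the structure of the problem.
candidate_rules = {
    "r1": {0, 1, 2},   # examples each candidate rule classifies correctly
    "r2": {2, 3},
    "r3": {3, 4, 5},
    "r4": {0, 5},
}
universe = set().union(*candidate_rules.values())

def greedy_cover(rules, examples):
    """Repeatedly pick the rule covering the most still-uncovered examples."""
    uncovered, chosen = set(examples), []
    while uncovered:
        best = max(rules, key=lambda r: len(rules[r] & uncovered))
        if not rules[best] & uncovered:
            break  # remaining examples cannot be covered by any rule
        chosen.append(best)
        uncovered -= rules[best]
    return chosen

print(greedy_cover(candidate_rules, universe))  # e.g. ['r1', 'r3']
```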
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.