On the Explanatory Power of Decision Trees
- URL: http://arxiv.org/abs/2108.05266v1
- Date: Wed, 11 Aug 2021 15:08:11 GMT
- Title: On the Explanatory Power of Decision Trees
- Authors: Gilles Audemard and Steve Bellart and Louenas Bounia and Frédéric Koriche and Jean-Marie Lagniez and Pierre Marquis
- Abstract summary: We show that the set of all sufficient reasons of minimal size for an instance given a decision tree can be exponentially larger than the size of the input.
We show how the explanatory importance of a feature and the number of sufficient reasons can be obtained via a model counting operation.
Unlike sufficient reasons, the set of all contrastive explanations for an instance given a decision tree can be derived, minimized and counted in polynomial time.
- Score: 20.87501058448681
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision trees have long been recognized as models of choice in sensitive
applications where interpretability is of paramount importance. In this paper,
we examine the computational ability of Boolean decision trees in deriving,
minimizing, and counting sufficient reasons and contrastive explanations. We
prove that the set of all sufficient reasons of minimal size for an instance
given a decision tree can be exponentially larger than the size of the input
(the instance and the decision tree). Therefore, generating the full set of
sufficient reasons can be out of reach. In addition, computing a single
sufficient reason is generally not informative enough; indeed, two sufficient
reasons for the same instance may differ on many features. To deal with this
issue and generate synthetic views of the set of all sufficient reasons, we
introduce the notions of relevant features and of necessary features that
characterize the (possibly negated) features appearing in at least one or in
every sufficient reason, and we show that they can be computed in polynomial
time. We also introduce the notion of explanatory importance, that indicates
how frequent each (possibly negated) feature is in the set of all sufficient
reasons. We show how the explanatory importance of a feature and the number of
sufficient reasons can be obtained via a model counting operation, which turns
out to be practical in many cases. We also explain how to enumerate sufficient
reasons of minimal size. We finally show that, unlike sufficient reasons, the
set of all contrastive explanations for an instance given a decision tree can
be derived, minimized and counted in polynomial time.
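To make these notions concrete, the following Python sketch (not taken from the paper; the Node encoding, the function names, and the greedy-deletion strategy are illustrative assumptions) extracts one subset-minimal sufficient reason for an instance given a Boolean decision tree, by dropping literals as long as the remaining partial assignment still forces the tree's decision.

```python
# Minimal sketch, not the authors' implementation: the Node encoding, function
# names, and greedy-deletion strategy are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Node:
    feature: Optional[str] = None   # None at a leaf
    low: Optional["Node"] = None    # subtree followed when the feature is False
    high: Optional["Node"] = None   # subtree followed when the feature is True
    label: Optional[bool] = None    # class predicted at a leaf

def predict(node: Node, instance: Dict[str, bool]) -> bool:
    """Follow the branch selected by the instance down to a leaf."""
    while node.feature is not None:
        node = node.high if instance[node.feature] else node.low
    return node.label

def forces(node: Node, partial: Dict[str, bool], target: bool) -> bool:
    """True iff every completion of the partial assignment reaches a `target` leaf."""
    if node.feature is None:
        return node.label == target
    if node.feature in partial:
        return forces(node.high if partial[node.feature] else node.low, partial, target)
    # Unassigned feature: both subtrees must still force the target class.
    return forces(node.low, partial, target) and forces(node.high, partial, target)

def sufficient_reason(root: Node, instance: Dict[str, bool]) -> Dict[str, bool]:
    """Greedily drop literals while the decision on the instance stays forced."""
    target = predict(root, instance)
    reason = dict(instance)
    for f in list(instance):
        trial = {k: v for k, v in reason.items() if k != f}
        if forces(root, trial, target):
            reason = trial
    return reason   # subset-minimal, but not necessarily of minimum size

# Example: a tree computing (x1 AND x2) OR x3.
leaf = lambda b: Node(label=b)
x3_node = lambda: Node("x3", low=leaf(False), high=leaf(True))
tree = Node("x1", low=x3_node(), high=Node("x2", low=x3_node(), high=leaf(True)))
print(sufficient_reason(tree, {"x1": True, "x2": True, "x3": False}))
# -> {'x1': True, 'x2': True}
```

The greedy pass only guarantees subset-minimality; the exponential blow-up discussed in the abstract concerns the set of all sufficient reasons of minimal size, which this sketch does not attempt to enumerate.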
Related papers
- P-FOLIO: Evaluating and Improving Logical Reasoning with Abundant Human-Written Reasoning Chains [97.25943550933829]
We present P-FOLIO, a human-annotated dataset consisting of diverse and complex reasoning chains.
We use P-FOLIO to evaluate and improve large-language-model (LLM) reasoning capabilities.
arXiv Detail & Related papers (2024-10-11T19:22:57Z)
- Logic for Explainable AI [11.358487655918676]
A central quest in explainable AI relates to understanding the decisions made by (learned) classifiers.
We discuss in this tutorial a comprehensive, semantical and computational theory of explainability along these dimensions.
arXiv Detail & Related papers (2023-05-09T04:53:57Z)
- A New Class of Explanations for Classifiers with Non-Binary Features [11.358487655918676]
Two types of explanations have been receiving increased attention in the literature when analyzing the decisions made by classifiers.
We show that these explanations can be significantly improved in the presence of non-binary features.
Necessary and sufficient reasons were also shown to be the prime implicates and implicants of the complete reason for a decision.
arXiv Detail & Related papers (2023-04-28T11:05:46Z)
- Inapproximability of sufficient reasons for decision trees [91.75627458356654]
In this note, we establish the hardness of approximation of the problem of computing the minimal size of a $\delta$-sufficient reason for decision trees.
arXiv Detail & Related papers (2023-04-05T23:02:03Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- On Tackling Explanation Redundancy in Decision Trees [19.833126971063724]
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
This paper offers both theoretical and experimental arguments demonstrating that, as long as interpretability of decision trees equates with succinctness of explanations, decision trees ought not to be deemed interpretable.
arXiv Detail & Related papers (2022-05-20T05:33:38Z)
- On the Computation of Necessary and Sufficient Explanations [11.358487655918676]
The complete reason behind a decision is a Boolean formula that characterizes why the decision was made.
In this paper, we refer to the prime implicants of a complete reason as necessary reasons for the decision.
We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time.
arXiv Detail & Related papers (2022-03-20T04:39:41Z)
- Trading Complexity for Sparsity in Random Forest Explanations [20.87501058448681]
We introduce majoritary reasons which are prime implicants of a strict majority of decision trees.
Experiments conducted on various datasets reveal the existence of a trade-off between runtime complexity and sparsity.
arXiv Detail & Related papers (2021-08-11T15:19:46Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions are efficiently automated. Using models with more than $8000$ variables, checking is computed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.