Provably Precise, Succinct and Efficient Explanations for Decision Trees
- URL: http://arxiv.org/abs/2205.09569v1
- Date: Thu, 19 May 2022 13:54:52 GMT
- Title: Provably Precise, Succinct and Efficient Explanations for Decision Trees
- Authors: Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper and
Joao Marques-Silva
- Abstract summary: Decision trees (DTs) embody interpretable classifiers.
Recent work has demonstrated that predictions in DTs ought to be explained with rigorous approaches.
delta-relevant sets denote explanations that are succinct and provably precise.
- Score: 32.062312674333775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision trees (DTs) embody interpretable classifiers. DTs have been
advocated for deployment in high-risk applications, but also for explaining
other complex classifiers. Nevertheless, recent work has demonstrated that
predictions in DTs ought to be explained with rigorous approaches. Although
rigorous explanations can be computed in polynomial time for DTs, their size
may be beyond the cognitive limits of human decision makers. This paper
investigates the computation of {\delta}-relevant sets for DTs.
{\delta}-relevant sets denote explanations that are succinct and provably
precise. These sets represent generalizations of rigorous explanations, which
are precise with probability one, and so they enable trading off explanation
size for precision. The paper proposes two logic encodings for computing
smallest {\delta}-relevant sets for DTs. The paper further devises a
polynomial-time algorithm for computing {\delta}-relevant sets which are not
guaranteed to be subset-minimal, but which the experiments show to be most
often subset-minimal in practice. The experimental results also demonstrate the
practical efficiency of computing smallest {\delta}-relevant sets.
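As a concrete illustration of the abstract's central notion, the sketch below checks δ-relevance exactly on a toy boolean decision tree by enumerating completions of the unfixed features under a uniform distribution. The tree, feature indices, and instance are invented for illustration; the paper's logic encodings compute *smallest* δ-relevant sets, which this brute-force check does not attempt.

```python
from itertools import product

# Toy decision tree over three boolean features x[0], x[1], x[2]
# (illustrative only, not taken from the paper).
def predict(x):
    if x[0] == 1:
        return 1 if x[1] == 1 else 0
    return 1 if x[2] == 1 else 0

def precision(instance, fixed, n_features=3):
    """Exact precision of the set `fixed`: the fraction of uniform
    completions of the free features that keep the prediction of
    `instance` unchanged. `fixed` is delta-relevant iff this >= delta."""
    target = predict(instance)
    free = [i for i in range(n_features) if i not in fixed]
    hits = total = 0
    for bits in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, b in zip(free, bits):
            x[i] = b
        total += 1
        hits += (predict(x) == target)
    return hits / total

instance = (1, 1, 0)                # predicted class 1
print(precision(instance, {0, 1}))  # → 1.0  (rigorous: precise with probability one)
print(precision(instance, {1}))     # → 0.75 (delta-relevant for any delta <= 0.75)
```

Dropping feature 0 shrinks the explanation from two features to one at the cost of precision falling from 1.0 to 0.75, which is exactly the size/precision trade-off the abstract describes.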
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Interpretability at Scale: Identifying Causal Mechanisms in Alpaca [62.65877150123775]
We use Boundless DAS to efficiently search for interpretable causal structure in large language models while they follow instructions.
Our findings mark a first step toward faithfully understanding the inner workings of our ever-growing and most widely deployed language models.
arXiv Detail & Related papers (2023-05-15T17:15:40Z) - On Computing Probabilistic Abductive Explanations [30.325691263226968]
The most widely studied explainable AI (XAI) approaches are unsound.
PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size.
This paper investigates practical approaches for computing relevant sets for a number of widely used classifiers.
arXiv Detail & Related papers (2022-12-12T15:47:10Z) - On Tackling Explanation Redundancy in Decision Trees [19.833126971063724]
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
This paper offers both theoretical and experimental arguments demonstrating that, if interpretability of decision trees equates with succinctness of explanations, then decision trees ought not to be deemed interpretable.
arXiv Detail & Related papers (2022-05-20T05:33:38Z) - Rationales for Sequential Predictions [117.93025782838123]
Sequence models are a critical component of modern NLP systems, but their predictions are difficult to explain.
We consider model explanations through rationales, subsets of context that can explain individual model predictions.
We propose an efficient greedy algorithm to approximate this objective.
arXiv Detail & Related papers (2021-09-14T01:25:15Z) - On Efficiently Explaining Graph-Based Classifiers [16.199563506727316]
This paper shows that not only may decision trees (DTs) not be interpretable, but it also proposes a polynomial-time algorithm for computing one PI-explanation of a DT.
In addition, the paper also proposes a polynomial-time algorithm for computing one contrastive explanation.
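A minimal sketch of the deletion-based linear search commonly used to extract a subset-minimal sufficient reason (PI-explanation), applied to an invented toy boolean tree. The sufficiency check here is brute-force enumeration; a genuinely polynomial-time implementation for a DT would instead decide sufficiency by traversing the tree's paths.

```python
from itertools import product

# Toy boolean decision tree (illustrative, not from the paper).
def predict(x):
    if x[0] == 1:
        return 1 if x[1] == 1 else 0
    return 1 if x[2] == 1 else 0

def is_sufficient(instance, fixed, n=3):
    """True iff fixing the features in `fixed` to their values in
    `instance` forces the prediction for every completion."""
    target = predict(instance)
    free = [i for i in range(n) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, b in zip(free, bits):
            x[i] = b
        if predict(x) != target:
            return False
    return True

def pi_explanation(instance, n=3):
    """Deletion-based linear search: tentatively drop each feature and
    restore it only if sufficiency is lost. The result is a
    subset-minimal sufficient reason (a PI-explanation)."""
    fixed = set(range(n))
    for i in range(n):
        fixed.discard(i)
        if not is_sufficient(instance, fixed, n):
            fixed.add(i)
    return fixed

print(pi_explanation((1, 1, 0)))  # → {0, 1}: x0 and x1 alone force class 1
```

Each feature is examined exactly once, so the search makes n sufficiency queries; the overall complexity is governed by the cost of each query, which is where the tree-specific polynomial-time check matters.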
arXiv Detail & Related papers (2021-06-02T17:55:41Z) - Efficient Explanations With Relevant Sets [30.296628060841645]
This paper investigates solutions for tackling the practical limitations of $\delta$-relevant sets.
The computation of the subset of $\delta$-relevant sets is in NP, and can be solved with a number of calls to an NP oracle.
arXiv Detail & Related papers (2021-06-01T14:57:58Z) - On Explaining Decision Trees [17.646704122091087]
Decision trees (DTs) epitomize what has come to be known as interpretable machine learning (ML) models.
This paper shows that in some settings DTs can hardly be deemed interpretable, with paths in a DT being arbitrarily larger than a PI-explanation.
arXiv Detail & Related papers (2020-10-21T14:33:53Z) - Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
arXiv Detail & Related papers (2020-06-23T04:25:36Z) - Information-Theoretic Probing with Minimum Description Length [74.29846942213445]
We propose an alternative to the standard probes: information-theoretic probing with minimum description length (MDL).
With MDL probing, training a probe to predict labels is recast as teaching it to effectively transmit the data.
We show that these methods agree in results and are more informative and stable than the standard probes.
arXiv Detail & Related papers (2020-03-27T09:35:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.