Computing Abductive Explanations for Boosted Trees
- URL: http://arxiv.org/abs/2209.07740v1
- Date: Fri, 16 Sep 2022 06:53:42 GMT
- Title: Computing Abductive Explanations for Boosted Trees
- Authors: Gilles Audemard, Jean-Marie Lagniez, Pierre Marquis, Nicolas
Szczepanski
- Abstract summary: We introduce the notion of tree-specific explanation for a boosted tree.
We show that tree-specific explanations are abductive explanations that can be computed in polynomial time.
We also explain how to derive a subset-minimal abductive explanation from a tree-specific explanation.
- Score: 22.349433202401354
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Boosted trees are a dominant ML model, exhibiting high accuracy. However,
boosted trees are hardly intelligible, and this is a problem whenever they are
used in safety-critical applications. Indeed, in such a context, rigorous
explanations of the predictions made are expected. Recent work has shown how
subset-minimal abductive explanations can be derived for boosted trees, using
automated reasoning techniques. However, the generation of such well-founded
explanations is intractable in the general case. To improve the scalability of
their generation, we introduce the notion of tree-specific explanation for a
boosted tree. We show that tree-specific explanations are abductive
explanations that can be computed in polynomial time. We also explain how to
derive a subset-minimal abductive explanation from a tree-specific explanation.
Experiments on various datasets show the computational benefits of leveraging
tree-specific explanations for deriving subset-minimal abductive explanations.
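To make the last step concrete, below is a minimal Python sketch (not the authors' implementation) of the generic deletion-based scheme commonly used to extract a subset-minimal abductive explanation from a larger abductive explanation, such as a tree-specific one. The oracle is_sufficient is a hypothetical placeholder for a procedure that decides whether fixing only the given features to their values in the instance being explained is enough to force the boosted tree's prediction.

# Deletion-based minimization sketch; is_sufficient is an assumed oracle.
from typing import Callable, FrozenSet, Hashable, Iterable

Feature = Hashable

def minimize_explanation(
    explanation: Iterable[Feature],
    is_sufficient: Callable[[FrozenSet[Feature]], bool],
) -> FrozenSet[Feature]:
    """Greedy deletion pass: drop a feature whenever the rest stays sufficient.

    The input is assumed to already be an abductive explanation (e.g. one
    obtained from a tree-specific explanation), so the invariant "the current
    set is sufficient" holds from the start and is preserved by every drop.
    """
    current = set(explanation)
    for feature in sorted(current, key=repr):  # fixed order for reproducibility
        candidate = frozenset(current - {feature})
        if is_sufficient(candidate):           # prediction is still forced
            current.remove(feature)            # the feature was redundant
    return frozenset(current)

The pass issues one sufficiency query per feature of the starting explanation, so its cost is dominated by the oracle; starting from a tree-specific explanation, which is itself abductive and computable in polynomial time, is consistent with the computational benefits reported in the experiments.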
Related papers
- Why do Random Forests Work? Understanding Tree Ensembles as
Self-Regularizing Adaptive Smoothers [68.76846801719095]
We argue that the high-level dichotomy between bias reduction and variance reduction, prevalent in statistics, is insufficient for understanding tree ensembles.
We show that forests can improve upon trees by three distinct mechanisms that are usually implicitly entangled.
arXiv Detail & Related papers (2024-02-02T15:36:43Z)
- Verifying Relational Explanations: A Probabilistic Approach [2.113770213797994]
We develop an approach where we assess the uncertainty in explanations generated by GNNExplainer.
We learn a factor graph model to quantify uncertainty in an explanation.
Our results on several datasets show that our approach can help verify explanations from GNNExplainer.
arXiv Detail & Related papers (2024-01-05T08:14:51Z)
- Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions [93.40614719648386]
Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning.
Recent works turn to retrieving external knowledge to augment CoT reasoning.
We propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree).
arXiv Detail & Related papers (2023-11-23T12:52:37Z)
- Improving the Validity of Decision Trees as Explanations [2.457872341625575]
We train a shallow tree with the objective of minimizing the maximum misclassification error across all leaf nodes.
The overall statistical performance of the shallow tree can become comparable to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-11T21:14:29Z)
- Conceptual Views on Tree Ensemble Classifiers [0.0]
Random Forests and related tree-based methods are popular for supervised learning from tabular data.
Apart from their ease of parallelization, their classification performance is also superior.
However, this comes at the cost of interpretability: statistical methods are often used to compensate for this disadvantage, yet the ability of such methods to provide local explanations, and in particular global explanations, is limited.
arXiv Detail & Related papers (2023-02-10T14:33:21Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- On Tackling Explanation Redundancy in Decision Trees [19.833126971063724]
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
This paper offers both theoretical and experimental arguments demonstrating that, if the interpretability of decision trees is equated with the succinctness of their explanations, then decision trees ought not to be deemed interpretable.
arXiv Detail & Related papers (2022-05-20T05:33:38Z)
- Explaining Answers with Entailment Trees [16.555369850015055]
We aim to explain answers by showing how evidence leads to the answer in a systematic way.
Our approach is to generate explanations in the form of entailment trees, namely a tree of entailment steps from facts that are known, through intermediate conclusions, to the final answer.
To train a model with this skill, we created ENTAILMENTBANK, the first dataset to contain multistep entailment trees.
arXiv Detail & Related papers (2021-04-17T23:13:56Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- Towards Interpretable Natural Language Understanding with Explanations as Latent Variables [146.83882632854485]
We develop a framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model.
arXiv Detail & Related papers (2020-10-24T02:05:56Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)