Improving the Validity of Decision Trees as Explanations
- URL: http://arxiv.org/abs/2306.06777v5
- Date: Tue, 4 Jun 2024 10:09:10 GMT
- Title: Improving the Validity of Decision Trees as Explanations
- Authors: Jiri Nemecek, Tomas Pevny, Jakub Marecek
- Abstract summary: We train a shallow tree with the objective of minimizing the maximum misclassification error across all leaf nodes.
The overall statistical performance of the shallow tree can become comparable to state-of-the-art methods.
- Score: 2.457872341625575
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In classification and forecasting with tabular data, one often utilizes tree-based models. These can be competitive with deep neural networks on tabular data and, under some conditions, explainable. The explainability depends on the depth of the tree and on the accuracy in each leaf of the tree. We point out that decision trees containing leaves with unbalanced accuracy can provide misleading explanations. Low-accuracy leaves give less valid explanations, which could be interpreted as unfairness among subgroups utilizing these explanations. Here, we train a shallow tree with the objective of minimizing the maximum misclassification error across all leaf nodes. The shallow tree provides a global explanation, while the overall statistical performance of the shallow tree can become comparable to state-of-the-art methods (e.g., well-tuned XGBoost) by extending the leaves with further models.
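To make the two-stage structure concrete, here is a minimal sketch in Python, assuming scikit-learn and xgboost are available. The greedy CART tree below is only a stand-in: the paper's objective (minimizing the *maximum* per-leaf misclassification error) would need dedicated optimization, which is not shown here.

```python
# Sketch of: shallow explainable tree + per-leaf errors + leaf extension.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Stage 1: a shallow tree that serves as the global explanation.
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
leaf_of = shallow.apply(X)  # leaf index for each training sample

# Per-leaf misclassification error -- unbalanced errors are the problem the
# paper targets: low-accuracy leaves give less valid explanations.
errors = {}
for leaf in np.unique(leaf_of):
    mask = leaf_of == leaf
    errors[leaf] = 1.0 - (shallow.predict(X[mask]) == y[mask]).mean()
print("max per-leaf error:", max(errors.values()))

# Stage 2: extend each leaf with a stronger model (e.g. XGBoost) fitted only
# on the samples routed to that leaf, recovering overall accuracy.
leaf_models = {}
for leaf in np.unique(leaf_of):
    mask = leaf_of == leaf
    if len(np.unique(y[mask])) > 1:  # need both classes to fit a model
        leaf_models[leaf] = XGBClassifier(n_estimators=50).fit(X[mask], y[mask])

def predict(x_row):
    """Route through the shallow tree, then defer to the leaf model if any."""
    leaf = shallow.apply(x_row.reshape(1, -1))[0]
    model = leaf_models.get(leaf, shallow)
    return model.predict(x_row.reshape(1, -1))[0]

print(predict(X[0]))
```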
Related papers
- Decision Trees for Interpretable Clusters in Mixture Models and Deep Representations [5.65604054654671]
We introduce the notion of an explainability-to-noise ratio for mixture models.
We propose an algorithm that takes as input a mixture model and constructs a suitable tree in data-independent time.
We prove upper and lower bounds on the error rate of the resulting decision tree.
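As a rough illustration of the flavor of such constructions (not the paper's actual algorithm or bounds), one can build an axis-aligned threshold tree from the mixture parameters alone, so construction time is data-independent and each leaf isolates one component mean:

```python
import numpy as np

def build_tree(centers, ids=None):
    """Recursively split component means until each leaf holds one center."""
    if ids is None:
        ids = list(range(len(centers)))
    if len(ids) == 1:
        return {"leaf": ids[0]}
    sub = centers[ids]
    dim = int(np.argmax(sub.max(axis=0) - sub.min(axis=0)))  # widest spread
    thresh = (sub[:, dim].min() + sub[:, dim].max()) / 2.0   # midpoint split
    left = [i for i in ids if centers[i, dim] <= thresh]
    right = [i for i in ids if centers[i, dim] > thresh]
    return {"dim": dim, "thresh": thresh,
            "left": build_tree(centers, left),
            "right": build_tree(centers, right)}

def assign(tree, x):
    """Route a point to a cluster by walking the threshold tree."""
    while "leaf" not in tree:
        tree = tree["left"] if x[tree["dim"]] <= tree["thresh"] else tree["right"]
    return tree["leaf"]

centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
tree = build_tree(centers)
print(assign(tree, np.array([4.2, 0.3])))  # -> 1
```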
arXiv Detail & Related papers (2024-11-03T14:00:20Z)
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
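A minimal sketch of the data-curation step this implies, assuming scikit-learn: fit trees on many datasets and keep only those whose held-out accuracy clears a bar, so they can serve as supervision targets. The MetaTree transformer itself, and globally optimized trees, are out of scope here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

targets = []  # (dataset, tree) pairs a meta-learner could be trained on
for seed in range(100):
    X, y = make_classification(n_samples=400, n_features=8, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    tree = DecisionTreeClassifier(max_depth=3, random_state=seed).fit(X_tr, y_tr)
    if tree.score(X_te, y_te) >= 0.9:  # keep only strongly generalizing trees
        targets.append(((X_tr, y_tr), tree))
print(f"kept {len(targets)} of 100 trees as meta-training targets")
```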
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
- Why do Random Forests Work? Understanding Tree Ensembles as Self-Regularizing Adaptive Smoothers [68.76846801719095]
We argue that the current high-level dichotomy into bias- and variance-reduction prevalent in statistics is insufficient to understand tree ensembles.
We show that forests can improve upon trees by three distinct mechanisms that are usually implicitly entangled.
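One way to see the smoother view underlying this line of work: a forest's prediction can be rewritten as a weighted average of training targets, with data-adaptive weights from leaf co-membership. A short sketch, assuming scikit-learn (bootstrapping is disabled so the identity is exact); this illustrates the viewpoint, not the paper's analysis.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
forest = RandomForestRegressor(n_estimators=50, max_features=0.5,
                               bootstrap=False, random_state=0).fit(X, y)

def smoother_weights(forest, X_train, x):
    """Equivalent-kernel weights: average, over trees, of the indicator that
    a training point shares x's leaf, normalized within each tree."""
    w = np.zeros(len(X_train))
    for tree in forest.estimators_:
        train_leaves = tree.apply(X_train)
        leaf = tree.apply(x.reshape(1, -1))[0]
        same = train_leaves == leaf
        w += same / same.sum()
    return w / len(forest.estimators_)

x = X[0]
w = smoother_weights(forest, X, x)
# The weighted average of training targets matches the forest's prediction.
print(np.dot(w, y), forest.predict(x.reshape(1, -1))[0])
```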
arXiv Detail & Related papers (2024-02-02T15:36:43Z)
- Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions [93.40614719648386]
Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning.
Recent works turn to retrieving external knowledge to augment CoT reasoning.
We propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree).
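A loose, hypothetical sketch of the mechanism: each node in a question-decomposition tree is either answered directly or composed from child answers, and the higher-confidence option wins. The `llm_answer` stub below is invented for illustration only (not a real API); ProbTree's actual prompting, retrieval, and scoring are described in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    question: str
    children: list = field(default_factory=list)

def llm_answer(question, context=""):
    """Hypothetical stub returning (answer, confidence in [0, 1])."""
    if context:  # with child answers supplied, composition succeeds
        return ("James Cameron", 0.8)
    canned = {
        "Who directed the film that won Best Picture in 1998?": ("unsure", 0.3),
        "Which film won Best Picture in 1998?": ("Titanic", 0.9),
        "Who directed Titanic?": ("James Cameron", 0.9),
    }
    return canned.get(question, ("unknown", 0.1))

def solve(node):
    """Answer a node directly or by composing child answers, keeping
    whichever option carries the higher confidence."""
    direct = llm_answer(node.question)
    if not node.children:
        return direct
    answers, conf = [], 1.0
    for child in node.children:
        ans, c = solve(child)
        answers.append(ans)
        conf *= c  # confidence of a subtree is the product over its nodes
    composed, c = llm_answer(node.question, context="; ".join(answers))
    return max(direct, (composed, conf * c), key=lambda pair: pair[1])

root = Node("Who directed the film that won Best Picture in 1998?",
            [Node("Which film won Best Picture in 1998?"),
             Node("Who directed Titanic?")])
print(solve(root))  # -> ('James Cameron', 0.648): decomposition wins
```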
arXiv Detail & Related papers (2023-11-23T12:52:37Z)
- Conceptual Views on Tree Ensemble Classifiers [0.0]
Random Forests and related tree-based methods are popular for supervised learning from table based data.
Apart from their ease of parallelization, their classification performance is also superior. However, this performance comes at the cost of explainability.
Statistical methods are often used to compensate for this disadvantage, yet their ability to provide local explanations, and in particular global explanations, is limited.
arXiv Detail & Related papers (2023-02-10T14:33:21Z)
- Computing Abductive Explanations for Boosted Trees [22.349433202401354]
We introduce the notion of tree-specific explanation for a boosted tree.
We show that tree-specific explanations are abductive explanations that can be computed in polynomial time.
We also explain how to derive a subset-minimal abductive explanation from a tree-specific explanation.
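The derivation step can be illustrated with generic greedy deletion: drop each fixed feature if the remainder still entails the prediction. The `entails` oracle below is brute force over a toy domain for illustration; the paper obtains it efficiently from the boosted tree's structure instead.

```python
import itertools

def classify(x):
    # Toy stand-in classifier over 3 binary features (not a boosted tree).
    return int(x[0] and (x[1] or x[2]))

DOMAIN = [0, 1]
N_FEATURES = 3

def entails(term, label):
    """Does fixing the features in `term` force prediction == label
    for every completion of the remaining features?"""
    free = [i for i in range(N_FEATURES) if i not in term]
    for values in itertools.product(DOMAIN, repeat=len(free)):
        x = dict(term)
        x.update(zip(free, values))
        if classify([x[i] for i in range(N_FEATURES)]) != label:
            return False
    return True

def subset_minimal(term, label):
    """Greedy deletion: drop each fixed feature if the rest still entails
    the label; what survives is a subset-minimal abductive explanation."""
    term = dict(term)
    for i in list(term):
        candidate = {k: v for k, v in term.items() if k != i}
        if entails(candidate, label):
            term = candidate
    return term

x = {0: 1, 1: 1, 2: 0}                         # full instance, classified as 1
print(subset_minimal(x, classify([1, 1, 0])))  # -> {0: 1, 1: 1}
```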
arXiv Detail & Related papers (2022-09-16T06:53:42Z)
- Active-LATHE: An Active Learning Algorithm for Boosting the Error Exponent for Learning Homogeneous Ising Trees [75.93186954061943]
We design and analyze an algorithm that boosts the error exponent by at least 40% when $\rho$ is at least $0.8$.
Our analysis hinges on judiciously exploiting the minute but detectable statistical variation of the samples to allocate more data to parts of the graph.
arXiv Detail & Related papers (2021-10-27T10:45:21Z)
- Tree in Tree: from Decision Trees to Decision Graphs [2.2336243882030025]
Tree in Tree decision graph (TnT) is a framework that extends the conventional decision tree to a more generic and powerful directed acyclic graph.
Our proposed model is a novel, more efficient, and more accurate alternative to widely used decision trees.
arXiv Detail & Related papers (2021-10-01T13:20:05Z)
- Visualizing hierarchies in scRNA-seq data using a density tree-biased autoencoder [50.591267188664666]
We propose an approach for identifying a meaningful tree structure from high-dimensional scRNA-seq data.
We then introduce DTAE, a tree-biased autoencoder that emphasizes the tree structure of the data in low dimensional space.
arXiv Detail & Related papers (2021-02-11T08:48:48Z)
- Robust estimation of tree structured models [0.0]
We show that it is possible to recover trees from noisy binary data up to a small equivalence class of possible trees.
We also provide a characterisation of when the Chow-Liu algorithm consistently learns the underlying tree from the noisy data.
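For context, a compact sketch of the Chow-Liu algorithm on binary samples, assuming numpy and scipy: estimate pairwise mutual information and take a maximum-weight spanning tree. The paper's noise model and consistency characterization are not reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(a, b):
    """Plug-in MI estimate for two binary variables."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu(samples):
    """Return the edges of the max-MI spanning tree over the variables."""
    _, d = samples.shape
    W = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            W[i, j] = mutual_information(samples[:, i], samples[:, j])
    # Negate so that a *minimum* spanning tree maximizes total MI.
    mst = minimum_spanning_tree(-W)
    return [(int(i), int(j)) for i, j in zip(*mst.nonzero())]

# Chain X0 -> X1 -> X2: each variable is a noisy copy of its parent.
rng = np.random.RandomState(0)
x0 = rng.binomial(1, 0.5, 5000)
x1 = np.where(rng.rand(5000) < 0.9, x0, 1 - x0)
x2 = np.where(rng.rand(5000) < 0.9, x1, 1 - x1)
print(chow_liu(np.column_stack([x0, x1, x2])))  # -> [(0, 1), (1, 2)]
```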
arXiv Detail & Related papers (2021-02-10T14:58:40Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a. soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1], [3].
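A small sketch of the soft-routing mechanism in plain numpy: an internal node computes p = sigmoid(w.x + b), and the output mixes leaf values by path probabilities instead of committing to one branch. This shows the mechanism only, not the paper's full deep-forest design.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(x, node):
    """Recursively mix child predictions by routing probabilities."""
    if "value" in node:  # leaf: return its stored prediction
        return node["value"]
    p = sigmoid(np.dot(node["w"], x) + node["b"])  # prob. of routing left
    return (p * soft_tree_predict(x, node["left"])
            + (1 - p) * soft_tree_predict(x, node["right"]))

# A tiny hand-built soft tree over 2 features (weights are illustrative).
tree = {
    "w": np.array([1.0, -1.0]), "b": 0.0,
    "left":  {"value": 1.0},
    "right": {"w": np.array([0.5, 0.5]), "b": -1.0,
              "left": {"value": 0.5}, "right": {"value": 0.0}},
}
print(soft_tree_predict(np.array([2.0, 0.0]), tree))  # ~0.91
```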
arXiv Detail & Related papers (2020-12-29T18:05:05Z)