Explaining Random Forests using Bipolar Argumentation and Markov
Networks (Technical Report)
- URL: http://arxiv.org/abs/2211.11699v1
- Date: Mon, 21 Nov 2022 18:20:50 GMT
- Authors: Nico Potyka, Xiang Yin, Francesca Toni
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Random forests are decision tree ensembles that can be used to solve a
variety of machine learning problems. However, as the number of trees and their
individual size can be large, their decision making process is often
incomprehensible. In order to reason about the decision process, we propose
representing it as an argumentation problem. We generalize sufficient and
necessary argumentative explanations using a Markov network encoding, discuss
the relevance of these explanations and establish relationships to families of
abductive explanations from the literature. As the complexity of the
explanation problems is high, we discuss a probabilistic approximation
algorithm and present first experimental results.
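To make the notion of a sufficient argumentative explanation concrete, here is a minimal, hypothetical sketch: a toy three-tree "forest" over binary features, classified by majority vote, with a brute-force search for minimal feature subsets that force the forest's decision. The trees and the enumeration are illustrative assumptions, not the paper's bipolar-argumentation or Markov-network encoding (which exists precisely because brute force does not scale).

```python
from itertools import combinations, product

# Hypothetical toy "random forest" over three binary features.
# Each tree maps a feature tuple to a class in {0, 1};
# the forest classifies by majority vote.
trees = [
    lambda x: int(x[0] and x[1]),
    lambda x: int(x[1] or x[2]),
    lambda x: int(x[0]),
]

def forest(x):
    votes = sum(t(x) for t in trees)
    return 1 if 2 * votes > len(trees) else 0

def is_sufficient(instance, features):
    """A feature subset is sufficient if fixing those features to their
    values in `instance` forces the forest's class, no matter how the
    remaining features vary."""
    target = forest(instance)
    free = [i for i in range(len(instance)) if i not in features]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if forest(tuple(x)) != target:
            return False
    return True

def minimal_sufficient_explanations(instance):
    """Enumerate subset-minimal sufficient feature sets by size."""
    n = len(instance)
    found = []
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if any(set(s) <= set(subset) for s in found):
                continue  # a subset of this set already suffices
            if is_sufficient(instance, subset):
                found.append(subset)
    return found
```

For the instance (1, 1, 0) this toy forest predicts class 1, and the only minimal sufficient explanation is {feature 0, feature 1}: fixing feature 0 alone does not suffice, since flipping feature 1 changes the vote. The exponential loop over completions is what the paper's probabilistic approximation algorithm is meant to avoid.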
Related papers
- Explaining Bayesian Networks in Natural Language using Factor Arguments. Evaluation in the medical domain [5.999262679775618]
We introduce the notion of factor argument independence to address the question of defining when arguments should be presented jointly or separately.
We present an algorithm that produces a list of all independent factor arguments ordered by their strength.
Our proposal has been validated in the medical domain through a human-driven evaluation study.
(arXiv 2024-10-23)
- Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions [93.40614719648386]
Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning.
Recent works turn to retrieving external knowledge to augment CoT reasoning.
We propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree)
(arXiv 2023-11-23)
- Invariant Causal Set Covering Machines [64.86459157191346]
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
However, the learning algorithms that produce such models are often vulnerable to spurious associations and thus, they are not guaranteed to extract causally-relevant insights.
We propose Invariant Causal Set Covering Machines, an extension of the classical Set Covering Machine algorithm for conjunctions/disjunctions of binary-valued rules that provably avoids spurious associations.
(arXiv 2023-06-07)
- Logic for Explainable AI [11.358487655918676]
A central quest in explainable AI relates to understanding the decisions made by (learned) classifiers.
We discuss in this tutorial a comprehensive, semantical and computational theory of explainability along these dimensions.
(arXiv 2023-05-09)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
(arXiv 2023-02-09)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
(arXiv 2023-01-24)
- On Tackling Explanation Redundancy in Decision Trees [19.833126971063724]
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
This paper offers theoretical and experimental arguments demonstrating that, as long as the interpretability of decision trees equates with the succinctness of their explanations, decision trees ought not to be deemed interpretable.
(arXiv 2022-05-20)
- Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
(arXiv 2021-09-25)
- Trading Complexity for Sparsity in Random Forest Explanations [20.87501058448681]
We introduce majoritary reasons which are prime implicants of a strict majority of decision trees.
Experiments conducted on various datasets reveal the existence of a trade-off between runtime complexity and sparsity.
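The notion of a majoritary reason can be sketched with a small, hypothetical example: a term (partial feature assignment) is an implicant of a strict majority of the individual trees when fixing those features forces each of a majority of trees to the target class. The three-tree forest and the brute-force check below are illustrative assumptions, not that paper's algorithm; a true majoritary reason would additionally require minimality of the term.

```python
from itertools import product

# Hypothetical toy forest over three binary features (majority vote).
trees = [
    lambda x: int(x[0] and x[1]),
    lambda x: int(x[1] or x[2]),
    lambda x: int(x[0]),
]

def tree_forced(tree, instance, features, target):
    """Does fixing `features` to their values in `instance` force this
    single tree to output `target` for every completion?"""
    free = [i for i in range(len(instance)) if i not in features]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if tree(tuple(x)) != target:
            return False
    return True

def is_majoritary_reason(instance, features, target):
    """Implicant condition for a majoritary reason: the term built from
    `features` of `instance` must force a strict majority of the
    individual trees (minimality is not checked here)."""
    forced = sum(tree_forced(t, instance, features, target) for t in trees)
    return 2 * forced > len(trees)
```

For the instance (1, 1, 1), fixing features 0 and 2 forces two of the three trees to class 1 and is therefore a majoritary reason even though it does not force every tree, which is how such reasons trade soundness guarantees per tree for sparsity.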
(arXiv 2021-08-11)
- Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them to simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state of the art while being interpretable and requiring little supervision.
(arXiv 2021-04-05)
- Algorithms for Causal Reasoning in Probability Trees [13.572630988699572]
We present concrete algorithms for causal reasoning in discrete probability trees.
Our work expands the domain of causal reasoning to a very general class of discrete processes.
(arXiv 2020-10-23)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.