On the Computation of Necessary and Sufficient Explanations
- URL: http://arxiv.org/abs/2203.10451v1
- Date: Sun, 20 Mar 2022 04:39:41 GMT
- Title: On the Computation of Necessary and Sufficient Explanations
- Authors: Adnan Darwiche and Chunxi Ji
- Abstract summary: The complete reason behind a decision is a Boolean formula that characterizes why the decision was made.
In this paper, we refer to the prime implicants of a complete reason as necessary reasons for the decision.
We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time.
- Score: 11.358487655918676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The complete reason behind a decision is a Boolean formula that characterizes
why the decision was made. This recently introduced notion has a number of
applications, which include generating explanations, detecting decision bias
and evaluating counterfactual queries. Prime implicants of the complete reason
are known as sufficient reasons for the decision and they correspond to what is
known as PI explanations and abductive explanations. In this paper, we refer to
the prime implicates of a complete reason as necessary reasons for the
decision. We justify this terminology semantically and show that necessary
reasons correspond to what is known as contrastive explanations. We also study
the computation of complete reasons for multi-class decision trees and graphs
with nominal and numeric features for which we derive efficient, closed-form
complete reasons. We further investigate the computation of shortest necessary
and sufficient reasons for a broad class of complete reasons, which include the
derived closed forms and the complete reasons for Sentential Decision Diagrams
(SDDs). We provide an algorithm which can enumerate their shortest necessary
reasons in output polynomial time. Enumerating shortest sufficient reasons for
this class of complete reasons is hard even for a single reason. For this
problem, we provide an algorithm that appears to be quite efficient as we show
empirically.
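As a rough illustration of the notions in the abstract, the brute-force sketch below enumerates the prime implicants (sufficient reasons) and prime implicates (necessary reasons) of a small Boolean formula standing in for a complete reason. The formula and variable names are invented for illustration, and the exhaustive enumeration is exponential in the number of variables, unlike the efficient algorithms the paper develops.

```python
from itertools import product

VARS = ["x", "y", "z"]

def complete_reason(a):
    # Hypothetical complete reason: (x AND y) OR (x AND NOT z)
    return (a["x"] and a["y"]) or (a["x"] and not a["z"])

def assignments(fixed):
    # All full assignments consistent with the partial assignment `fixed`
    free = [v for v in VARS if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        yield {**fixed, **dict(zip(free, bits))}

def is_implicant(term):
    # A term entails f: every completion of the term satisfies f
    return all(complete_reason(a) for a in assignments(term))

def is_implicate(clause):
    # f entails a clause: every model of f satisfies some literal of it
    return all(any(a[v] == s for v, s in clause.items())
               for a in assignments({}) if complete_reason(a))

def prime(pred, cand):
    # Prime iff dropping any single literal destroys the property
    return all(not pred({v: s for v, s in cand.items() if v != u})
               for u in cand)

def all_subterms():
    # Every nonempty partial assignment (term/clause over VARS)
    for bits in product([None, False, True], repeat=len(VARS)):
        t = {v: b for v, b in zip(VARS, bits) if b is not None}
        if t:
            yield t

sufficient = [t for t in all_subterms()
              if is_implicant(t) and prime(is_implicant, t)]
necessary = [c for c in all_subterms()
             if is_implicate(c) and prime(is_implicate, c)]
```

For this formula the sufficient reasons are the terms x AND y and x AND NOT z, while the necessary reasons are the clauses x and (y OR NOT z), matching the CNF x AND (y OR NOT z).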
Related papers
- Chain-of-Probe: Examing the Necessity and Accuracy of CoT Step-by-Step [81.50681925980135]
We propose a method to probe changes in the mind during the model's reasoning.
By analyzing patterns in mind change, we examine the correctness of the model's reasoning.
Our validation reveals that many responses, although correct in their final answer, contain errors in their reasoning process.
arXiv Detail & Related papers (2024-06-23T15:50:22Z) - A New Class of Explanations for Classifiers with Non-Binary Features [11.358487655918676]
Two types of explanations have been receiving increased attention in the literature when analyzing the decisions made by classifiers.
We show that these explanations can be significantly improved in the presence of non-binary features.
Necessary and sufficient reasons were also shown to be the prime implicates and implicants of the complete reason for a decision.
arXiv Detail & Related papers (2023-04-28T11:05:46Z) - MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z) - Reasoning over Logically Interacted Conditions for Question Answering [113.9231035680578]
We study a more challenging task where answers are constrained by a list of conditions that logically interact.
We propose a new model, TReasoner, for this challenging reasoning task.
TReasoner achieves state-of-the-art performance on two benchmark conditional QA datasets.
arXiv Detail & Related papers (2022-05-25T16:41:39Z) - Trading Complexity for Sparsity in Random Forest Explanations [20.87501058448681]
We introduce majoritary reasons which are prime implicants of a strict majority of decision trees.
Experiments conducted on various datasets reveal the existence of a trade-off between runtime complexity and sparsity.
arXiv Detail & Related papers (2021-08-11T15:19:46Z) - On the Explanatory Power of Decision Trees [20.87501058448681]
We show that the set of all sufficient reasons of minimal size for an instance given a decision tree can be exponentially larger than the size of the input.
We show how the explanatory importance of a feature and the number of sufficient reasons can be obtained via a model counting operation.
Unlike sufficient reasons, the set of all contrastive explanations for an instance given a decision tree can be derived, minimized and counted in polynomial time.
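The notion of contrastive explanation can be illustrated with a brute-force sketch: given a toy decision tree (hypothetical, not taken from the paper) and an instance, it enumerates the subset-minimal sets of features whose alteration can flip the decision. The exhaustive search below is exponential and is only meant to pin down the definition, not to reflect the polynomial-time derivation the abstract refers to.

```python
from itertools import combinations, product

def tree(a):
    # Tiny hypothetical decision tree over binary features f1, f2, f3
    if a["f1"]:
        return 1 if a["f2"] else 0
    return 1 if a["f3"] else 0

instance = {"f1": True, "f2": True, "f3": False}  # classified as 1

def contrastive(instance, predict):
    base = predict(instance)
    feats = list(instance)
    out = []
    for k in range(1, len(feats) + 1):
        for subset in combinations(feats, k):
            # Contrastive explanation: some reassignment restricted to
            # `subset` flips the decision, and no proper subset suffices
            flips = any(
                predict({**instance, **dict(zip(subset, vals))}) != base
                for vals in product([False, True], repeat=k))
            minimal = not any(set(s) < set(subset) for s in out)
            if flips and minimal:
                out.append(subset)
    return out
```

Here flipping f1 alone or f2 alone changes the decision, while f3 alone does not, so the contrastive explanations are {f1} and {f2}.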
arXiv Detail & Related papers (2021-08-11T15:08:11Z) - Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals [72.00815192668193]
Feature importance (FI) estimates are a popular form of explanation, and they are commonly created and evaluated by computing the change in model confidence caused by removing certain input features at test time.
We study several under-explored dimensions of FI-based explanations, providing conceptual and empirical improvements for this form of explanation.
arXiv Detail & Related papers (2021-06-01T20:36:48Z) - Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z) - ExplanationLP: Abductive Reasoning for Explainable Science Question Answering [4.726777092009554]
This paper frames question answering as an abductive reasoning problem.
We construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer.
Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer.
arXiv Detail & Related papers (2020-10-25T14:49:24Z) - From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated. Using models with more than $8000$ variables, checking is done in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.