Complexity of Faceted Explanations in Propositional Abduction
- URL: http://arxiv.org/abs/2507.14962v1
- Date: Sun, 20 Jul 2025 13:50:26 GMT
- Title: Complexity of Faceted Explanations in Propositional Abduction
- Authors: Johannes Schmidt, Mohamed Maizia, Victor Lagerkvist, Johannes K. Fichte
- Abstract summary: Abductive reasoning is a popular non-monotonic paradigm that aims to explain observed symptoms and manifestations. In propositional abduction, we focus on specifying knowledge by a propositional formula. We consider reasoning between decisions and counting, allowing us to understand explanations better.
- Score: 6.674752821781092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abductive reasoning is a popular non-monotonic paradigm that aims to explain observed symptoms and manifestations. It has many applications, such as diagnosis and planning in artificial intelligence and database updates. In propositional abduction, we focus on specifying knowledge by a propositional formula. The computational complexity of tasks in propositional abduction has been systematically characterized - even with detailed classifications for Boolean fragments. Unsurprisingly, the most insightful reasoning problems (counting and enumeration) are computationally highly challenging. Therefore, we consider reasoning between decisions and counting, allowing us to understand explanations better while maintaining favorable complexity. We introduce facets to propositional abduction, which are literals that occur in some explanation (relevant) but not all explanations (dispensable). Reasoning with facets provides a more fine-grained understanding of variability in explanations (heterogeneous). In addition, we consider the distance between two explanations, enabling a better understanding of heterogeneity/homogeneity. We comprehensively analyze facets of propositional abduction in various settings, including an almost complete characterization in Post's framework.
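To make the notions of explanation and facet concrete, here is a minimal brute-force Python sketch (not taken from the paper) for a toy abduction instance: the knowledge base, hypothesis set, and helper names (`is_explanation`, `facets`, the weather variables) are illustrative assumptions, and for simplicity every consistent, entailing hypothesis set is treated as an explanation rather than only the subset-minimal ones.

```python
from itertools import product, chain, combinations

# Toy propositional abduction instance (illustrative assumption, not from the paper):
# knowledge base T, hypotheses H, manifestation m.
# An explanation is a subset E of H such that T ∪ E is satisfiable and T ∪ E entails m.

VARS = ["rain", "sprinkler", "wet", "slippery"]

def T(v):
    # T: (rain -> wet) ∧ (sprinkler -> wet) ∧ (wet -> slippery)
    return ((not v["rain"] or v["wet"])
            and (not v["sprinkler"] or v["wet"])
            and (not v["wet"] or v["slippery"]))

HYPOTHESES = ["rain", "sprinkler"]
MANIFESTATION = "wet"

def models(constraint):
    """All assignments over VARS that satisfy the given constraint."""
    for bits in product([False, True], repeat=len(VARS)):
        v = dict(zip(VARS, bits))
        if constraint(v):
            yield v

def is_explanation(E):
    """E explains the manifestation iff T ∪ E is consistent and entails it."""
    mods = list(models(lambda v: T(v) and all(v[h] for h in E)))
    return bool(mods) and all(v[MANIFESTATION] for v in mods)

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

explanations = [set(E) for E in powerset(HYPOTHESES) if is_explanation(E)]

# Facets: hypotheses that occur in some explanation (relevant) but not in all (dispensable).
in_some = set().union(*explanations) if explanations else set()
in_all = set.intersection(*explanations) if explanations else set()
facets = in_some - in_all

print("explanations:", explanations)  # here: {rain}, {sprinkler}, {rain, sprinkler}
print("facets:", facets)              # both 'rain' and 'sprinkler' are facets

# One simple notion of distance between explanations (size of the symmetric
# difference), illustrating how heterogeneity of explanations can be quantified.
def distance(E1, E2):
    return len(E1 ^ E2)
```

In this example both hypotheses are facets: each appears in some explanation but neither appears in all of them, which is exactly the kind of variability the paper's facet reasoning is meant to expose without enumerating or counting all explanations.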
Related papers
- A Fine-Grained Complexity View on Propositional Abduction -- Algorithms and Lower Bounds [6.6362553223890535]
We analyze the complexity of intractable abduction problems under the seemingly overlooked but natural parameter n. We obtain several positive results for $\Sigma^P_2$-complete as well as NP- and coNP-complete fragments. We complement this with lower bounds and, for many fragments, rule out improvements under the (strong) exponential-time hypothesis.
arXiv Detail & Related papers (2025-05-15T11:56:19Z) - Ranking Counterfactual Explanations [7.066382982173528]
Explanations can address two key questions: "Why this outcome?" (factual) and "Why not another?" (counterfactual). This paper proposes a formal definition of counterfactual explanations, proves some properties they satisfy, and examines their relationship with factual explanations.
arXiv Detail & Related papers (2025-03-20T03:04:05Z) - Dialogue-based Explanations for Logical Reasoning using Structured Argumentation [0.06138671548064355]
We identify structural weaknesses of the state-of-the-art and propose a generic argumentation-based approach to address these problems. Our work provides dialogue models as dialectic-proof procedures to compute and explain a query answer. This allows us to construct dialectical proof trees as explanations, which are more expressive and arguably more intuitive than existing explanation formalisms.
arXiv Detail & Related papers (2025-02-16T22:26:18Z) - A Semantic Approach to Decidability in Epistemic Planning (Extended
Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
arXiv Detail & Related papers (2023-07-28T11:26:26Z) - Explanation Selection Using Unlabeled Data for Chain-of-Thought
Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z) - MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z) - Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them to simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state-of-the-art while being interpretable and requiring little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z) - ExplanationLP: Abductive Reasoning for Explainable Science Question
Answering [4.726777092009554]
This paper frames question answering as an abductive reasoning problem.
We construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer.
Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer.
arXiv Detail & Related papers (2020-10-25T14:49:24Z) - The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal
Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z) - A framework for step-wise explaining how to solve constraint
satisfaction problems [21.96171133035504]
We study the problem of explaining the inference steps that one can take during propagation, in a way that is easy to interpret for a person.
Thereby, we aim to give the constraint solver explainable agency, which can help in building trust in the solver.
arXiv Detail & Related papers (2020-06-11T11:35:41Z) - From Checking to Inference: Actual Causality Computations as
Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated. On models with more than $8000$ variables, checking is performed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)