Explainability via Short Formulas: the Case of Propositional Logic with
Implementation
- URL: http://arxiv.org/abs/2209.01403v1
- Date: Sat, 3 Sep 2022 11:47:25 GMT
- Title: Explainability via Short Formulas: the Case of Propositional Logic with
Implementation
- Authors: Reijo Jaakkola, Tomi Janhunen, Antti Kuusisto, Masood Feyzbakhsh
Rankooh, Miikka Vilander
- Abstract summary: We give a number of related definitions of explainability in a very general setting.
Our main interest is the so-called special explanation problem which aims to explain the truth value of an input formula in an input model.
- Score: 2.583686260808494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We conceptualize explainability in terms of logic and formula size, giving a
number of related definitions of explainability in a very general setting. Our
main interest is the so-called special explanation problem which aims to
explain the truth value of an input formula in an input model. The explanation
is a formula of minimal size that (1) agrees with the input formula on the
input model and (2) transmits the involved truth value to the input formula
globally, i.e., on every model. As an important example case, we study
propositional logic in this setting and show that the special explainability
problem is complete for the second level of the polynomial hierarchy. We also
provide an implementation of this problem in answer set programming and
investigate its capacity in relation to explaining answers to the n-queens and
dominating set problems.
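The paper's own implementation uses answer set programming; purely as an illustration of the definition above, the following is a hypothetical brute-force Python sketch (not the authors' code) of the special explanation problem for propositional logic over a small variable set. It searches for a minimal-size formula psi that (1) agrees with the input formula phi on the input model and (2) transmits phi's truth value globally: if phi is true in the model, every model of psi satisfies phi; if phi is false, phi entails psi. The formula representation and size measure (node count) are assumptions for this sketch.

```python
from itertools import product

# Hypothetical sketch of the "special explanation" problem from the abstract:
# find a minimal-size formula psi that (1) agrees with phi on the model m and
# (2) transmits phi's truth value to phi globally (on every model).

VARS = ["p", "q", "r"]

def models():
    """All assignments over VARS."""
    for bits in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def formulas(size):
    """Enumerate formulas with exactly `size` nodes (atoms and connectives)."""
    if size == 1:
        for v in VARS:
            yield ("var", v)
        return
    for sub in formulas(size - 1):
        yield ("not", sub)
    for left_size in range(1, size - 1):
        for left in formulas(left_size):
            for right in formulas(size - 1 - left_size):
                yield ("and", left, right)
                yield ("or", left, right)

def evaluate(f, m):
    tag = f[0]
    if tag == "var":
        return m[f[1]]
    if tag == "not":
        return not evaluate(f[1], m)
    if tag == "and":
        return evaluate(f[1], m) and evaluate(f[2], m)
    return evaluate(f[1], m) or evaluate(f[2], m)

def entails(a, b):
    """True iff every model of a is a model of b (checked by truth table)."""
    return all(evaluate(b, m) for m in models() if evaluate(a, m))

def special_explanation(phi, m, max_size=7):
    """Smallest psi agreeing with phi on m and transmitting its truth value."""
    v = evaluate(phi, m)
    for size in range(1, max_size + 1):
        for psi in formulas(size):
            if evaluate(psi, m) != v:
                continue  # condition (1): agree with phi on the input model
            # condition (2): transmit the truth value globally
            if (entails(psi, phi) if v else entails(phi, psi)):
                return psi
    return None

# Example: explain why p and (q or r) is true when p, q are true and r is false.
phi = ("and", ("var", "p"), ("or", ("var", "q"), ("var", "r")))
m = {"p": True, "q": True, "r": False}
print(special_explanation(phi, m))  # → ('and', ('var', 'p'), ('var', 'q'))
```

In the example, p and q is returned: it is true in the input model and entails the input formula on every model, and no smaller formula does both. This exhaustive search is exponential, which is consistent with the abstract's result that the problem is complete for the second level of the polynomial hierarchy.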
Related papers
- Complexity of Faceted Explanations in Propositional Abduction [6.674752821781092]
Abductive reasoning is a popular non-monotonic paradigm that aims to explain observed symptoms and manifestations.
In propositional abduction, we focus on specifying knowledge by a propositional formula.
We consider reasoning between decisions and counting, allowing us to understand explanations better.
arXiv Detail & Related papers (2025-07-20T13:50:26Z) - Why this and not that? A Logic-based Framework for Contrastive Explanations [4.3871352596331255]
We define several canonical problems related to contrastive explanations, each answering a question of the form ''Why P but not Q?''
The problems compute causes for both P and Q, explicitly comparing their differences.
We show, inter alia, that our framework captures a cardinality-minimal version of existing contrastive explanations in the literature.
arXiv Detail & Related papers (2025-07-11T09:55:04Z) - The Limits of AI Explainability: An Algorithmic Information Theory Approach [4.759142872591625]
This paper establishes a theoretical foundation for understanding the fundamental limits of AI explainability through algorithmic information theory.
We formalize explainability as the approximation of complex models by simpler ones, quantifying both approximation error and explanation complexity using Kolmogorov complexity.
Results highlight considerations likely to be relevant to the design, evaluation, and oversight of explainable AI systems.
arXiv Detail & Related papers (2025-04-29T11:58:37Z) - Are We Merely Justifying Results ex Post Facto? Quantifying Explanatory Inversion in Post-Hoc Model Explanations [87.68633031231924]
Post-hoc explanation methods provide interpretation by attributing predictions to input features.
Do these explanations unintentionally reverse the natural relationship between inputs and outputs?
We propose Inversion Quantification (IQ), a framework that quantifies the degree to which explanations rely on outputs and deviate from faithful input-output relationships.
arXiv Detail & Related papers (2025-04-11T19:00:12Z) - Explaining Explanations in Probabilistic Logic Programming [0.0]
In most approaches, the system is considered a black box, making it difficult to generate appropriate explanations.
We consider a setting where models are transparent: probabilistic logic programming (PLP), a paradigm that combines logic programming for knowledge representation and probability to model uncertainty.
In this paper we present an approach to explaining explanations, based on defining a new query-driven inference mechanism for PLP in which proofs are labeled with "choice expressions", a compact and easy-to-manipulate representation for sets of choices.
arXiv Detail & Related papers (2024-01-30T14:27:37Z) - On Logic-Based Explainability with Partially Specified Inputs [1.7587442088965224]
Missing data is often addressed when training machine learning (ML) models.
But missing data also needs to be addressed when deciding predictions and when explaining those predictions.
This paper studies the computation of logic-based explanations in the presence of partially specified inputs.
arXiv Detail & Related papers (2023-06-27T21:09:25Z) - MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z) - Reasoning over Logically Interacted Conditions for Question Answering [113.9231035680578]
We study a more challenging task where answers are constrained by a list of conditions that logically interact.
We propose a new model, TReasoner, for this challenging reasoning task.
TReasoner achieves state-of-the-art performance on two benchmark conditional QA datasets.
arXiv Detail & Related papers (2022-05-25T16:41:39Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in
NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - Quantification and Aggregation over Concepts of the Ontology [0.0]
We argue that in some KR applications, we want to quantify over sets of concepts formally represented by symbols in the vocabulary.
We present an extension of first-order logic to support such abstractions, and show that it allows writing expressions of knowledge that are elaboration tolerant.
arXiv Detail & Related papers (2022-02-02T07:49:23Z) - Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z) - ExplanationLP: Abductive Reasoning for Explainable Science Question
Answering [4.726777092009554]
This paper frames question answering as an abductive reasoning problem.
We construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer.
Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer.
arXiv Detail & Related papers (2020-10-25T14:49:24Z) - The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal
Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z) - Foundations of Reasoning with Uncertainty via Real-valued Logics [70.43924776071616]
We give a sound and strongly complete axiomatization that can be parametrized to cover essentially every real-valued logic.
Our class of sentences is very rich, and each sentence describes a set of possible real values for a collection of formulas of the real-valued logic.
arXiv Detail & Related papers (2020-08-06T02:13:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.