Axiomatic Foundations of Counterfactual Explanations
- URL: http://arxiv.org/abs/2602.04028v1
- Date: Tue, 03 Feb 2026 21:34:18 GMT
- Title: Axiomatic Foundations of Counterfactual Explanations
- Authors: Leila Amgoud, Martin Cooper
- Abstract summary: Counterfactuals have emerged as one of the most compelling forms of explanation. Most existing explainers focus on a single type of counterfactual and are restricted to local explanations. This paper introduces an axiomatic framework built on a set of desirable properties for counterfactual explainers.
- Score: 5.285396202883409
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explaining autonomous and intelligent systems is critical in order to improve trust in their decisions. Counterfactuals have emerged as one of the most compelling forms of explanation. They address "why not" questions by revealing how decisions could be altered. Despite the growing literature, most existing explainers focus on a single type of counterfactual and are restricted to local explanations, focusing on individual instances. There has been no systematic study of alternative counterfactual types, nor of global counterfactuals that shed light on a system's overall reasoning process. This paper addresses the two gaps by introducing an axiomatic framework built on a set of desirable properties for counterfactual explainers. It proves impossibility theorems showing that no single explainer can satisfy certain axiom combinations simultaneously, and fully characterizes all compatible sets. Representation theorems then establish five one-to-one correspondences between specific subsets of axioms and the families of explainers that satisfy them. Each family gives rise to a distinct type of counterfactual explanation, uncovering five fundamentally different types of counterfactuals. Some of these correspond to local explanations, while others capture global explanations. Finally, the framework situates existing explainers within this taxonomy, formally characterizes their behavior, and analyzes the computational complexity of generating such explanations.
Related papers
- Reasoning is about giving reasons [55.56111618153049]
We show that we can identify and extract the logical structure of natural language arguments in three popular reasoning datasets with high accuracy. Our approach supports all forms of reasoning that depend on the logical structure of the natural language argument.
arXiv Detail & Related papers (2025-08-20T07:26:53Z)
- Complexity of Faceted Explanations in Propositional Abduction [6.674752821781092]
Abductive reasoning is a popular non-monotonic paradigm that aims to explain observed symptoms and manifestations. In propositional abduction, we focus on specifying knowledge by a propositional formula. We consider reasoning between decisions and counting, allowing us to understand explanations better.
arXiv Detail & Related papers (2025-07-20T13:50:26Z)
- Ranking Counterfactual Explanations [7.066382982173528]
Explanations can address two key questions: "Why this outcome?" (factual) and "Why not another?" (counterfactual). This paper proposes a formal definition of counterfactual explanations, proves some properties they satisfy, and examines their relationship with factual explanations.
arXiv Detail & Related papers (2025-03-20T03:04:05Z)
- Axiomatic Characterisations of Sample-based Explainers [8.397730500554047]
We scrutinize explainers that generate feature-based explanations from samples or datasets.
We identify the entire family of explainers that satisfy two key properties which are compatible with all the others.
We introduce the first (broad family of) explainers that guarantee the existence of explanations and irrefutable global consistency.
arXiv Detail & Related papers (2024-08-09T07:10:07Z)
- Formal Proofs as Structured Explanations: Proposing Several Tasks on Explainable Natural Language Inference [0.16317061277457]
We propose a reasoning framework that can model the reasoning process underlying natural language inferences. The framework is based on the semantic tableau method, a well-studied proof system in formal logic. We show how it can be used to define natural language reasoning tasks with structured explanations.
arXiv Detail & Related papers (2023-11-15T01:24:09Z)
- HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- Zero-Shot Classification by Logical Reasoning on Natural Language Explanations [56.42922904777717]
We propose the framework CLORE (Classification by LOgical Reasoning on Explanations).
CLORE parses explanations into logical structures and then explicitly reasons along these structures on the input to produce a classification score.
We also demonstrate that our framework can be extended to zero-shot classification on visual modality.
arXiv Detail & Related papers (2022-11-07T01:05:11Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
- Foundations of Reasoning with Uncertainty via Real-valued Logics [70.43924776071616]
We give a sound and strongly complete axiomatization that can be parametrized to cover essentially every real-valued logic.
Our class of sentences is very rich, and each sentence describes a set of possible real values for a collection of formulas of the real-valued logic.
arXiv Detail & Related papers (2020-08-06T02:13:11Z)
- Adequate and fair explanations [12.33259114006129]
We focus upon the second school of exact explanations with a rigorous logical foundation.
With counterfactual explanations, many of the assumptions needed to provide a complete explanation are left implicit.
We explore how to move from local partial explanations to what we call complete local explanations and then to global ones.
arXiv Detail & Related papers (2020-01-21T14:42:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.