ReasonX: Declarative Reasoning on Explanations
- URL: http://arxiv.org/abs/2602.23810v1
- Date: Fri, 27 Feb 2026 08:50:02 GMT
- Title: ReasonX: Declarative Reasoning on Explanations
- Authors: Laura State, Salvatore Ruggieri, Franco Turini
- Abstract summary: ReasonX is an explanation tool based on expressions (or, queries) in a closed algebra of operators over theories of linear constraints. Users can express background or common sense knowledge as linear constraints.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explaining opaque Machine Learning (ML) models has become an increasingly important challenge. However, current eXplainable AI (XAI) methods suffer several shortcomings, including insufficient abstraction, limited user interactivity, and inadequate integration of symbolic knowledge. We propose ReasonX, an explanation tool based on expressions (or, queries) in a closed algebra of operators over theories of linear constraints. ReasonX provides declarative and interactive explanations for decision trees, which may represent the ML models under analysis or serve as global or local surrogate models for any black-box predictor. Users can express background or common sense knowledge as linear constraints. This allows for reasoning at multiple levels of abstraction, ranging from fully specified examples to under-specified or partially constrained ones. ReasonX leverages Mixed-Integer Linear Programming (MILP) to reason over the features of factual and contrastive instances. We present here the architecture of ReasonX, which consists of a Python layer, closer to the user, and a Constraint Logic Programming (CLP) layer, which implements a meta-interpreter of the query algebra. The capabilities of ReasonX are demonstrated through qualitative examples, and compared to other XAI tools through quantitative experiments.
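The abstract's core mechanism, reasoning over linear constraints to find a closest contrastive instance, can be sketched as a small optimisation problem. The sketch below is an illustrative reconstruction, not ReasonX's actual API: the function name, the use of scipy, and the L1-distance objective are assumptions. ReasonX proper uses MILP (integer variables handle discrete features and the choice among leaves), whereas this sketch fixes one target decision-tree leaf, whose region is a conjunction of linear constraints, and therefore stays purely linear.

```python
# Minimal sketch (assumed names, not ReasonX's API): find the point
# closest to a factual instance x0 (in L1 distance) that satisfies the
# linear constraints of a target decision-tree leaf, plus optional
# user-supplied background-knowledge constraints.
import numpy as np
from scipy.optimize import linprog

def closest_contrastive(x0, leaf_A, leaf_b, bg_A=None, bg_b=None):
    """Minimise ||x - x0||_1  s.t.  leaf_A @ x <= leaf_b  (target leaf)
    and optionally bg_A @ x <= bg_b (background knowledge).
    Decision variables are [x (n), d (n)], with d_i >= |x_i - x0_i|
    enforced by the two standard linearisation inequalities."""
    n = len(x0)
    c = np.concatenate([np.zeros(n), np.ones(n)])  # minimise sum(d_i)
    # d_i >= x_i - x0_i    <=>    x_i - d_i <= x0_i
    # d_i >= x0_i - x_i    <=>   -x_i - d_i <= -x0_i
    A_abs = np.block([[np.eye(n), -np.eye(n)],
                      [-np.eye(n), -np.eye(n)]])
    b_abs = np.concatenate([x0, -x0])
    A_ub = np.vstack([A_abs, np.hstack([leaf_A, np.zeros_like(leaf_A)])])
    b_ub = np.concatenate([b_abs, leaf_b])
    if bg_A is not None:
        A_ub = np.vstack([A_ub, np.hstack([bg_A, np.zeros_like(bg_A)])])
        b_ub = np.concatenate([b_ub, bg_b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (2 * n))
    return res.x[:n] if res.success else None

# Factual instance x0 = (0.2, 0.8); the target leaf requires x1 >= 0.5
# (encoded as -x1 <= -0.5).  The closest contrastive example moves only
# the first feature: x_cf is approximately [0.5, 0.8].
x_cf = closest_contrastive(np.array([0.2, 0.8]),
                           leaf_A=np.array([[-1.0, 0.0]]),
                           leaf_b=np.array([-0.5]))
```

Background knowledge plugs in the same way: for instance, a constraint tying two features together (`x1 <= x2`) is just one more row of `bg_A`, `bg_b`, which is what lets the tool reason over under-specified or partially constrained instances.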
Related papers
- OrLog: Resolving Complex Queries with LLMs and Probabilistic Reasoning [51.58235452818926]
We introduce OrLog, a neuro-symbolic retrieval framework that decouples predicate-level plausibility estimation from logical reasoning. A large language model (LLM) provides plausibility scores for atomic predicates in one decoding-free forward pass, from which a probabilistic reasoning engine derives the posterior probability of query satisfaction.
arXiv Detail & Related papers (2026-01-30T15:31:58Z) - ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning [92.76959707441954]
We introduce ZebraLogic, a comprehensive evaluation framework for assessing LLM reasoning performance. ZebraLogic enables the generation of puzzles with controllable and quantifiable complexity. Our results reveal a significant decline in accuracy as problem complexity grows.
arXiv Detail & Related papers (2025-02-03T06:44:49Z) - Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features [19.15360328688008]
We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features.
The framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable.
arXiv Detail & Related papers (2024-08-30T10:52:18Z) - Why do explanations fail? A typology and discussion on failures in XAI [5.366368559381279]
We argue that the resulting harms arise from a complex overlap of multiple failures in XAI. We propose a typological framework that helps reveal the nuanced complexities of explanation failures.
arXiv Detail & Related papers (2024-05-22T09:32:24Z) - Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation [19.22391463965126]
Some uses of machine learning (ML) involve high-stakes and safety-critical applications. This paper investigates novel algorithms for scaling up the performance of logic-based explainers.
arXiv Detail & Related papers (2024-05-14T03:42:33Z) - Pyreal: A Framework for Interpretable ML Explanations [51.14710806705126]
Pyreal is a system for generating a variety of interpretable machine learning explanations.
Pyreal converts data and explanations between the feature spaces expected by the model, relevant explanation algorithms, and human users.
Our studies demonstrate that Pyreal generates more useful explanations than existing systems.
arXiv Detail & Related papers (2023-12-20T15:04:52Z) - Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z) - Declarative Reasoning on Explanations Using Constraint Logic Programming [12.039469573641217]
REASONX is an explanation method based on Constraint Logic Programming (CLP).
We present here the architecture of REASONX, which consists of a Python layer, closer to the user, and a CLP layer.
REASONX's core execution engine is a Prolog meta-program with declarative semantics in terms of logic theories.
arXiv Detail & Related papers (2023-09-01T12:31:39Z) - Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables [0.8602553195689513]
In recent years, the community of 'explainable artificial intelligence' (XAI) has created a vast body of methods to bridge a perceived gap between model 'complexity' and 'interpretability'.
We show that the majority of the studied approaches will attribute non-zero importance to a non-class-related suppressor feature in the presence of correlated noise.
arXiv Detail & Related papers (2023-06-02T11:41:19Z) - Reason to explain: Interactive contrastive explanations (REASONX) [5.156484100374058]
We present REASONX, an explanation tool based on Constraint Logic Programming (CLP).
REASONX provides interactive contrastive explanations that can be augmented by background knowledge.
It computes factual and contrastive decision rules, as well as closest contrastive examples.
arXiv Detail & Related papers (2023-05-29T15:13:46Z) - OmniXAI: A Library for Explainable AI [98.07381528393245]
We introduce OmniXAI, an open-source Python library for eXplainable AI (XAI).
It offers omni-way explainable AI capabilities and various interpretable machine learning techniques.
For practitioners, the library provides an easy-to-use unified interface to generate the explanations for their applications.
arXiv Detail & Related papers (2022-06-01T11:35:37Z) - CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to the current methods in XAI that generate explanations as a single shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.