Reason to explain: Interactive contrastive explanations (REASONX)
- URL: http://arxiv.org/abs/2305.18143v1
- Date: Mon, 29 May 2023 15:13:46 GMT
- Title: Reason to explain: Interactive contrastive explanations (REASONX)
- Authors: Laura State, Salvatore Ruggieri and Franco Turini
- Abstract summary: We present REASONX, an explanation tool based on Constraint Logic Programming (CLP)
REASONX provides interactive contrastive explanations that can be augmented by background knowledge.
It computes factual and contrastive decision rules, as well as closest contrastive examples.
- Score: 5.156484100374058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many high-performing machine learning models are not interpretable. As they
are increasingly used in decision scenarios that can critically affect
individuals, it is necessary to develop tools to better understand their
outputs. Popular explanation methods include contrastive explanations. However,
they suffer several shortcomings, among others an insufficient incorporation of
background knowledge, and a lack of interactivity. While (dialogue-like)
interactivity is important to better communicate an explanation, background
knowledge has the potential to significantly improve their quality, e.g., by
adapting the explanation to the needs of the end-user. To close this gap, we
present REASONX, an explanation tool based on Constraint Logic Programming
(CLP). REASONX provides interactive contrastive explanations that can be
augmented by background knowledge, and allows operating under a setting of
under-specified information, leading to increased flexibility in the provided
explanations. REASONX computes factual and contrastive decision rules, as well
as closest contrastive examples. It provides explanations for decision trees,
which can be the ML models under analysis, or global/local surrogate models of
any ML model. While the core part of REASONX is built on CLP, we also provide a
program layer that allows the explanations to be computed via Python, making the
tool accessible to a wider audience. We illustrate the capability of REASONX on
a synthetic data set, and on a well-developed example in the credit domain.
In both cases, we can show how REASONX can be flexibly used and tailored to the
needs of the user.
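To make the abstract's output concrete, the following is a minimal Python sketch, not REASONX's actual CLP-based interface: it illustrates one explanation type mentioned above, the closest contrastive example for a decision-tree prediction, by projecting the factual instance onto the nearest leaf region that predicts the desired class. The toy data, feature meanings, and all helper names are illustrative assumptions.

```python
# Minimal sketch, NOT REASONX's actual API: closest contrastive example
# for a decision tree, obtained by projecting a factual instance onto the
# nearest leaf region that predicts the desired (contrastive) class.
# Toy data, feature meanings, and helper names are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))          # two synthetic features
y = (X[:, 0] + X[:, 1] > 1).astype(int)       # toy "credit approved" label
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def leaf_boxes(clf):
    """Return, for every leaf, its predicted class and axis-aligned box."""
    t, boxes = clf.tree_, []
    eps = 1e-6                                # keep boxes strictly on the ">" side

    def walk(node, lo, hi):
        if t.children_left[node] == -1:       # leaf node
            boxes.append((int(np.argmax(t.value[node])), lo, hi))
            return
        f, thr = t.feature[node], t.threshold[node]
        left_hi = hi.copy(); left_hi[f] = min(left_hi[f], thr)
        walk(t.children_left[node], lo.copy(), left_hi)            # x[f] <= thr
        right_lo = lo.copy(); right_lo[f] = max(right_lo[f], thr + eps)
        walk(t.children_right[node], right_lo, hi.copy())          # x[f] >  thr

    n = clf.n_features_in_
    walk(0, np.full(n, -np.inf), np.full(n, np.inf))
    return boxes

def closest_contrastive(x, target, clf):
    """Closest point (L2) to x inside any leaf box predicting `target`."""
    best, best_dist = None, np.inf
    for cls, lo, hi in leaf_boxes(clf):
        if cls != target:
            continue
        proj = np.clip(x, lo, hi)             # projection onto the box
        dist = np.linalg.norm(proj - x)
        if dist < best_dist:
            best, best_dist = proj, dist
    return best

x_factual = np.array([0.2, 0.3])              # expected class 0 on this toy data
x_contrastive = closest_contrastive(x_factual, target=1, clf=clf)
print(clf.predict([x_factual]), x_contrastive, clf.predict([x_contrastive]))
```

REASONX itself expresses such leaf constraints, together with background-knowledge and under-specification constraints, in CLP rather than in plain Python; the sketch only mirrors the geometric intuition behind the closest contrastive example.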
Related papers
- Incremental XAI: Memorable Understanding of AI with Incremental Explanations [13.460427339680168]
We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge through incrementally receiving more details.
We introduce Incremental XAI to automatically partition explanations for general and atypical instances.
Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases.
arXiv Detail & Related papers (2024-04-10T04:38:17Z) - Pyreal: A Framework for Interpretable ML Explanations [51.14710806705126]
Pyreal is a system for generating a variety of interpretable machine learning explanations.
Pyreal converts data and explanations between the feature spaces expected by the model, relevant explanation algorithms, and human users.
Our studies demonstrate that Pyreal generates more useful explanations than existing systems.
arXiv Detail & Related papers (2023-12-20T15:04:52Z) - XplainLLM: A QA Explanation Dataset for Understanding LLM Decision-Making [13.928951741632815]
Large Language Models (LLMs) have recently made impressive strides in natural language understanding tasks.
In this paper, we look into bringing some transparency to this process by introducing a new explanation dataset.
Our dataset includes 12,102 question-answer-explanation (QAE) triples.
arXiv Detail & Related papers (2023-11-15T00:34:28Z) - Declarative Reasoning on Explanations Using Constraint Logic Programming [12.039469573641217]
REASONX is an explanation method based on Constraint Logic Programming (CLP)
We present here the architecture of REASONX, which consists of a Python layer, closer to the user, and a CLP layer.
REASONX's core execution engine is a Prolog meta-program with declarative semantics in terms of logic theories.
arXiv Detail & Related papers (2023-09-01T12:31:39Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Robust Ante-hoc Graph Explainer using Bilevel Optimization [0.7999703756441758]
We propose RAGE, a novel and flexible ante-hoc explainer for graph neural networks.
RAGE can effectively identify molecular substructures that contain the full information needed for prediction.
Our experiments on various molecular classification tasks show that RAGE explanations are better than existing post-hoc and ante-hoc approaches.
arXiv Detail & Related papers (2023-05-25T05:50:38Z) - Explanation as a process: user-centric construction of multi-level and multi-modal explanations [0.34410212782758043]
We present a process-based approach that combines multi-level and multi-modal explanations.
We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model.
arXiv Detail & Related papers (2021-10-07T19:26:21Z) - Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z) - Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? [86.60613602337246]
We introduce a leakage-adjusted simulatability (LAS) metric for evaluating NL explanations.
LAS measures how well explanations help an observer predict a model's output, while controlling for how explanations can directly leak the output.
We frame explanation generation as a multi-agent game and optimize explanations for simulatability while penalizing label leakage.
arXiv Detail & Related papers (2020-10-08T16:59:07Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL)
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
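For reference, the quantity referred to in the summary above presumably instantiates the standard definition of conditional mutual information; a minimal LaTeX sketch follows, where the roles of the explanation e, the prediction ŷ, and the user's background knowledge u as the conditioning variable are assumptions made here for illustration, not statements taken from the paper.

```latex
% Standard conditional mutual information; treating e (explanation),
% \hat{y} (prediction), and u (user knowledge) as random variables is
% an illustrative assumption, not a claim about the paper's notation.
\[
  I\!\left(e;\hat{y}\mid u\right)
  = \mathbb{E}_{p(e,\hat{y},u)}\!\left[
      \log \frac{p(e,\hat{y}\mid u)}{p(e\mid u)\,p(\hat{y}\mid u)}
    \right]
\]
```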
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.