Generating Context-Aware Contrastive Explanations in Rule-based Systems
- URL: http://arxiv.org/abs/2402.13000v1
- Date: Tue, 20 Feb 2024 13:31:12 GMT
- Title: Generating Context-Aware Contrastive Explanations in Rule-based Systems
- Authors: Lars Herbold, Mersedeh Sadeghi, Andreas Vogelsang
- Abstract summary: We present an approach that predicts a potential contrastive event in situations where a user asks for an explanation in the context of rule-based systems.
Our approach analyzes a situation that needs to be explained and then selects the most likely rule a user may have expected instead of what the user has observed.
This contrastive event is then used to create a contrastive explanation that is presented to the user.
- Score: 2.497044167437633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human explanations are often contrastive, meaning that they do not answer the
indeterminate "Why?" question, but instead "Why P, rather than Q?".
Automatically generating contrastive explanations is challenging because the
contrastive event (Q) represents the expectation of a user in contrast to what
happened. We present an approach that predicts a potential contrastive event in
situations where a user asks for an explanation in the context of rule-based
systems. Our approach analyzes a situation that needs to be explained and then
selects the most likely rule a user may have expected instead of what the user
has observed. This contrastive event is then used to create a contrastive
explanation that is presented to the user. We have implemented the approach as
a plugin for a home automation system and demonstrate its feasibility in four
test scenarios.
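To make the idea more concrete, the following is a minimal, hypothetical Python sketch of a contrastive-rule selection step of this kind: candidate rules are scored by how closely their conditions match the observed context, and the best-matching rule that did not fire stands in for the contrastive event Q ("Why P rather than Q?"). The rule format, scoring heuristic, and all names (Rule, select_contrastive_rule, contrastive_explanation) are illustrative assumptions, not the authors' implementation or the home automation plugin's API.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str          # human-readable rule name
    conditions: dict   # e.g. {"time": "night", "motion": "yes"}
    action: str        # e.g. "dim the hallway light"

def condition_match(rule: Rule, context: dict) -> float:
    """Fraction of a rule's conditions that the observed context satisfies."""
    if not rule.conditions:
        return 0.0
    hits = sum(1 for k, v in rule.conditions.items() if context.get(k) == v)
    return hits / len(rule.conditions)

def select_contrastive_rule(rules, fired_rule, context):
    """Pick the unfired rule whose conditions come closest to the context.

    This rule plays the role of the contrastive event Q.
    """
    candidates = [r for r in rules if r is not fired_rule]
    return max(candidates, key=lambda r: condition_match(r, context), default=None)

def contrastive_explanation(fired_rule, expected_rule, context) -> str:
    """Template a 'P rather than Q' explanation from the unmet conditions of Q."""
    unmet = {k: v for k, v in expected_rule.conditions.items() if context.get(k) != v}
    unmet_text = ", ".join(f"{k} is not '{v}'" for k, v in unmet.items())
    return (f"'{fired_rule.action}' happened instead of "
            f"'{expected_rule.action}' because {unmet_text}.")

# Toy home-automation example (hypothetical rules and context).
rules = [
    Rule("night light", {"time": "night", "motion": "yes"}, "dim the hallway light"),
    Rule("bright light", {"time": "day", "motion": "yes"}, "turn the hallway light on fully"),
]
context = {"time": "night", "motion": "yes"}
fired = rules[0]
expected = select_contrastive_rule(rules, fired, context)
print(contrastive_explanation(fired, expected, context))
# -> "'dim the hallway light' happened instead of
#     'turn the hallway light on fully' because time is not 'day'."
```

In this toy run, the night-time dimming rule fired (P), the day-time rule is selected as the most likely expected alternative (Q), and its single unmet condition becomes the body of the contrastive explanation.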
Related papers
- Explaining the (Not So) Obvious: Simple and Fast Explanation of STAN, a Next Point of Interest Recommendation System [0.5796859155047135]
Some machine learning methods are inherently explainable, and thus are not completely black box.
This enables developers to make sense of the output without developing a complex and expensive explainability technique.
We demonstrate this philosophy/paradigm in STAN, a next Point of Interest recommendation system based on collaborative filtering and sequence prediction.
arXiv Detail & Related papers (2024-10-04T18:14:58Z) - Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z) - On Interactive Explanations as Non-Monotonic Reasoning [10.616061367794385]
We treat explanations as objects that can be subject to reasoning.
We present a formal model of the interactive scenario between user and system.
This allows us: 1) to resolve certain inconsistencies between explanations, for example via a specificity relation; 2) to consider properties from the non-monotonic reasoning literature and discuss their desirability.
arXiv Detail & Related papers (2022-07-30T22:08:35Z) - The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans--those that are logically consistent with the input--usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
arXiv Detail & Related papers (2022-05-06T17:57:58Z) - Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z) - Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z) - Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide more accurate and finer-grained interpretability of a model's decisions.
arXiv Detail & Related papers (2021-03-02T00:36:45Z) - Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA [22.76153284711981]
We study whether explanations help users correctly decide when to accept or reject an ODQA system's answer.
Our results show that explanations derived from retrieved evidence passages can outperform strong baselines (calibrated confidence) across modalities.
We show common failure cases of current explanations, emphasize end-to-end evaluation of explanations, and caution against evaluating them in proxy modalities that are different from deployment.
arXiv Detail & Related papers (2020-12-30T08:19:02Z) - This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Toward Machine-Guided, Human-Initiated Explanatory Interactive Learning [9.887110107270196]
Recent work has demonstrated the promise of combining local explanations with active learning for understanding and supervising black-box models.
Here we show that, under specific conditions, these algorithms may misrepresent the quality of the model being learned.
We address this narrative bias by introducing explanatory guided learning.
arXiv Detail & Related papers (2020-07-20T11:51:31Z) - Explanations of Black-Box Model Predictions by Contextual Importance and Utility [1.7188280334580195]
We present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations easily understandable by experts as well as novice users.
This method explains the prediction results without transforming the model into an interpretable one.
We show the utility of explanations in a car selection example and in Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) as well as contrastive ones.
arXiv Detail & Related papers (2020-05-30T06:49:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.