Features of Explainability: How users understand counterfactual and
causal explanations for categorical and continuous features in XAI
- URL: http://arxiv.org/abs/2204.10152v1
- Date: Thu, 21 Apr 2022 15:01:09 GMT
- Title: Features of Explainability: How users understand counterfactual and
causal explanations for categorical and continuous features in XAI
- Authors: Greta Warren and Mark T Keane and Ruth M J Byrne
- Abstract summary: Counterfactual explanations are increasingly used to address interpretability, recourse, and bias in AI decisions.
We tested the effects of counterfactual and causal explanations on the objective accuracy of users' predictions.
We also found that users understand explanations referring to categorical features more readily than those referring to continuous features.
- Score: 10.151828072611428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanations are increasingly used to address
interpretability, recourse, and bias in AI decisions. However, we do not know
how well counterfactual explanations help users to understand a system's
decisions, since no large-scale user studies have compared their efficacy to
other sorts of explanations, such as causal explanations (which have a longer
track record of use in rule-based and decision-tree models). It is also unknown
whether counterfactual explanations are as effective for categorical features
as for continuous ones, although current methods assume they are. Hence, in a
controlled user study with 127 volunteer participants, we tested the effects of
counterfactual and causal explanations on the objective accuracy of users'
predictions of the decisions made by a simple AI system, and on participants'
subjective judgments of satisfaction and trust in the explanations. We
discovered a dissociation between objective and subjective measures:
counterfactual explanations elicit higher accuracy of predictions than
no-explanation control descriptions but no higher accuracy than causal
explanations, yet counterfactual explanations elicit greater satisfaction and
trust than causal explanations. We also found that users understand
explanations referring to categorical features more readily than those
referring to continuous features. We discuss the implications of these findings
for current and future counterfactual methods in XAI.
Related papers
- Incremental XAI: Memorable Understanding of AI with Incremental Explanations [13.460427339680168]
We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge by incrementally receiving more details.
We introduce Incremental XAI to automatically partition explanations for general and atypical instances.
Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases.
arXiv Detail & Related papers (2024-04-10T04:38:17Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis [6.606409729669314]
Post-hoc explainability methods aim to clarify predictions of black-box machine learning models.
We conduct a user study to evaluate comprehensibility and predictability in two widely used tools: LIME and SHAP.
We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model's decision boundary.
arXiv Detail & Related papers (2023-09-21T11:54:20Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans (those that are logically consistent with the input) usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
arXiv Detail & Related papers (2022-05-06T17:57:58Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often mis-interpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z) - A Taxonomy of Explainable Bayesian Networks [0.0]
We introduce a taxonomy of explainability in Bayesian networks.
We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions.
arXiv Detail & Related papers (2021-01-28T07:29:57Z) - Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA [22.76153284711981]
We study whether explanations help users correctly decide when to accept or reject an ODQA system's answer.
Our results show that explanations derived from retrieved evidence passages can outperform strong baselines (calibrated confidence) across modalities.
We show common failure cases of current explanations, emphasize end-to-end evaluation of explanations, and caution against evaluating them in proxy modalities that are different from deployment.
arXiv Detail & Related papers (2020-12-30T08:19:02Z) - Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition [24.10997778856368]
This paper explores how explanation veracity affects user performance and agreement in intelligent systems.
We compare variations in explanation veracity for a video review and querying task.
Results suggest that low veracity explanations significantly decrease user performance and agreement.
arXiv Detail & Related papers (2020-05-05T17:06:46Z) - SCOUT: Self-aware Discriminant Counterfactual Explanations [78.79534272979305]
The problem of counterfactual visual explanations is considered.
A new family of discriminant explanations is introduced.
The resulting counterfactual explanations are optimization free and thus much faster than previous methods.
arXiv Detail & Related papers (2020-04-16T17:05:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all of the above) and is not responsible for any consequences of its use.