Predictability and Comprehensibility in Post-Hoc XAI Methods: A
User-Centered Analysis
- URL: http://arxiv.org/abs/2309.11987v1
- Date: Thu, 21 Sep 2023 11:54:20 GMT
- Title: Predictability and Comprehensibility in Post-Hoc XAI Methods: A
User-Centered Analysis
- Authors: Anahid Jalali, Bernhard Haslhofer, Simone Kriglstein, Andreas Rauber
- Abstract summary: Post-hoc explainability methods aim to clarify predictions of black-box machine learning models.
We conduct a user study to evaluate comprehensibility and predictability in two widely used tools: LIME and SHAP.
We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model's decision boundary.
- Score: 6.606409729669314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Post-hoc explainability methods aim to clarify predictions of black-box
machine learning models. However, it is still largely unclear how well users
comprehend the provided explanations and whether they increase users' ability
to predict the model's behavior. We approach this question by conducting a
user study to evaluate comprehensibility and predictability in two widely used
tools: LIME and SHAP. Moreover, we investigate the effect of counterfactual
explanations and misclassifications on users' ability to understand and
predict the model's behavior. We find that the comprehensibility of SHAP is
significantly reduced when explanations are provided for samples near a
model's decision boundary. Furthermore, we find that counterfactual
explanations and misclassifications can significantly increase users'
understanding of how a machine learning model makes decisions. Based on our
findings, we also derive design recommendations for future post-hoc
explainability methods with increased comprehensibility and predictability.
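
The paper does not include code, but the kind of explanations it evaluates can be reproduced with the standard LIME and SHAP packages. The following is a minimal sketch, assuming a scikit-learn tabular classifier; the breast-cancer dataset, the random forest, and the "probability closest to 0.5" criterion for picking a decision-boundary sample are illustrative assumptions, not the authors' actual study design.

```python
# Minimal sketch (not the authors' code): produce LIME and SHAP explanations
# for a tabular classifier, including a sample near the decision boundary.
# Assumes scikit-learn, shap, and lime are installed; dataset and model are
# illustrative choices only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

import shap
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Pick the test sample whose positive-class probability is closest to 0.5,
# i.e. a sample "near the model's decision boundary".
proba = model.predict_proba(X_test)[:, 1]
boundary_idx = int(np.argmin(np.abs(proba - 0.5)))

# SHAP: TreeExplainer gives per-feature additive attributions for tree models.
# In the usual workflow these values would be passed to shap.force_plot or
# shap.summary_plot for visualization.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[boundary_idx : boundary_idx + 1])

# LIME: fit a local surrogate model around the same instance.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)
lime_exp = lime_explainer.explain_instance(
    X_test[boundary_idx], model.predict_proba, num_features=5
)

print("P(class=1) for boundary sample:", round(float(proba[boundary_idx]), 3))
print("Top LIME feature weights:", lime_exp.as_list())
```

Explanations like these, generated for boundary samples such as `boundary_idx`, are the kind of output whose comprehensibility and predictability the user study assesses; the abstract's finding specifically concerns SHAP attributions for such samples.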
Related papers
- Counterfactual Explanations for Deep Learning-Based Traffic Forecasting [42.31238891397725]
This study aims to leverage an Explainable AI approach, counterfactual explanations, to enhance the explainability and usability of deep learning-based traffic forecasting models.
The study first implements a deep learning model to predict traffic speed based on historical traffic data and contextual variables.
Counterfactual explanations are then used to illuminate how alterations in these input variables affect predicted outcomes.
arXiv Detail & Related papers (2024-05-01T11:26:31Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misinterpreted.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Towards a Rigorous Evaluation of Explainability for Multivariate Time Series [5.786452383826203]
The goal of this study was to achieve and evaluate model-agnostic explainability in a time series forecasting problem.
The solution involved framing the problem as a time series forecasting task to predict sales deals.
The explanations produced by LIME and SHAP greatly helped lay humans in understanding the predictions made by the machine learning model.
arXiv Detail & Related papers (2021-04-06T17:16:36Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets.
arXiv Detail & Related papers (2020-08-19T09:44:47Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
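
Several of the entries above, like the main paper, rely on counterfactual explanations. As a rough illustration of the underlying idea only, and not the algorithm of any listed paper (DECE, the diverse-counterfactual method, or the traffic-forecasting study), the following hypothetical sketch searches for a small change to a single feature that flips a scikit-learn classifier's prediction; the dataset, model, and greedy search strategy are illustrative assumptions.

```python
# Hypothetical sketch of a counterfactual explanation: find a small change to
# one feature that flips the classifier's prediction. Illustrates the general
# idea only; not the method of any specific paper listed above.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)
scaler = StandardScaler().fit(X_train)
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_train), y_train)

def single_feature_counterfactual(x, model, scaler, max_step=3.0, n_steps=60):
    """Greedily nudge one standardized feature at a time until the label flips.

    Returns (feature_index, perturbed_sample) or None if no flip is found.
    """
    x_std = scaler.transform(x.reshape(1, -1))
    original = int(model.predict(x_std)[0])
    for j in range(x_std.shape[1]):                      # try each feature
        for delta in np.linspace(-max_step, max_step, n_steps):
            candidate = x_std.copy()
            candidate[0, j] += delta
            if int(model.predict(candidate)[0]) != original:
                return j, scaler.inverse_transform(candidate)[0]
    return None

result = single_feature_counterfactual(X_test[0], clf, scaler)
if result is not None:
    j, x_cf = result
    print(f"Flipping feature '{data.feature_names[j]}' changes the prediction:")
    print(f"  original value: {X_test[0][j]:.2f} -> counterfactual: {x_cf[j]:.2f}")
else:
    print("No single-feature counterfactual found within the search range.")
```

The published methods differ mainly in how they constrain this search, for example by enforcing diversity or plausibility of the perturbations, or by exploring them interactively as in DECE.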
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.