Towards a Rigorous Evaluation of Explainability for Multivariate Time
Series
- URL: http://arxiv.org/abs/2104.04075v1
- Date: Tue, 6 Apr 2021 17:16:36 GMT
- Title: Towards a Rigorous Evaluation of Explainability for Multivariate Time
Series
- Authors: Rohit Saluja, Avleen Malhi, Samanta Knapič, Kary Främling, Cicek
Cavdar
- Abstract summary: The aim of this study was to achieve and evaluate model-agnostic explainability in a time series forecasting problem.
The solution involved framing the task as a time series forecasting problem to predict the sales deals closed.
The explanations produced by LIME and SHAP greatly helped lay humans in understanding the predictions made by the machine learning model.
- Score: 5.786452383826203
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine learning-based systems are rapidly gaining popularity, and in line
with that there has been a surge of research in the field of explainability to
ensure that machine learning models are reliable, fair, and can be held
accountable for their decision-making process. Explainable Artificial Intelligence
(XAI) methods are typically deployed to debug black-box machine learning models
but in comparison to tabular, text, and image data, explainability in time
series is still relatively unexplored. The aim of this study was to achieve and
evaluate model agnostic explainability in a time series forecasting problem.
This work focused on providing a solution for a digital consultancy company
seeking a data-driven approach to understand the effect of its sales-related
activities on the sales deals closed. The solution involved framing the task as
a time series forecasting problem to predict the sales deals, and explainability
was achieved using two model-agnostic explainability techniques, Local
Interpretable Model-agnostic Explanations (LIME) and SHapley Additive
exPlanations (SHAP), which were assessed through a human evaluation of
explainability. The results clearly indicate that the explanations produced by
LIME and SHAP greatly helped lay humans in understanding the predictions made
by the machine learning model. The presented work can easily be extended to any
time series forecasting problem.
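For concreteness, below is a minimal, hedged sketch of how model-agnostic LIME and SHAP explanations can be attached to a time series forecasting model of the kind described above. The `RandomForestRegressor`, the `make_lag_features` helper, the lag-window length, and the synthetic sales series are illustrative assumptions, not the paper's actual data or pipeline; only the standard `shap` and `lime` library calls are taken as given.

```python
# Sketch: explaining a lag-feature forecaster with SHAP and LIME (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
sales = rng.poisson(lam=20.0, size=300).astype(float)  # hypothetical weekly sales counts

def make_lag_features(series, n_lags=8):
    """Build a supervised (X, y) set where each row holds the previous n_lags values."""
    X = np.stack([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lag_features(sales)
feature_names = [f"lag_{i}" for i in range(X.shape[1], 0, -1)]  # lag_8 ... lag_1

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP: model-agnostic KernelExplainer with a small background sample.
explainer = shap.KernelExplainer(model.predict, X[:50])
shap_values = explainer.shap_values(X[-1:])  # attributions for the most recent window
print("SHAP attributions:", dict(zip(feature_names, np.round(shap_values[0], 3))))

# LIME: local surrogate explanation for the same window.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
lime_exp = lime_explainer.explain_instance(X[-1], model.predict, num_features=5)
print("LIME explanation:", lime_exp.as_list())
```

Both explainers treat the forecaster as a black box and attribute the prediction to the lagged inputs, which is what makes the approach model-agnostic.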
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
However, evaluating forecast natural language explanations (NLEs) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations [1.0370398945228227]
We introduce LaPLACE-explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
arXiv Detail & Related papers (2023-10-01T04:09:59Z)
- Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis [6.606409729669314]
Post-hoc explainability methods aim to clarify predictions of black-box machine learning models.
We conduct a user study to evaluate comprehensibility and predictability in two widely used tools: LIME and SHAP.
We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model's decision boundary.
arXiv Detail & Related papers (2023-09-21T11:54:20Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Explainable AI Enabled Inspection of Business Process Prediction Models [2.5229940062544496]
We present an approach that uses model explanations to investigate the reasoning applied by machine-learned prediction models.
A novel contribution of our approach is the proposal of model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution.
arXiv Detail & Related papers (2021-07-16T06:51:18Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- TimeSHAP: Explaining Recurrent Models through Sequence Perturbations [3.1498833540989413]
Recurrent neural networks are a standard building block in numerous machine learning domains.
The complex decision-making in these models is seen as a black box, creating a tension between accuracy and interpretability.
In this work, we contribute to filling these gaps by presenting TimeSHAP, a model-agnostic recurrent explainer (see the sketch after this list).
arXiv Detail & Related papers (2020-11-30T19:48:57Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
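As a rough illustration of the perturbation idea behind sequence explainers such as TimeSHAP (listed above), the sketch below occludes one time step at a time in a toy recurrent model and records the change in its output. This is not the TimeSHAP algorithm itself; the `TinyRecurrentScorer` model, the input shapes, and the mean-value baseline are illustrative assumptions.

```python
# Sketch: occlusion-style time-step importance for a recurrent model (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyRecurrentScorer(nn.Module):
    """Toy GRU plus a linear head that produces one score per input sequence."""
    def __init__(self, n_features=3, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, features)
        _, h_n = self.rnn(x)                   # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)  # (batch,)

model = TinyRecurrentScorer().eval()
sequence = torch.randn(1, 12, 3)               # one sequence with 12 time steps
baseline = sequence.mean(dim=1)                # per-feature background value, shape (1, 3)

with torch.no_grad():
    reference = model(sequence)
    importance = []
    for t in range(sequence.shape[1]):
        perturbed = sequence.clone()
        perturbed[:, t, :] = baseline          # occlude a single time step
        importance.append((reference - model(perturbed)).item())

print("Per-time-step importance:", [round(v, 4) for v in importance])
```

A large score drop when a step is occluded marks that step as influential; SHAP-style explainers refine this idea with principled attribution over coalitions of steps and features.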