Motif-guided Time Series Counterfactual Explanations
- URL: http://arxiv.org/abs/2211.04411v3
- Date: Thu, 1 Feb 2024 21:32:04 GMT
- Title: Motif-guided Time Series Counterfactual Explanations
- Authors: Peiyu Li, Soukaina Filali Boubrahimi, Shah Muhammad Hamdi
- Abstract summary: We propose a novel model that generates intuitive post-hoc counterfactual explanations.
We validated our model using five real-world time-series datasets from the UCR repository.
- Score: 1.1510009152620664
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rising need for interpretable machine learning methods comes a
corresponding need for human effort to provide diverse explanations of the
factors that influence model decisions. To improve the trust and
transparency of AI-based systems, the field of EXplainable Artificial
Intelligence (XAI) has emerged. The XAI paradigm divides into two main
categories: feature attribution and counterfactual explanation methods. While
feature attribution methods explain the reasoning behind a model decision,
counterfactual explanation methods discover the smallest input changes that
would result in a different decision. In this paper, we aim to build trust
and transparency in time series models by using motifs to generate
counterfactual explanations. We propose Motif-Guided Counterfactual Explanation
(MG-CF), a novel model that generates intuitive post-hoc counterfactual
explanations by making full use of important motifs to provide interpretable
information for decision-making processes. To the best of our knowledge, this is
the first effort to leverage motifs to guide counterfactual explanation
generation. We validate our model on five real-world time-series datasets
from the UCR repository. Our experimental results show the superiority of MG-CF
in balancing all the desirable properties of counterfactual explanations in
comparison with other competing state-of-the-art baselines.
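To make the core idea concrete, here is a minimal, hypothetical Python sketch of motif-guided counterfactual generation: a class-discriminative motif mined from the target class is substituted into the matching span of the query series, and the result counts as a counterfactual only if the classifier's prediction flips. The names (motif_guided_cf, clf, target_motif) and the scikit-learn-style predict interface are assumptions for illustration, not the authors' exact algorithm.

    import numpy as np

    def motif_guided_cf(x, target_motif, clf, target_class):
        """Hypothetical sketch of motif-guided counterfactual generation.

        x            : 1-D numpy array, the query time series
        target_motif : (start, end, values) -- a class-discriminative
                       subsequence assumed to be mined beforehand from
                       the target class (e.g., via shapelet discovery)
        clf          : fitted classifier with a scikit-learn-style
                       .predict method (an assumption for illustration)
        target_class : the label the counterfactual should receive
        """
        start, end, values = target_motif
        cf = x.copy()
        # Substitute only the motif span; the rest of the series is
        # kept intact, so the perturbation stays sparse and contiguous.
        cf[start:end] = values
        # A valid counterfactual must actually flip the model's decision.
        if clf.predict(cf.reshape(1, -1))[0] == target_class:
            return cf
        return None  # this motif did not yield a valid counterfactual

Because only the motif span is modified, the change to the query stays contiguous and sparse, which is the intuition behind using motifs to balance the desirable counterfactual properties the abstract mentions.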
Related papers
- Info-CELS: Informative Saliency Map Guided Counterfactual Explanation [1.25828876338076]
A novel counterfactual explanation model, CELS, learns a saliency map for the instance of interest and generates a counterfactual explanation guided by the learned saliency map.
We present an enhanced approach that builds upon CELS.
arXiv Detail & Related papers (2024-10-27T18:12:02Z)
- Explainability for Machine Learning Models: From Data Adaptability to User Perception [0.8702432681310401]
This thesis explores the generation of local explanations for already deployed machine learning models.
It aims to identify optimal conditions for producing meaningful explanations considering both data and user requirements.
arXiv Detail & Related papers (2024-02-16T18:44:37Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning-theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach across a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- Shapelet-Based Counterfactual Explanations for Multivariate Time Series [0.9990687944474738]
We develop a model-agnostic multivariate time series (MTS) counterfactual explanation algorithm.
We test our approach on a real-life solar flare prediction dataset and show that it produces high-quality counterfactuals.
In addition to being visually interpretable, our explanations are superior in terms of proximity, sparsity, and plausibility (a generic sketch of two of these metrics appears after this list).
arXiv Detail & Related papers (2022-08-22T17:33:31Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to current XAI methods that generate explanations as a single-shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Explanation of Reinforcement Learning Model in Dynamic Multi-Agent System [3.754171077237216]
This paper reports novel work on generating verbal explanations for the behaviors of a deep reinforcement learning (DRL) agent.
A learning model is proposed to extend the implicit logic of generating verbal explanations to general situations.
Results show that the verbal explanations generated by both models improve users' subjective satisfaction with the interpretability of DRL systems.
arXiv Detail & Related papers (2020-08-04T13:21:19Z)
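Several of the works above, including MG-CF and the shapelet-based method, compare counterfactuals on properties such as proximity and sparsity. As a generic, hedged sketch of how these two are often computed (each paper defines them slightly differently; the L1-distance and unchanged-fraction forms below are common conventions, not the exact formulas from any listed paper):

    import numpy as np

    def proximity(x, cf):
        # L1 distance between query and counterfactual: lower values
        # mean the counterfactual stays closer to the original series.
        return float(np.abs(x - cf).sum())

    def sparsity(x, cf, tol=1e-8):
        # Fraction of time steps left (numerically) unchanged: higher
        # values mean fewer points were altered to flip the prediction.
        return float(np.mean(np.abs(x - cf) <= tol))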