Shapelet-Based Counterfactual Explanations for Multivariate Time Series
- URL: http://arxiv.org/abs/2208.10462v1
- Date: Mon, 22 Aug 2022 17:33:31 GMT
- Title: Shapelet-Based Counterfactual Explanations for Multivariate Time Series
- Authors: Omar Bahri, Soukaina Filali Boubrahimi, Shah Muhammad Hamdi
- Abstract summary: We develop a model-agnostic multivariate time series (MTS) counterfactual explanation algorithm.
We test our approach on a real-life solar flare prediction dataset and show that it produces high-quality counterfactuals.
In addition to being visually interpretable, our explanations are superior in terms of proximity, sparsity, and plausibility.
- Score: 0.9990687944474738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning and deep learning models have become highly prevalent in
a multitude of domains, the main reservation in their adoption for
decision-making processes is their black-box nature. The Explainable Artificial
Intelligence (XAI) paradigm has gained considerable momentum lately due to its
ability to reduce model opacity. XAI methods have not only increased
stakeholders' trust in the decision process but also helped developers ensure
its fairness. Recent efforts have focused on creating transparent models
and post-hoc explanations. However, fewer methods have been developed for time
series data, and fewer still for multivariate datasets. In this
work, we take advantage of the inherent interpretability of shapelets to
develop a model-agnostic multivariate time series (MTS) counterfactual
explanation algorithm. Counterfactuals can have a tremendous impact on making
black-box models explainable by indicating what changes must be made to the
input to change the final decision. We test our approach on a real-life
solar flare prediction dataset and show that it produces
high-quality counterfactuals. Moreover, a comparison to the only existing MTS
counterfactual generation algorithm shows that, in addition to being visually
interpretable, our explanations are superior in terms of proximity, sparsity,
and plausibility.
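The abstract does not spell out the algorithm itself, but the general idea of a model-agnostic, shapelet-driven counterfactual (graft discriminative subsequences of the target class into the query series until the black-box prediction flips) can be illustrated with a minimal Python sketch. The names `shapelet_counterfactual`, `predict`, and `target_shapelets` are illustrative assumptions, not the authors' API.

```python
import numpy as np

def best_match_start(channel, shapelet):
    """Sliding-window search: index where a 1-D shapelet aligns with a
    channel at minimal Euclidean distance (shapelet must be shorter
    than the channel)."""
    L = len(shapelet)
    dists = [np.linalg.norm(channel[i:i + L] - shapelet)
             for i in range(len(channel) - L + 1)]
    return int(np.argmin(dists))

def shapelet_counterfactual(x, predict, target_class, target_shapelets):
    """Greedily graft target-class shapelets into the query MTS until
    the black-box classifier flips to `target_class`.

    x                : (n_channels, n_timesteps) multivariate series
    predict          : callable mapping a batch (1, C, T) -> labels
    target_shapelets : (channel_index, 1-D shapelet) pairs mined from
                       training instances of the target class
    """
    cf = x.copy()
    for channel, shp in target_shapelets:
        start = best_match_start(cf[channel], shp)
        cf[channel, start:start + len(shp)] = shp   # sparse, local edit
        if predict(cf[None, ...])[0] == target_class:
            return cf   # prediction flipped: counterfactual found
    return None         # no flip achieved with the available shapelets
```

Because only short, class-characteristic segments are swapped in, such edits stay sparse and close to the original instance, which is consistent with the proximity and sparsity claims above.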
Related papers
- SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction [15.832975722301011]
We propose a novel method to enhance explainability with minimal accuracy loss.
We have developed novel methods for estimating nodes by leveraging AI techniques.
Our findings highlight the critical role that statistical methodologies can play in advancing explainable AI.
arXiv Detail & Related papers (2024-06-16T14:43:01Z)
- Unified Explanations in Machine Learning Models: A Perturbation Approach [0.0]
Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches.
We propose a systematic, perturbation-based analysis of a popular model-agnostic XAI method, SHapley Additive exPlanations (SHAP).
We devise algorithms that generate relative feature importance under dynamic inference across a suite of popular machine learning and deep learning methods, along with metrics that quantify how well explanations generated in the static case hold up.
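As a toy illustration of this kind of consistency check (not the paper's benchmark), one can compare SHAP feature rankings before and after perturbing the inputs; the model, noise scale, and rank metric below are arbitrary assumptions.

```python
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Fit an arbitrary black-box model on synthetic data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

def global_importance(data):
    """Mean |SHAP value| per feature for the positive class
    (list-vs-array output layout depends on the shap version)."""
    sv = explainer.shap_values(data)
    sv = sv[1] if isinstance(sv, list) else sv[..., 1]
    return np.abs(sv).mean(axis=0)

rng = np.random.default_rng(0)
static = global_importance(X)
dynamic = global_importance(X + rng.normal(0.0, 0.1, X.shape))

# How well does the static-case feature ranking hold under perturbation?
rho, _ = spearmanr(static, dynamic)
print(f"Spearman rank agreement of importances: {rho:.3f}")
```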
arXiv Detail & Related papers (2024-05-30T16:04:35Z)
- Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF): A Data-Morphology-based Counterfactual Generation Method for Trustworthy Artificial Intelligence [15.415120542032547]
XAI seeks to make AI systems more understandable and trustworthy.
This work analyses the value of data morphology strategies in generating counterfactual explanations.
It introduces the Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF) method.
arXiv Detail & Related papers (2024-05-20T18:51:42Z)
- T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients [5.946429628497358]
We introduce T-Explainer, a novel local additive attribution explainer based on Taylor expansion.
It has desirable properties, such as local accuracy and consistency, making T-Explainer stable over multiple runs.
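A minimal sketch of a first-order Taylor attribution, the family of explainers T-Explainer belongs to: the model is linearized around a baseline, and each feature's contribution is its gradient times its deviation from the baseline. The finite-difference version below is an illustration of the general idea under assumed interfaces, not the authors' implementation.

```python
import numpy as np

def taylor_attributions(f, x, baseline=None, eps=1e-4):
    """First-order Taylor attributions for a black-box scalar f:
    f(x) ~= f(b) + grad_f(b) . (x - b), so feature i contributes
    grad_i * (x_i - b_i) to the prediction shift. Gradients are
    estimated with central finite differences."""
    b = np.zeros_like(x, dtype=float) if baseline is None else baseline
    grad = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        grad[i] = (f(b + e) - f(b - e)) / (2 * eps)  # central difference
    return grad * (x - b)

# Sanity check: for a linear model the expansion is exact.
w = np.array([2.0, -1.0, 0.5])
f = lambda z: float(z @ w)
x = np.array([1.0, 3.0, -2.0])
attr = taylor_attributions(f, x)
print(attr)                                # [ 2. -3. -1.]
print(attr.sum(), f(x) - f(np.zeros(3)))   # both -2.0: local accuracy
```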
arXiv Detail & Related papers (2024-04-25T10:40:49Z)
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- Motif-guided Time Series Counterfactual Explanations [1.1510009152620664]
We propose a novel model that generates intuitive post-hoc counterfactual explanations.
We validated our model using five real-world time-series datasets from the UCR repository.
arXiv Detail & Related papers (2022-11-08T17:56:50Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, showing better validity, sparsity, and proximity.
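The gradient-less proximity step can be imagined as follows: given an already-valid counterfactual, pull its features back toward the query while the predicted label stays flipped. This is a loose analogue under assumed interfaces (`predict` returning labels for a batch), not the MACE implementation.

```python
import numpy as np

def tighten_counterfactual(x, cf, predict, target, shrink=0.5, tol=1e-6):
    """Pull each feature of an already-valid counterfactual `cf` back
    toward the query `x`, keeping only moves that preserve the target
    label. Purely query-based: no model gradients are needed."""
    cf = cf.copy()
    improved = True
    while improved:
        improved = False
        for i in np.argsort(-np.abs(cf - x)):          # largest gaps first
            if abs(cf[i] - x[i]) < tol:
                continue
            trial = cf.copy()
            trial[i] = x[i] + shrink * (cf[i] - x[i])  # step toward x
            if predict(trial[None, :])[0] == target:   # validity preserved?
                cf, improved = trial, True
    return cf
```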
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
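A generic diversity-enforcing term of the kind this summary mentions might look like the PyTorch fragment below: a pairwise-similarity penalty that pushes a batch of latent perturbations in different directions. It is a plausible stand-in, not the paper's exact loss.

```python
import torch

def diversity_loss(perturbations):
    """Penalize pairwise cosine similarity so a batch of k latent
    perturbations points in different directions."""
    z = torch.nn.functional.normalize(perturbations, dim=1)
    sim = z @ z.T                          # k x k cosine similarities
    off_diag = sim - torch.eye(len(z))     # drop self-similarity
    k = len(z)
    return off_diag.pow(2).sum() / (k * (k - 1))

# Usage: add the term to the counterfactual objective for k candidates.
perturbs = torch.randn(4, 16, requires_grad=True)  # k=4 latent moves
diversity_loss(perturbs).backward()
```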
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is demonstrated through examples showing how it can be used in practice to promote the interpretability of RF models.
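The underlying rows-are-rules / columns-are-features structure can be reconstructed from a scikit-learn forest directly: every root-to-leaf path yields one rule, represented as a per-feature predicate interval. The sketch below is only an approximation of the data ExMatrix visualizes, on an arbitrary toy dataset.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=3, max_depth=3,
                            random_state=0).fit(X, y)

def tree_rules(decision_tree, n_features):
    """Turn every root-to-leaf path into one 'rule': a row holding a
    (low, high) predicate interval per feature."""
    t = decision_tree.tree_
    rows = []
    def walk(node, bounds):
        if t.children_left[node] == t.children_right[node]:  # leaf
            rows.append([tuple(b) for b in bounds])
            return
        f, thr = t.feature[node], t.threshold[node]
        left = [list(b) for b in bounds]
        left[f][1] = min(left[f][1], thr)        # left branch: x_f <= thr
        right = [list(b) for b in bounds]
        right[f][0] = max(right[f][0], thr)      # right branch: x_f > thr
        walk(t.children_left[node], left)
        walk(t.children_right[node], right)
    walk(0, [[-np.inf, np.inf] for _ in range(n_features)])
    return rows

matrix = [row for est in rf.estimators_
          for row in tree_rules(est, X.shape[1])]
print(len(matrix), "rules x", X.shape[1], "features")
print(matrix[0])  # one row: an interval predicate per feature
```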
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.