XAI in the context of Predictive Process Monitoring: Too much to Reveal
- URL: http://arxiv.org/abs/2202.08265v1
- Date: Wed, 16 Feb 2022 15:31:59 GMT
- Title: XAI in the context of Predictive Process Monitoring: Too much to Reveal
- Authors: Ghada Elkhawaga, Mervat Abuelkheir, Manfred Reichert
- Abstract summary: Predictive Process Monitoring (PPM) has been integrated into process mining tools as a value-adding task.
XAI methods are employed to compensate for the lack of transparency of the most efficient predictive models.
A comparison is missing that distinguishes which XAI characteristics or underlying conditions determine an explanation.
- Score: 3.10770247120758
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Predictive Process Monitoring (PPM) has been integrated into process mining tools as a value-adding task. PPM provides useful predictions on the further execution of running business processes. To this end, machine learning-based techniques are widely employed in the context of PPM. To gain stakeholders' trust in and advocacy of PPM predictions, eXplainable Artificial Intelligence (XAI) methods are employed to compensate for the lack of transparency of the most efficient predictive models. However, even when employed under the same settings regarding data, preprocessing techniques, and ML models, the explanations generated by different XAI methods differ profoundly. A comparison is missing that distinguishes which XAI characteristics and underlying conditions determine the resulting explanation. To address this gap, we provide a framework for studying the effect of different PPM-related settings and ML model-related choices on the characteristics and expressiveness of the resulting explanations. In addition, we compare how the characteristics of different explainability methods shape the resulting explanations and enable reflection of the underlying model's reasoning process.
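To make the kind of comparison the paper targets concrete, below is a minimal sketch (not the authors' framework) that trains one model and asks two widely used XAI methods, SHAP and LIME, to explain the same prediction. The synthetic event-log features, the remaining-time target, and all variable names are illustrative assumptions; the sketch assumes the `shap`, `lime`, and `scikit-learn` packages.

```python
# Minimal sketch: contrast the feature attributions two XAI methods
# assign to the same prediction. The features are synthetic stand-ins
# for real PPM prefix encodings (illustrative assumption).
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["prefix_length", "elapsed_time", "resource_load", "amount"]
X = rng.random((500, 4))
y = X[:, 1] + 0.5 * X[:, 3]  # synthetic remaining-time target

model = GradientBoostingRegressor(random_state=0).fit(X, y)
instance = X[0]

# SHAP: additive feature attributions derived from Shapley values.
shap_vals = shap.TreeExplainer(model).shap_values(instance.reshape(1, -1))[0]

# LIME: coefficients of a local linear surrogate fitted around the instance.
lime_exp = LimeTabularExplainer(
    X, feature_names=feature_names, mode="regression"
).explain_instance(instance, model.predict, num_features=4)

print("SHAP attributions:", dict(zip(feature_names, shap_vals)))
print("LIME weights:", lime_exp.as_list())
```

Even in a toy setting like this, the two methods typically agree on the dominant features but assign different magnitudes and orderings to the rest; isolating the causes of such divergence is exactly what the proposed framework is meant to support.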
Related papers
- Efficient Learning of POMDPs with Known Observation Model in Average-Reward Setting [56.92178753201331]
We propose the Observation-Aware Spectral (OAS) estimation technique, which enables the POMDP parameters to be learned from samples collected using a belief-based policy.
We show the consistency of the OAS procedure, and we prove a regret guarantee of order $\mathcal{O}(\sqrt{T \log(T)})$ for the proposed OAS-UCRL algorithm.
arXiv Detail & Related papers (2024-10-02T08:46:34Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Robustness of Explainable Artificial Intelligence in Industrial Process Modelling [43.388607981317016]
We evaluate current XAI methods by scoring them based on ground truth simulations and sensitivity analysis.
We show the differences between XAI methods in their ability to correctly predict the true sensitivity of the modeled industrial process.
arXiv Detail & Related papers (2024-07-12T09:46:26Z)
- A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis [128.0532113800092]
We present a mechanistic interpretation of Transformer-based LMs on arithmetic questions.
This provides insights into how information related to arithmetic is processed by LMs.
arXiv Detail & Related papers (2023-05-24T11:43:47Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
Our MACE approach combines a novel RL-based method for finding good counterfactual examples with a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP, which allow selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Explainability of Predictive Process Monitoring Results: Can You See My Data Issues? [3.10770247120758]
Predictive business process monitoring (PPM) has been around for several years as a use case of process mining.
We study how differences in resulting explanations may indicate several issues in underlying data.
arXiv Detail & Related papers (2022-02-16T13:14:02Z)
- Locally Interpretable Model Agnostic Explanations using Gaussian Processes [2.9189409618561966]
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for explaining the prediction of a single instance.
We propose a Gaussian Process (GP) based variation of locally interpretable models.
We demonstrate that the proposed technique is able to generate faithful explanations using far fewer samples than LIME.
arXiv Detail & Related papers (2021-08-16T05:49:01Z)
- CoCoMoT: Conformance Checking of Multi-Perspective Processes via SMT (Extended Version) [62.96267257163426]
We introduce the CoCoMoT (Computing Conformance Modulo Theories) framework.
First, we show how SAT-based encodings studied in the pure control-flow setting can be lifted to our data-aware case.
Second, we introduce a novel preprocessing technique based on a notion of property-preserving clustering.
arXiv Detail & Related papers (2021-03-18T20:22:50Z)
- Evaluating Explainable Methods for Predictive Process Analytics: A Functionally-Grounded Approach [2.2448567386846916]
Predictive process analytics focuses on predicting the future states of running instances of a business process.
Current explainable machine learning methods, such as LIME and SHAP, can be used to interpret black box models.
We apply the proposed metrics to evaluate the performance of LIME and SHAP in interpreting process predictive models built on XGBoost.
arXiv Detail & Related papers (2020-12-08T05:05:19Z)
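As a complement to the last entry above, here is a minimal sketch of one functionally-grounded metric: the stability of LIME explanations for an XGBoost model, measured as the rank correlation between two explanation runs on the same instance. The data, the metric choice, and all names are illustrative assumptions rather than that paper's exact protocol; the sketch assumes the `xgboost`, `lime`, and `scipy` packages.

```python
# Minimal sketch: functionally-grounded stability check for LIME on XGBoost.
import numpy as np
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
X = rng.random((400, 5))                   # synthetic prefix features
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # synthetic process outcome

model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

def lime_attributions(instance, seed):
    """Return LIME feature weights for one instance as a dense vector."""
    explainer = LimeTabularExplainer(X, mode="classification", random_state=seed)
    exp = explainer.explain_instance(instance, model.predict_proba, num_features=5)
    weights = dict(exp.as_map()[1])        # {feature_index: weight}
    return np.array([weights.get(i, 0.0) for i in range(5)])

# Stability: re-explain the same instance with different sampling seeds.
rho, _ = spearmanr(lime_attributions(X[0], 0), lime_attributions(X[0], 1))
print(f"Spearman rank correlation between runs: {rho:.3f}")
```

A correlation close to 1 indicates stable attributions; values well below 1 flag sensitivity to the explainer's own sampling, one of the criteria a functionally-grounded evaluation can score without human subjects.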