Evaluating Explainable Methods for Predictive Process Analytics: A
Functionally-Grounded Approach
- URL: http://arxiv.org/abs/2012.04218v1
- Date: Tue, 8 Dec 2020 05:05:19 GMT
- Title: Evaluating Explainable Methods for Predictive Process Analytics: A
Functionally-Grounded Approach
- Authors: Mythreyi Velmurugan, Chun Ouyang, Catarina Moreira and Renuka
Sindhgatta
- Abstract summary: Predictive process analytics focuses on predicting the future states of running instances of a business process.
Current explainable machine learning methods, such as LIME and SHAP, can be used to interpret black box models.
We apply the proposed metrics to evaluate the performance of LIME and SHAP in interpreting process predictive models built on XGBoost.
- Score: 2.2448567386846916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictive process analytics focuses on predicting the future states of
running instances of a business process. While advanced machine learning
techniques have been used to increase accuracy of predictions, the resulting
predictive models lack transparency. Current explainable machine learning
methods, such as LIME and SHAP, can be used to interpret black box models.
However, it is unclear how fit for purpose these methods are in explaining
process predictive models. In this paper, we draw on evaluation measures used
in the field of explainable AI and propose functionally-grounded evaluation
metrics for assessing explainable methods in predictive process analytics. We
apply the proposed metrics to evaluate the performance of LIME and SHAP in
interpreting process predictive models built on XGBoost, which has been shown
to be relatively accurate in process predictions. We conduct the evaluation
using three open source, real-world event logs and analyse the evaluation
results to derive insights. The research contributes to understanding the
trustworthiness of explainable methods for predictive process analytics as a
fundamental and key step towards human user-oriented evaluation.
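As a concrete illustration of the setup being evaluated, the sketch below applies LIME and SHAP to an XGBoost classifier trained on synthetic tabular features that stand in for a prefix-encoded event log. It is a minimal sketch under assumed data, feature names, and model settings, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): LIME and SHAP explanations for an
# XGBoost process-outcome classifier. Features and data are synthetic stand-ins
# for a prefix-encoded event log.
import numpy as np
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["activity_count", "elapsed_time", "resource_changes", "amount"]  # hypothetical
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)

# LIME: fit a local surrogate model around a single prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, class_names=["regular", "deviant"],
    discretize_continuous=True, random_state=0)
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print("LIME:", lime_exp.as_list())

# SHAP: TreeExplainer computes Shapley values efficiently for tree ensembles.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)  # (n_samples, n_features) for binary XGBoost
print("SHAP (first test instance):", dict(zip(feature_names, np.round(shap_values[0], 3))))
```

In the paper's setting, the synthetic matrix would be replaced by prefixes of traces from the three open-source event logs, encoded as feature vectors.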
Related papers
- A Probabilistic Perspective on Unlearning and Alignment for Large Language Models [48.96686419141881]
We introduce the first formal probabilistic evaluation framework for unlearning in Large Language Models (LLMs).
We derive novel metrics with high-probability guarantees concerning the output distribution of a model.
Our metrics are application-independent and allow practitioners to make more reliable estimates about model capabilities before deployment.
arXiv Detail & Related papers (2024-10-04T15:44:23Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
- Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z)
- Model Predictive Control with Gaussian-Process-Supported Dynamical Constraints for Autonomous Vehicles [82.65261980827594]
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
A multi-mode predictive control approach considers the possible intentions of the human drivers.
arXiv Detail & Related papers (2023-03-08T17:14:57Z)
- An Explainable Decision Support System for Predictive Process Analytics [0.41562334038629595]
This paper proposes a predictive analytics framework that is also equipped with explanation capabilities based on the game theory of Shapley Values.
The framework has been implemented in the IBM Process Mining suite and commercialized for business users.
arXiv Detail & Related papers (2022-07-26T09:55:49Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Building Interpretable Models for Business Process Prediction using Shared and Specialised Attention Mechanisms [5.607831842909669]
We address the "black-box" problem in predictive process analytics by building interpretable models.
We propose two types of attentions: event attention to capture the impact of specific process events on a prediction, and attribute attention to reveal which attribute(s) of an event influenced the prediction.
arXiv Detail & Related papers (2021-09-03T10:17:05Z)
- Interpreting Process Predictions using a Milestone-Aware Counterfactual Approach [0.0]
We explore the use of a popular model-agnostic counterfactual algorithm, DiCE, in the context of predictive process analytics (a hedged usage sketch appears after this list).
The analysis reveals that the algorithm is limited when being applied to derive explanations of process predictions.
We propose an approach that supports deriving milestone-aware counterfactuals at different stages of a trace to promote interpretability.
arXiv Detail & Related papers (2021-07-19T09:14:16Z)
- Explainable AI Enabled Inspection of Business Process Prediction Models [2.5229940062544496]
We present an approach that uses model explanations to investigate the reasoning behind machine-learned predictions.
A novel contribution of our approach is the proposal of model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution.
arXiv Detail & Related papers (2021-07-16T06:51:18Z)
- Developing a Fidelity Evaluation Approach for Interpretable Machine Learning [2.2448567386846916]
Explainable AI (XAI) methods are used to improve the interpretability of complex models.
In particular, methods to evaluate the fidelity of the explanation to the underlying black box require further development.
Our evaluations suggest that the internal mechanism of the underlying predictive model, the internal mechanism of the explainable method used, and model and data complexity all affect explanation fidelity (a simple fidelity-style check is sketched after this list).
arXiv Detail & Related papers (2021-06-16T00:21:16Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
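As referenced in the milestone-aware counterfactual entry above, DiCE is a model-agnostic counterfactual generator. The sketch below shows generic dice-ml usage on a hypothetical tabular encoding of traces; the dataframe, feature names, and classifier are illustrative assumptions, and the milestone-aware extension proposed in that paper is not reproduced here.

```python
# Sketch of generic DiCE usage (dice-ml); data and model are hypothetical.
import dice_ml
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical tabular encoding of process trace prefixes.
df = pd.DataFrame({
    "activity_count": [3, 7, 5, 9, 2, 8],
    "elapsed_time":   [1.2, 6.5, 3.1, 8.0, 0.9, 7.4],
    "deviant":        [0, 1, 0, 1, 0, 1],
})
clf = RandomForestClassifier(random_state=0).fit(df.drop(columns="deviant"), df["deviant"])

data = dice_ml.Data(dataframe=df,
                    continuous_features=["activity_count", "elapsed_time"],
                    outcome_name="deviant")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Ask for counterfactuals that flip the predicted outcome of one trace prefix.
query = df.drop(columns="deviant").iloc[[0]]
cf = explainer.generate_counterfactuals(query, total_CFs=2, desired_class="opposite")
cf.visualize_as_dataframe(show_only_changes=True)
```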
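The functionally-grounded metrics proposed in the main paper and the fidelity measures in the fidelity-evaluation entry above are defined in the respective full texts. As a stand-in, the sketch below shows one generic perturbation-based fidelity check, assuming the model, data, and attribution arrays from the earlier LIME/SHAP sketch; the naming and baseline choice are illustrative only.

```python
# Generic perturbation-based fidelity check (an illustrative assumption, not the
# exact metric from the papers above): neutralise the features an explanation
# ranks highest and compare the drop in predicted probability against
# neutralising randomly chosen features.
import numpy as np

def perturbation_drop(model, x, feature_idx, baseline):
    """Predicted-probability drop when the given features are set to a baseline value."""
    x_pert = x.copy()
    x_pert[feature_idx] = baseline[feature_idx]
    p_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    p_pert = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    return p_orig - p_pert

def fidelity_check(model, x, importance, X_train, k=2, seed=0):
    """A faithful explanation should lose more probability when its top-k
    features are neutralised than when k random features are neutralised."""
    rng = np.random.default_rng(seed)
    baseline = X_train.mean(axis=0)                    # neutral reference point
    top_k = np.argsort(np.abs(importance))[::-1][:k]   # features the explanation emphasises
    rand_k = rng.choice(x.shape[0], size=k, replace=False)
    return (perturbation_drop(model, x, top_k, baseline),
            perturbation_drop(model, x, rand_k, baseline))

# Example with the XGBoost model and SHAP values from the earlier sketch:
# top_drop, rand_drop = fidelity_check(model, X_test[0], shap_values[0], X_train)
# print(f"top-k drop {top_drop:.3f} vs random-k drop {rand_drop:.3f}")
```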
This list is automatically generated from the titles and abstracts of the papers in this site.