An Explainable Decision Support System for Predictive Process Analytics
- URL: http://arxiv.org/abs/2207.12782v1
- Date: Tue, 26 Jul 2022 09:55:49 GMT
- Title: An Explainable Decision Support System for Predictive Process Analytics
- Authors: Riccardo Galanti, Massimiliano de Leoni, Merylin Monaro, Nicolò
Navarin, Alan Marazzi, Brigida Di Stasi, Stéphanie Maldera
- Abstract summary: This paper proposes a predictive analytics framework that is also equipped with explanation capabilities based on the game theory of Shapley Values.
The framework has been implemented in the IBM Process Mining suite and commercialized for business users.
- Score: 0.41562334038629595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive Process Analytics is becoming an essential aid for organizations,
providing online operational support of their processes. However, process
stakeholders need to be provided with an explanation of the reasons why a given
process execution is predicted to behave in a certain way. Otherwise, they will
be unlikely to trust the predictive monitoring technology and, hence, adopt it.
This paper proposes a predictive analytics framework that is also equipped with
explanation capabilities based on the game theory of Shapley Values. The
framework has been implemented in the IBM Process Mining suite and
commercialized for business users. The framework has been tested on real-life
event data to assess the quality of the predictions and the corresponding
evaluations. In particular, a user evaluation has been performed in order to
understand if the explanations provided by the system were intelligible to
process stakeholders.
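No code accompanies this summary, so the following is only a minimal sketch of the underlying idea: Shapley-value attributions computed over a feature-encoded event log, using the shap and xgboost libraries. All feature names, data, and model settings below are hypothetical stand-ins, not the framework shipped in the IBM Process Mining suite.

```python
# Minimal sketch: Shapley-value explanations for a process-outcome
# prediction. Feature names and data are hypothetical stand-ins for an
# encoded event log; the paper's own pipeline is not reproduced here.
import numpy as np
import pandas as pd
import shap
import xgboost

rng = np.random.default_rng(0)

# Hypothetical prefix encoding of running cases: each row is one
# partial trace, each column an aggregated feature of that prefix.
X = pd.DataFrame({
    "elapsed_hours":  rng.exponential(24, 500),
    "num_activities": rng.integers(1, 15, 500),
    "num_handovers":  rng.integers(0, 5, 500),
    "amount":         rng.normal(1000, 300, 500),
})
# Hypothetical label: 1 if the case will miss its deadline.
y = (X["elapsed_hours"] + 5 * X["num_handovers"] > 40).astype(int)

model = xgboost.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-case explanation: which features push this running case
# towards the predicted outcome, and by how much.
case = 0
for name, contribution in zip(X.columns, shap_values[case]):
    print(f"{name:>15}: {contribution:+.3f}")
```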
Related papers
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
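To make the technique concrete, here is a minimal PyTorch sketch of adversarial debiasing in general, not this paper's framework: a predictor is trained on the outcome while being penalized whenever an adversary can recover a protected attribute from its output. The data, architectures, and penalty weight `lam` are all assumptions.

```python
# Minimal sketch of adversarial debiasing (NOT this paper's framework):
# a predictor learns the process outcome while an adversary tries to
# recover a protected attribute from the predictor's output; the
# predictor is penalized whenever the adversary succeeds.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 512, 8
X = torch.randn(n, d)
protected = (X[:, 0] > 0).float().unsqueeze(1)  # hypothetical biased variable
y = ((X[:, 1] + 0.8 * protected.squeeze(1)) > 0).float().unsqueeze(1)

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # strength of the debiasing penalty (hypothetical)

for epoch in range(200):
    # 1) Adversary step: guess the protected attribute from the logits.
    adv_loss = bce(adversary(predictor(X).detach()), protected)
    opt_a.zero_grad()
    adv_loss.backward()
    opt_a.step()

    # 2) Predictor step: fit the outcome AND make the adversary fail.
    logits = predictor(X)
    loss = bce(logits, y) - lam * bce(adversary(logits), protected)
    opt_p.zero_grad()
    loss.backward()
    opt_p.step()
```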
arXiv Detail & Related papers (2024-10-03T15:56:03Z)
- Generating Feasible and Plausible Counterfactual Explanations for Outcome Prediction of Business Processes [45.502284864662585]
We introduce a data-driven approach, REVISEDplus, to generate plausible counterfactual explanations.
First, we restrict the counterfactual algorithm to generate counterfactuals that lie within a high-density region of the process data.
We also ensure plausibility by learning sequential patterns between the activities in the process cases.
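As a rough illustration of the idea (the actual REVISEDplus algorithm differs), the sketch below accepts a counterfactual only if it both flips the model's prediction and stays in a high-density region of the training data, estimated with scikit-learn's KernelDensity. All data and thresholds are hypothetical.

```python
# Minimal sketch in the spirit of REVISEDplus (not its actual
# algorithm): a candidate counterfactual is accepted only if it flips
# the prediction AND lies in a high-density region of the training
# data, here estimated with a kernel density estimate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))              # hypothetical encoded cases
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # hypothetical binary outcome

clf = RandomForestClassifier(random_state=0).fit(X, y)
kde = KernelDensity(bandwidth=0.5).fit(X)
density_floor = np.quantile(kde.score_samples(X), 0.10)

def counterfactual(x, target, tries=3000, step=0.5):
    """Random-search counterfactual: flip the prediction, stay plausible."""
    best, best_dist = None, np.inf
    for _ in range(tries):
        cand = x + rng.normal(scale=step, size=x.shape)
        if clf.predict(cand[None])[0] != target:
            continue                        # prediction not flipped
        if kde.score_samples(cand[None])[0] < density_floor:
            continue                        # low density: implausible case
        dist = np.linalg.norm(cand - x)
        if dist < best_dist:
            best, best_dist = cand, dist
    return best

neg = X[y == 0]
x0 = neg[np.argmax(neg[:, 0] + neg[:, 1])]  # a case close to the boundary
print(counterfactual(x0, target=1))
```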
arXiv Detail & Related papers (2024-03-14T09:56:35Z)
- Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring [0.0]
This study explores how model uncertainty can be effectively communicated in global and local post-hoc explanation approaches.
By combining explanation and uncertainty communication, decision-makers can not only assess the plausibility of explanation-driven actionable insights but also validate their reliability.
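One plain way to obtain such uncertainty estimates, not necessarily the study's method, is to bootstrap the model and report the spread of each feature's SHAP attribution; a visual analytics front end could render these spreads as error bars. The sketch below uses hypothetical data and model settings.

```python
# Minimal sketch (hypothetical, not the paper's tool): quantify the
# uncertainty of a local explanation by bootstrapping the model and
# reporting the spread of each feature's SHAP attribution.
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
case = X[:1]                        # the running case to explain

attributions = []
for seed in range(20):              # bootstrap ensemble of models
    idx = rng.integers(0, len(X), len(X))
    model = xgboost.XGBClassifier(n_estimators=30, max_depth=2,
                                  random_state=seed).fit(X[idx], y[idx])
    attributions.append(shap.TreeExplainer(model).shap_values(case)[0])

attributions = np.array(attributions)
for j in range(X.shape[1]):
    print(f"feature {j}: {attributions[:, j].mean():+.3f} "
          f"± {attributions[:, j].std():.3f}")
```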
arXiv Detail & Related papers (2023-04-12T09:44:32Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining, from the infinitely many predictions the agent could possibly make, which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines, named X-MOP, for selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
To understand the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Explainable Predictive Process Monitoring: A User Evaluation [62.41400549499849]
Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
arXiv Detail & Related papers (2022-02-15T22:24:21Z)
- Explainable AI Enabled Inspection of Business Process Prediction Models [2.5229940062544496]
We present an approach that uses model explanations to investigate the reasoning behind machine-learned predictions.
A novel contribution of our approach is the proposal of model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution.
arXiv Detail & Related papers (2021-07-16T06:51:18Z)
- Evaluating Explainable Methods for Predictive Process Analytics: A Functionally-Grounded Approach [2.2448567386846916]
Predictive process analytics focuses on predicting the future states of running instances of a business process.
Current explainable machine learning methods, such as LIME and SHAP, can be used to interpret black box models.
We propose functionally grounded evaluation metrics and apply them to assess how well LIME and SHAP interpret process predictive models built on XGBoost.
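The paper's own metrics are not reproduced here, but a minimal, functionally grounded comparison can be sketched: compute LIME and SHAP attributions for the same case of an XGBoost model and measure their rank and sign agreement. The data, model settings, and agreement measures are assumptions.

```python
# Minimal sketch of a functionally grounded comparison (not the paper's
# metrics): do LIME and SHAP agree on which features matter for the
# same case of an XGBoost model, and in which direction?
import numpy as np
import shap
import xgboost
from lime.lime_tabular import LimeTabularExplainer
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)
model = xgboost.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)
case = X[0]

# SHAP attribution (exact for tree ensembles, in margin space).
shap_attr = shap.TreeExplainer(model).shap_values(X[:1])[0]

# LIME attribution (local linear surrogate around the case).
lime_exp = LimeTabularExplainer(X, mode="classification").explain_instance(
    case, model.predict_proba, num_features=X.shape[1])
lime_attr = np.zeros(X.shape[1])
for idx, weight in lime_exp.as_map()[1]:
    lime_attr[idx] = weight

rho, _ = spearmanr(shap_attr, lime_attr)
print(f"rank agreement (Spearman): {rho:.2f}")
print(f"sign agreement: {np.mean(np.sign(shap_attr) == np.sign(lime_attr)):.0%}")
```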
arXiv Detail & Related papers (2020-12-08T05:05:19Z)
- Explainable Predictive Process Monitoring [0.5564793925574796]
This paper tackles the problem of equipping predictive business process monitoring with explanation capabilities.
We use the game theory of Shapley Values to obtain robust explanations of the predictions.
The approach has been implemented and tested on real-life benchmarks, showing for the first time how explanations can be given in the field of predictive business process monitoring.
arXiv Detail & Related papers (2020-08-04T20:09:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.