Explainable AI Enabled Inspection of Business Process Prediction Models
- URL: http://arxiv.org/abs/2107.09767v1
- Date: Fri, 16 Jul 2021 06:51:18 GMT
- Title: Explainable AI Enabled Inspection of Business Process Prediction Models
- Authors: Chun Ouyang, Renuka Sindhgatta, Catarina Moreira
- Abstract summary: We present an approach that uses model explanations to investigate the reasoning applied by machine-learned prediction models.
A novel contribution of our approach is the proposal of a model inspection method that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution.
- Score: 2.5229940062544496
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Modern data analytics, underpinned by machine learning techniques, has become a key enabler of automated, data-led decision making. As an important branch of state-of-the-art data analytics, business process prediction also faces the challenge that the underlying `black-box' prediction models offer no explanation of their reasoning and decisions. With the development of interpretable machine learning techniques, explanations can be generated for a black-box model, making it possible for (human) users to access the reasoning behind machine-learned predictions. In this paper, we present an approach that uses model explanations to investigate the reasoning applied by machine-learned predictions and to detect potential issues with the underlying methods, thus enhancing trust in business process prediction models. A novel contribution of our approach is the proposal of model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution. Findings drawn from this work are expected to serve as a key input to developing model reliability metrics and evaluation in the context of business process predictions.
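The paper presents a methodology rather than code, but the core loop of the proposed inspection, generating explanations for a trained process prediction model and cross-checking the attributed features against domain knowledge extracted from the event log, can be sketched roughly as follows. This is a minimal sketch with synthetic data, assuming SHAP and XGBoost as stand-ins for the interpretable mechanism and the prediction model; the feature names and the expected-drivers check are hypothetical.

```python
# Minimal sketch: inspect a process prediction model by comparing
# SHAP feature attributions against domain knowledge from the event log.
# Feature names are hypothetical; synthetic data stands in for a real log.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
features = ["activity_repeats", "elapsed_time", "resource_changes", "case_priority"]
X = rng.random((500, len(features)))
y = (X[:, 1] + 0.5 * X[:, 0] > 1.0).astype(int)  # synthetic outcome labels

model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

# Rank features by mean absolute SHAP value (global attribution).
shap_values = shap.TreeExplainer(model).shap_values(X)
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]

# Domain knowledge (hypothetical): drivers known from historical executions.
expected_drivers = {"elapsed_time", "activity_repeats"}
top_features = {features[i] for i in ranking[:2]}
if expected_drivers <= top_features:
    print("Top attributions agree with domain knowledge:", top_features)
else:
    print("Inspection flag: model relies on unexpected features:",
          top_features - expected_drivers)
```

A disagreement between the attribution ranking and the domain knowledge is exactly the kind of signal the paper proposes to surface as a potential reliability issue.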
Related papers
- Attention Please: What Transformer Models Really Learn for Process Prediction [0.0]
This paper examines whether the attention scores of a transformer based next-activity prediction model can serve as an explanation for its decision-making.
We find that attention scores in next-activity prediction models can serve as explainers and exploit this fact in two proposed graph-based explanation approaches.
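As a rough illustration of the signal this work builds on (not the authors' architecture), the snippet below reads the attention weights of a toy transformer-style next-activity predictor; the activity vocabulary and model are hypothetical.

```python
# Minimal sketch: read attention scores from a toy next-activity model.
import torch
import torch.nn as nn

VOCAB = ["start", "check", "approve", "reject", "end"]  # toy activities

embed = nn.Embedding(len(VOCAB), 16)
attn = nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)
head = nn.Linear(16, len(VOCAB))

prefix = torch.tensor([[0, 1, 2]])   # one case prefix: start, check, approve
x = embed(prefix)
out, weights = attn(x, x, x, need_weights=True, average_attn_weights=True)
logits = head(out[:, -1])            # predict the next activity

# weights[0, -1] shows how much each earlier event contributed to the
# representation used for the prediction -- the candidate explanation.
for pos, w in enumerate(weights[0, -1].tolist()):
    print(f"attention on '{VOCAB[prefix[0, pos].item()]}': {w:.2f}")
print("predicted next activity:", VOCAB[logits.argmax(dim=-1).item()])
```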
arXiv Detail & Related papers (2024-08-12T08:20:38Z)
- Generating Feasible and Plausible Counterfactual Explanations for Outcome Prediction of Business Processes [45.502284864662585]
We introduce a data-driven approach, REVISEDplus, to generate plausible counterfactual explanations.
First, we restrict the counterfactual algorithm to generate counterfactuals that lie within a high-density region of the process data.
We also ensure plausibility by learning sequential patterns between the activities in the process cases.
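REVISEDplus itself is more involved, but its plausibility filter, accepting only counterfactual candidates that lie in a high-density region of the process data and follow activity orderings seen in the log, can be sketched roughly as below. The density threshold and the directly-follows check are simplified stand-ins, assuming scikit-learn.

```python
# Minimal sketch: filter counterfactual candidates by data density and
# by activity orderings observed in historical cases.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
X_log = rng.normal(size=(300, 4))    # encoded historical cases (synthetic)
kde = KernelDensity(bandwidth=0.5).fit(X_log)
density_floor = np.quantile(kde.score_samples(X_log), 0.10)

# Directly-follows pairs seen in the log stand in for sequential patterns.
observed_pairs = {("register", "check"), ("check", "approve"),
                  ("check", "reject")}

def is_plausible(candidate_vec, candidate_trace):
    """Accept a counterfactual only if it is in a high-density region and
    its activity sequence uses transitions seen in the event log."""
    dense = kde.score_samples(candidate_vec[None])[0] >= density_floor
    ordered = all(p in observed_pairs
                  for p in zip(candidate_trace, candidate_trace[1:]))
    return dense and ordered

print(is_plausible(X_log[0], ["register", "check", "approve"]))    # likely True
print(is_plausible(rng.normal(5, 1, 4), ["approve", "register"]))  # False
```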
arXiv Detail & Related papers (2024-03-14T09:56:35Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines, named X-MOP, that allow selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
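As a hedged illustration of the explanation artifact the participants were shown (invented toy data, not the study's materials), the coefficients of a linear bag-of-words classifier can be surfaced like this:

```python
# Minimal sketch: per-word coefficients of a linear bag-of-words
# review classifier -- the explanation participants could inspect.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = ["lovely quiet room near the beach",
           "great staff and a lovely pool",
           "amazing perfect wonderful best hotel ever",
           "perfect amazing best stay"]
labels = [0, 0, 1, 1]  # 0 = genuine, 1 = fake (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

# Each word's coefficient is its pull toward 'fake' (+) or 'genuine' (-).
for word, coef in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                         key=lambda t: -abs(t[1]))[:5]:
    print(f"{word:>10s}  {coef:+.2f}")
```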
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Building Interpretable Models for Business Process Prediction using Shared and Specialised Attention Mechanisms [5.607831842909669]
We address the "black-box" problem in predictive process analytics by building interpretable models.
We propose two types of attention: event attention to capture the impact of specific process events on a prediction, and attribute attention to reveal which attribute(s) of an event influenced the prediction.
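A minimal sketch of the two attention types (a toy module, not the authors' shared/specialised architecture) might look like this:

```python
# Minimal sketch: attribute attention within each event, then event
# attention across the case prefix. Toy illustration only.
import torch
import torch.nn as nn

class TwoLevelAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.attr_score = nn.Linear(d, 1)   # scores attributes of an event
        self.event_score = nn.Linear(d, 1)  # scores events in the prefix

    def forward(self, x):                   # x: (events, attributes, d)
        a = torch.softmax(self.attr_score(x), dim=1)        # attribute attn
        events = (a * x).sum(dim=1)                         # (events, d)
        e = torch.softmax(self.event_score(events), dim=0)  # event attn
        case_vec = (e * events).sum(dim=0)                  # (d,)
        return case_vec, a.squeeze(-1), e.squeeze(-1)

m = TwoLevelAttention(d=8)
case, attr_w, event_w = m(torch.randn(4, 3, 8))  # 4 events, 3 attributes
print("event weights:", event_w.detach().numpy().round(2))
print("attribute weights per event:", attr_w.detach().numpy().round(2))
```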
arXiv Detail & Related papers (2021-09-03T10:17:05Z)
- Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a toolbox for interpretability and reliability that is agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an extrapolation score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
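The basic second-order ingredient behind such influence and uncertainty estimates, a Hessian-vector product computed by double backpropagation, can be sketched as follows (a generic PyTorch illustration, not the toolbox's API):

```python
# Minimal sketch: Hessian-vector product via double backprop,
# the building block of Hessian-based influence estimates.
import torch

w = torch.randn(5, requires_grad=True)
x = torch.randn(20, 5)
y = torch.randn(20)

loss = ((x @ w - y) ** 2).mean()
grad = torch.autograd.grad(loss, w, create_graph=True)[0]

v = torch.randn(5)                         # direction vector
hvp = torch.autograd.grad(grad @ v, w)[0]  # H @ v without forming H

# Check against the closed-form Hessian of mean squared error, 2/n * X^T X.
H = 2 * x.T @ x / len(x)
print(torch.allclose(hvp, H @ v, atol=1e-4))
```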
arXiv Detail & Related papers (2021-08-04T16:32:59Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
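The optimization idea can be sketched as follows: learn several latent perturbations that flip a classifier's prediction, while proximity and diversity terms keep them small and distinct. The frozen "decoder" and classifier below are random stand-ins for trained models, not the paper's networks.

```python
# Minimal sketch: optimise K latent perturbations so predictions flip,
# with proximity and diversity terms. Stand-in models, toy scale.
import torch
import torch.nn as nn

torch.manual_seed(0)
decoder = nn.Linear(8, 32)   # stand-in for a trained generative decoder
clf = nn.Linear(32, 1)       # stand-in for a trained binary classifier
for p in list(decoder.parameters()) + list(clf.parameters()):
    p.requires_grad_(False)

z = torch.randn(8)                                # latent code of the input
deltas = (0.01 * torch.randn(4, 8)).requires_grad_()  # K = 4 perturbations
opt = torch.optim.Adam([deltas], lr=0.1)

for _ in range(200):
    logits = clf(decoder(z + deltas)).squeeze(-1)
    flip = torch.relu(logits).mean()          # push predictions negative
    proximity = deltas.norm(dim=1).mean()     # stay close to the input
    diversity = torch.pdist(deltas).mean()    # keep perturbations apart
    loss = flip + 0.1 * proximity - 0.05 * diversity
    opt.zero_grad(); loss.backward(); opt.step()

print("final logits:", clf(decoder(z + deltas)).squeeze(-1).detach())
```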
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Evaluating Explainable Methods for Predictive Process Analytics: A Functionally-Grounded Approach [2.2448567386846916]
Predictive process analytics focuses on predicting the future states of running instances of a business process.
Current explainable machine learning methods, such as LIME and SHAP, can be used to interpret black box models.
We apply the proposed metrics to evaluate the performance of LIME and SHAP in interpreting process predictive models built on XGBoost.
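As a sketch of what such an evaluation involves, the snippet below explains the same XGBoost prediction with both SHAP and LIME and probes one simple functionally-grounded property, the stability of LIME's top features across runs; this metric is a simplified stand-in for those proposed in the paper.

```python
# Minimal sketch: explain one XGBoost prediction with SHAP and LIME,
# then probe LIME's stability across seeds (toy stand-in metric).
import numpy as np
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
names = [f"f{i}" for i in range(5)]
X = rng.random((400, 5))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)
model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

# SHAP attribution for a single instance.
shap_vals = shap.TreeExplainer(model).shap_values(X[:1])
print("SHAP:", dict(zip(names, np.round(shap_vals[0], 3))))

# LIME run with two seeds on the same instance: do top features agree?
def lime_top(seed):
    exp = LimeTabularExplainer(X, feature_names=names, mode="classification",
                               random_state=seed)
    e = exp.explain_instance(X[0], model.predict_proba, num_features=3)
    return [feat for feat, _ in e.as_list()]

print("LIME stable across seeds:", lime_top(1) == lime_top(2))
```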
arXiv Detail & Related papers (2020-12-08T05:05:19Z)
- Forethought and Hindsight in Credit Assignment [62.05690959741223]
We seek to understand the gains and peculiarities of planning employed as forethought (via forward models) or as hindsight (operating with backward models).
We investigate the best use of models in planning, primarily focusing on the selection of states in which predictions should be (re)-evaluated.
arXiv Detail & Related papers (2020-10-26T16:00:47Z)
- Local Post-Hoc Explanations for Predictive Process Monitoring in Manufacturing [0.0]
This study proposes an innovative explainable predictive quality analytics solution to facilitate data-driven decision-making in manufacturing.
It combines process mining, machine learning, and explainable artificial intelligence (XAI) methods.
arXiv Detail & Related papers (2020-09-22T13:07:17Z)