Explainable Artificial Intelligence for Process Mining: A General
Overview and Application of a Novel Local Explanation Approach for Predictive
Process Monitoring
- URL: http://arxiv.org/abs/2009.02098v2
- Date: Sat, 12 Sep 2020 07:24:36 GMT
- Authors: Nijat Mehdiyev and Peter Fettke
- Abstract summary: This study proposes a conceptual framework that seeks to establish and promote understanding of the decision-making environment.
This study defines the local regions from the validation dataset by using the intermediate latent space representations.
The adopted deep learning classifier achieves a good performance with the Area Under the ROC Curve of 0.94.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contemporary process-aware information systems possess the capabilities
to record the activities generated during process execution. To leverage
these process-specific, fine-granular data, process mining has recently emerged
as a promising research discipline. As an important branch of process mining,
predictive business process management pursues the objective of generating
forward-looking, predictive insights to shape business processes. In this
study, we propose a conceptual framework that seeks to establish and promote
understanding of the decision-making environment, the underlying business
processes, and the nature of the user characteristics for developing explainable
business process prediction solutions. Consequently, with regard to the theoretical and
practical implications of the framework, this study proposes a novel local
post-hoc explanation approach for a deep learning classifier that is expected
to facilitate domain experts in justifying the model decisions. In contrast
to popular perturbation-based local explanation approaches, this
study defines the local regions from the validation dataset by using the
intermediate latent space representations learned by the deep neural networks.
To validate the applicability of the proposed explanation method, the real-life
process log data delivered by Volvo IT Belgium's incident management system
are used. The adopted deep learning classifier achieves good performance with
the Area Under the ROC Curve of 0.94. The generated local explanations are also
visualized and presented with relevant evaluation measures that are expected to
increase users' trust in the black-box model.
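The abstract does not spell out the latent-space localization as code; the following is a minimal, hypothetical sketch of the general idea only (cluster intermediate representations of the validation set, then fit an interpretable surrogate on the instance's local region). Every model choice, dimension, and name here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy stand-ins: validation features and the black-box model's predictions.
X_val = rng.normal(size=(300, 8))
y_pred = (X_val[:, 0] + X_val[:, 1] > 0).astype(int)  # pretend black-box output

# Stand-in for an intermediate latent representation learned by the deep net
# (here a fixed random projection; in the paper this would be hidden-layer
# activations of the trained classifier).
W = rng.normal(size=(8, 3))
Z_val = np.tanh(X_val @ W)

# 1) Define local regions by clustering the latent representations.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Z_val)

# 2) For the instance to explain, find its latent cluster ...
x = X_val[0]
region = km.predict(np.tanh(x[None, :] @ W))[0]
mask = km.labels_ == region

# 3) ... and fit an interpretable surrogate on that region only.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_val[mask], y_pred[mask])

# Local fidelity: how well the surrogate mimics the black box in the region.
fidelity = surrogate.score(X_val[mask], y_pred[mask])
```

The key difference from perturbation-based methods such as LIME is that the neighborhood comes from real validation instances that are close in the learned latent space, rather than from synthetic perturbations around the instance.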
Related papers
- WISE: Unraveling Business Process Metrics with Domain Knowledge
Anomalies in complex industrial processes are often obscured by high variability and complexity of event data.
We introduce WISE, a novel method for analyzing business process metrics through the integration of domain knowledge, process mining, and machine learning.
We show that WISE enhances automation in business process analysis and effectively detects deviations from desired process flows.
arXiv Detail & Related papers (2024-10-06T07:57:08Z)
- Generating Feasible and Plausible Counterfactual Explanations for Outcome Prediction of Business Processes
We introduce a data-driven approach, REVISEDplus, to generate plausible counterfactual explanations.
First, we restrict the counterfactual algorithm to generate counterfactuals that lie within a high-density region of the process data.
We also ensure plausibility by learning sequential patterns between the activities in the process cases.
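REVISEDplus itself is not detailed in this summary; purely as an illustration, the high-density restriction can be sketched as below, with every model, threshold, and parameter an assumption of the sketch (a kernel density estimate supplies a plausibility floor; the sequential-pattern learning mentioned above is omitted).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X, y)          # toy outcome predictor
kde = KernelDensity(bandwidth=0.5).fit(X)     # density model of the process data
density_floor = np.quantile(kde.score_samples(X), 0.1)  # plausibility threshold

x0 = np.array([-1.0, 1.0])  # factual instance, predicted as class 0

# Random search: keep only candidates that flip the prediction AND lie in a
# high-density region, then pick the one closest to the factual instance.
candidates = x0 + rng.normal(scale=1.5, size=(2000, 2))
ok = (clf.predict(candidates) == 1) & (kde.score_samples(candidates) >= density_floor)
feasible = candidates[ok]
cf = feasible[np.argmin(np.linalg.norm(feasible - x0, axis=1))]
```

The density floor is what rules out "impossible" counterfactuals that a plain distance-minimizing search would happily return.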
arXiv Detail & Related papers (2024-03-14T09:56:35Z)
- Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation
In this paper, we propose an inherently interpretable method, named Transferable Prototype Learning (TCPL).
To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process.
Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-12T06:36:41Z)
- Quantifying and Explaining Machine Learning Uncertainty in Predictive Process Monitoring: An Operations Research Perspective
This paper introduces a comprehensive, multi-stage machine learning methodology that integrates information systems and artificial intelligence.
The proposed framework adeptly addresses common limitations of existing solutions, such as the neglect of data-driven estimation.
Our approach employs Quantile Regression Forests for generating interval predictions, alongside both local and global variants of SHapley Additive Explanations.
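Interval prediction of the kind mentioned above can be sketched as follows. This is an assumption-laden toy, not the paper's pipeline: it uses gradient boosting with a quantile loss as a convenient stand-in for Quantile Regression Forests, and omits the SHAP explanation step entirely.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=500)

# One model per quantile yields a (roughly) 90% prediction interval.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                 n_estimators=100, random_state=0).fit(X, y)
    for q in (0.05, 0.95)
}

X_new = np.array([[2.0], [5.0]])
lo = models[0.05].predict(X_new)
hi = models[0.95].predict(X_new)
interval = np.c_[lo, hi]  # lower/upper bound per new instance
```

Reporting an interval rather than a point estimate is what exposes the model's uncertainty to the decision maker; the local/global SHAP variants would then explain which features drive each prediction.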
arXiv Detail & Related papers (2023-04-13T11:18:22Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Prescriptive Process Monitoring: Quo Vadis?
The paper studies existing methods in this field via a Systematic Literature Review (SLR).
The SLR provides insights into challenges and areas for future research that could enhance the usefulness and applicability of prescriptive process monitoring methods.
arXiv Detail & Related papers (2021-12-03T08:06:24Z)
- Explainable AI Enabled Inspection of Business Process Prediction Models
We present an approach that uses model explanations to investigate the reasoning behind machine-learned predictions.
A novel contribution of our approach is the proposal of model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution.
arXiv Detail & Related papers (2021-07-16T06:51:18Z)
- Local Post-Hoc Explanations for Predictive Process Monitoring in Manufacturing
This study proposes an innovative explainable predictive quality analytics solution to facilitate data-driven decision-making in manufacturing.
It combines process mining, machine learning, and explainable artificial intelligence (XAI) methods.
arXiv Detail & Related papers (2020-09-22T13:07:17Z)