Counterfactual Explanations for Predictive Business Process Monitoring
- URL: http://arxiv.org/abs/2202.12018v1
- Date: Thu, 24 Feb 2022 11:01:20 GMT
- Title: Counterfactual Explanations for Predictive Business Process Monitoring
- Authors: Tsung-Hao Huang, Andreas Metzger, Klaus Pohl
- Abstract summary: We propose LORELEY, a counterfactual explanation technique for predictive process monitoring.
LORELEY can approximate prediction models with an average fidelity of 97.69% and generate realistic counterfactual explanations.
- Score: 0.90238471756546
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Predictive business process monitoring increasingly leverages sophisticated
prediction models. Although sophisticated models achieve consistently higher
prediction accuracy than simple models, one major drawback is their lack of
interpretability, which limits their adoption in practice. We thus see growing
interest in explainable predictive business process monitoring, which aims to
increase the interpretability of prediction models. Existing solutions focus on
giving factual explanations. While factual explanations can be helpful, humans
typically do not ask why a particular prediction was made, but rather why it
was made instead of another prediction, i.e., humans are interested in
counterfactual explanations. While research in explainable AI produced several
promising techniques to generate counterfactual explanations, directly applying
them to predictive process monitoring may deliver unrealistic explanations,
because they ignore the underlying process constraints. We propose LORELEY, a
counterfactual explanation technique for predictive process monitoring, which
extends LORE, a recent explainable AI technique. We impose control flow
constraints to the explanation generation process to ensure realistic
counterfactual explanations. Moreover, we extend LORE to enable explaining
multi-class classification models. Experimental results using a real, public
dataset indicate that LORELEY can approximate the prediction models with an
average fidelity of 97.69% and generate realistic counterfactual explanations.
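As context for the approach above: LORE-style techniques generate a local neighborhood of synthetic instances around the instance to be explained, label that neighborhood with the black-box model, and fit an interpretable surrogate (typically a decision tree) from which factual and counterfactual rules are read off; LORELEY additionally restricts the neighborhood to traces that satisfy the process's control-flow constraints. The sketch below is a minimal illustration of that idea, not the authors' implementation: the activity alphabet, the allowed transitions, and the predict/encode callables are hypothetical placeholders (encode is assumed to map a trace to a fixed-length numeric vector).

```python
import random

from sklearn.tree import DecisionTreeClassifier

# Hypothetical activity alphabet and allowed direct-follows transitions.
ACTIVITIES = ["register", "check", "approve", "reject", "notify"]
ALLOWED_FOLLOWS = {
    ("register", "check"), ("check", "approve"), ("check", "reject"),
    ("approve", "notify"), ("reject", "notify"),
}

def is_valid_trace(trace):
    """Control-flow constraint: every consecutive activity pair must be an allowed transition."""
    return all((a, b) in ALLOWED_FOLLOWS for a, b in zip(trace, trace[1:]))

def perturb(trace):
    """Randomly replace one activity, a crude stand-in for LORE's genetic neighborhood generation."""
    i = random.randrange(len(trace))
    mutated = list(trace)
    mutated[i] = random.choice(ACTIVITIES)
    return mutated

def local_surrogate(trace, predict, encode, n_samples=1000):
    """Fit an interpretable surrogate on a process-conformant neighborhood of `trace`."""
    # 1. Sample a local neighborhood and keep only traces satisfying the control-flow constraints.
    neighborhood = [t for t in (perturb(trace) for _ in range(n_samples)) if is_valid_trace(t)]
    neighborhood.append(list(trace))
    # 2. Label the neighborhood with the (possibly multi-class) black-box prediction model.
    X = [encode(t) for t in neighborhood]
    y = [predict(t) for t in neighborhood]
    # 3. Fit a shallow decision tree; paths leading to classes other than predict(trace)
    #    can be read off as counterfactual rules.
    return DecisionTreeClassifier(max_depth=4).fit(X, y)
```

Fidelity, as reported in the abstract, is conventionally measured as the fraction of neighborhood instances on which such a surrogate agrees with the black-box model.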
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast natural-language explanations (NLEs) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- Counterfactual Explanations for Deep Learning-Based Traffic Forecasting [42.31238891397725]
This study aims to leverage an Explainable AI approach, counterfactual explanations, to enhance the explainability and usability of deep learning-based traffic forecasting models.
The study first implements a deep learning model to predict traffic speed based on historical traffic data and contextual variables.
Counterfactual explanations are then used to illuminate how alterations in these input variables affect predicted outcomes.
arXiv Detail & Related papers (2024-05-01T11:26:31Z)
- Generating Feasible and Plausible Counterfactual Explanations for Outcome Prediction of Business Processes [45.502284864662585]
We introduce a data-driven approach, REVISEDplus, to generate plausible counterfactual explanations.
First, we restrict the counterfactual algorithm to generate counterfactuals that lie within a high-density region of the process data.
We also ensure plausibility by learning sequential patterns between the activities in the process cases; a rough sketch of both checks follows this entry.
arXiv Detail & Related papers (2024-03-14T09:56:35Z)
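The two plausibility mechanisms summarised in the entry above, keeping counterfactuals in a high-density region of the process data and respecting sequential patterns between activities, could be approximated roughly as follows. This is an assumption-laden sketch rather than the REVISEDplus implementation; encode, the density model (anything exposing a scikit-learn-style score_samples method, such as KernelDensity), and the threshold are placeholders.

```python
def build_follows(cases):
    """Collect the activity pairs that actually occur consecutively in historical cases."""
    return {(a, b) for case in cases for a, b in zip(case, case[1:])}

def is_plausible(candidate, encode, density_model, observed_follows, density_threshold=-10.0):
    """Accept a counterfactual trace only if it is both typical and sequentially consistent."""
    # Check 1: the candidate lies in a high-density region of the process data,
    # measured as log-likelihood under a density estimator fitted on encoded historical cases.
    in_dense_region = density_model.score_samples([encode(candidate)])[0] >= density_threshold
    # Check 2: every consecutive activity pair in the candidate has been observed in real cases.
    sequentially_valid = all((a, b) in observed_follows for a, b in zip(candidate, candidate[1:]))
    return in_dense_region and sequentially_valid
```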
- Counterfactuals of Counterfactuals: a back-translation-inspired approach to analyse counterfactual editors [3.4253416336476246]
We focus on the analysis of counterfactual, contrastive explanations.
We propose a new back-translation-inspired evaluation methodology.
We show that by iteratively feeding the counterfactual to the explainer we can obtain valuable insights into the behaviour of both the predictor and the explainer models (see the sketch after this entry).
arXiv Detail & Related papers (2023-05-26T16:04:28Z)
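The back-translation-inspired loop from the entry above, repeatedly handing the explainer's counterfactual back to it and observing how the predictions evolve, might be operationalised along these lines; predict, explain, and distance are hypothetical stand-ins, not the paper's interfaces.

```python
def back_translation_probe(x, predict, explain, n_rounds=5, distance=None):
    """Iteratively re-explain the counterfactual from the previous round and record the trajectory."""
    trajectory = [(x, predict(x))]
    current = x
    for _ in range(n_rounds):
        counterfactual = explain(current)            # the explainer proposes a minimally edited input
        trajectory.append((counterfactual, predict(counterfactual)))
        if distance is not None and distance(current, counterfactual) == 0:
            break                                    # the explainer has reached a fixed point
        current = counterfactual
    # Label flips and drift along the trajectory characterise both predictor and explainer.
    return trajectory
```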
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models given only a few examples exhibit strong prediction bias across labels.
Although few-shot fine-tuning can mitigate this prediction bias, our analysis shows that models gain their performance improvement by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
We define explainability in the field of process outcome prediction through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Training Deep Models to be Explained with Fewer Examples [40.58343220792933]
We train prediction and explanation models simultaneously with a sparse regularizer for reducing the number of examples.
Experiments using several datasets demonstrate that the proposed method improves faithfulness while maintaining predictive performance.
arXiv Detail & Related papers (2021-12-07T05:39:21Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality, valuable explanations compared to previous state-of-the-art methods; a generic version of such a diversity term is sketched after this entry.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
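A diversity-enforcing loss of the kind mentioned in the entry above can be illustrated with a generic pairwise-similarity penalty over a batch of latent perturbations. This PyTorch sketch is an assumed, generic formulation, not the paper's specific objective.

```python
import torch

def diversity_loss(perturbations):
    """Penalise latent perturbations that collapse onto each other.

    perturbations: tensor of shape (k, d) holding k >= 2 candidate edits in a latent space.
    Minimising the mean pairwise cosine similarity pushes the k edits apart, so the
    counterfactual search is encouraged to return diverse explanations.
    """
    z = torch.nn.functional.normalize(perturbations, dim=1)
    similarity = z @ z.t()                                    # (k, k) cosine similarities
    off_diagonal = similarity - torch.diag(torch.diag(similarity))  # drop self-similarity
    k = perturbations.shape[0]
    return off_diagonal.sum() / (k * (k - 1))
```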
- Model extraction from counterfactual explanations [68.8204255655161]
We show how an adversary can leverage the information provided by counterfactual explanations to build high-fidelity and high-accuracy model extraction attacks.
Our attack enables the adversary to build a faithful copy of a target model by accessing its counterfactual explanations (see the sketch after this entry).
arXiv Detail & Related papers (2020-09-03T19:02:55Z)
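The extraction idea from the entry above, using returned counterfactuals, which lie close to the decision boundary, as additional labelled queries for training a surrogate copy, can be sketched as follows; query_predict and query_counterfactual are hypothetical stand-ins for the target model's prediction and counterfactual-explanation endpoints.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_model(query_predict, query_counterfactual, n_queries=200, dim=10, seed=0):
    """Fit a surrogate copy using ordinary predictions plus returned counterfactuals.

    Counterfactuals sit near the target's decision boundary and are therefore
    especially informative training points for the copy.
    """
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n_queries):
        x = rng.normal(size=dim)
        X.append(x)
        y.append(query_predict(x))
        cf = query_counterfactual(x)      # the same input, minimally changed to flip the prediction
        X.append(cf)
        y.append(query_predict(cf))
    return LogisticRegression().fit(np.array(X), np.array(y))
```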
- Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning [9.279259759707996]
Causal approaches to post-hoc explainability for black-box prediction models have become increasingly popular.
We learn causal graphical representations that allow for arbitrary unmeasured confounding among features.
Our approach is motivated by a counterfactual theory of causal explanation wherein good explanations point to factors that are "difference-makers" in an interventionist sense.
arXiv Detail & Related papers (2020-06-03T19:02:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.