Automatically Reconciling the Trade-off between Prediction Accuracy and
Earliness in Prescriptive Business Process Monitoring
- URL: http://arxiv.org/abs/2307.05939v1
- Date: Wed, 12 Jul 2023 06:07:53 GMT
- Title: Automatically Reconciling the Trade-off between Prediction Accuracy and
Earliness in Prescriptive Business Process Monitoring
- Authors: Andreas Metzger, Tristan Kley, Aristide Rothweiler, Klaus Pohl
- Abstract summary: We focus on the problem of automatically reconciling the trade-off between prediction accuracy and prediction earliness. Different approaches have been presented in the literature to reconcile this trade-off. We perform a comparative evaluation of the main alternative approaches.
- Score: 0.802904964931021
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Prescriptive business process monitoring provides decision support to process
managers on when and how to adapt an ongoing business process to prevent or
mitigate an undesired process outcome. We focus on the problem of automatically
reconciling the trade-off between prediction accuracy and prediction earliness
in determining when to adapt. Adaptations should happen sufficiently early to
provide enough lead time for the adaptation to become effective. However,
earlier predictions are typically less accurate than later predictions. This
means that acting on less accurate predictions may lead to unnecessary
adaptations or missed adaptations.
Different approaches have been presented in the literature to reconcile the
trade-off between prediction accuracy and earliness. So far, these approaches
have been compared against different baselines and evaluated using different,
or even confidential, data sets. This limits the comparability and replicability
of the approaches and makes it difficult to choose a concrete approach in
practice.
We perform a comparative evaluation of the main alternative approaches for
reconciling the trade-off between prediction accuracy and earliness. Using four
public real-world event log data sets and two types of prediction models, we
assess and compare the cost savings of these approaches. The experimental
results indicate which criteria affect the effectiveness of an approach and
help us state initial recommendations for the selection of a concrete approach
in practice.
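To make the trade-off concrete, below is a minimal sketch in Python of one simple family of such approaches: a fixed probability-threshold trigger over the predictions made after each executed event, combined with an expected-cost calculation per case. The function names, thresholds, effectiveness values, and cost figures are illustrative assumptions for exposition, not the concrete approaches or cost model evaluated in the paper.

```python
from typing import Optional, Sequence


def when_to_adapt(prefix_probs: Sequence[float], threshold: float) -> Optional[int]:
    """Return the first prefix index at which the predicted probability of an
    undesired outcome reaches the threshold; None means the case is never adapted."""
    for i, p in enumerate(prefix_probs):
        if p >= threshold:
            return i
    return None


def expected_case_cost(adapt_at: Optional[int], outcome_bad: bool,
                       cost_adapt: float, cost_outcome: float,
                       effectiveness: float) -> float:
    """Expected cost of one case: an unnecessary adaptation wastes cost_adapt,
    a missed adaptation incurs cost_outcome. 'effectiveness' is the assumed
    probability that the adaptation prevents the undesired outcome; in practice
    it shrinks when the adaptation is triggered late (less lead time)."""
    if adapt_at is None:
        return cost_outcome if outcome_bad else 0.0
    cost = cost_adapt
    if outcome_bad:
        cost += (1.0 - effectiveness) * cost_outcome
    return cost


# Toy example: predictions after each executed event of one ongoing case.
probs = [0.35, 0.55, 0.80, 0.95]
early = when_to_adapt(probs, threshold=0.5)  # fires at index 1: early, less accurate
late = when_to_adapt(probs, threshold=0.9)   # fires at index 3: late, more accurate
print(expected_case_cost(early, True, cost_adapt=10.0, cost_outcome=100.0, effectiveness=0.8))  # 30.0
print(expected_case_cost(late, True, cost_adapt=10.0, cost_outcome=100.0, effectiveness=0.3))   # 80.0
```

Lowering the threshold fires earlier and leaves more lead time, but acting on a less accurate prediction risks unnecessary adaptations for cases that would have ended well; raising it avoids false alarms but risks missed or ineffective adaptations. The approaches compared in the paper differ in how they automate this choice.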
Related papers
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z) - Inference-Time Selective Debiasing [27.578390085427156]
We propose selective debiasing -- an inference-time safety mechanism that aims to increase the overall quality of models.
We identify the potentially biased model predictions and, instead of discarding them, we debias them using LEACE -- a post-processing debiasing method.
Experiments with text classification datasets demonstrate that selective debiasing helps to close the performance gap between post-processing methods and at-training and pre-processing debiasing techniques.
arXiv Detail & Related papers (2024-07-27T21:56:23Z) - Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation
of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z) - Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Calibrated Selective Classification [34.08454890436067]
We develop a new approach to selective classification in which we propose a method for rejecting examples with "uncertain" uncertainties.
We present a framework for learning selectively calibrated models, where a separate selector network is trained to improve the selective calibration error of a given base model.
We demonstrate the empirical effectiveness of our approach on multiple image classification and lung cancer risk assessment tasks.
arXiv Detail & Related papers (2022-08-25T13:31:09Z) - How to Evaluate Uncertainty Estimates in Machine Learning for
Regression? [1.4610038284393165]
We show that both approaches to evaluating the quality of uncertainty estimates have serious flaws.
Firstly, neither approach can disentangle the separate components that jointly create the predictive uncertainty.
Further, the current approach of testing prediction intervals directly has additional flaws.
arXiv Detail & Related papers (2021-06-07T07:47:46Z) - Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z) - Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.