Can I Trust My Simulation Model? Measuring the Quality of Business
Process Simulation Models
- URL: http://arxiv.org/abs/2303.17463v1
- Date: Thu, 30 Mar 2023 15:40:26 GMT
- Title: Can I Trust My Simulation Model? Measuring the Quality of Business
Process Simulation Models
- Authors: David Chapela-Campa, Ismail Benchekroun, Opher Baron, Marlon Dumas,
Dmitry Krass, Arik Senderovich
- Abstract summary: Business Process Simulation (BPS) is an approach to analyze the performance of business processes under different scenarios.
We propose a collection of measures to evaluate the quality of a BPS model.
- Score: 1.4027589547318842
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Business Process Simulation (BPS) is an approach to analyze the performance
of business processes under different scenarios. For example, BPS allows us to
estimate what would be the cycle time of a process if one or more resources
became unavailable. The starting point of BPS is a process model annotated with
simulation parameters (a BPS model). BPS models may be manually designed, based
on information collected from stakeholders and empirical observations, or
automatically discovered from execution data. Regardless of its origin, a key
question when using a BPS model is how to assess its quality. In this paper, we
propose a collection of measures to evaluate the quality of a BPS model w.r.t.
its ability to replicate the observed behavior of the process. We advocate an
approach whereby different measures tackle different process perspectives. We
evaluate the ability of the proposed measures to discern the impact of
modifications to a BPS model, and their ability to uncover the relative
strengths and weaknesses of two approaches for automated discovery of BPS
models. The evaluation shows that the measures not only capture how close a BPS
model is to the observed behavior, but they also help us to identify sources of
discrepancies.
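As an illustrative sketch of one such perspective-specific measure (the function names and toy log format below are assumptions for illustration, not the authors' implementation), a temporal measure can compare the cycle-time distributions of the observed and simulated logs using the 1-D earth mover's distance:

```python
def cycle_times(log):
    """Cycle time of each case: end of its last event minus start of its first.
    A case is modeled here as a list of (start, end) timestamps."""
    return [case[-1][1] - case[0][0] for case in log]

def emd_1d(a, b):
    """1-D earth mover's distance between two equal-sized samples:
    average absolute difference between sorted values."""
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Toy logs: timestamps in hours. Real cycle times: [5, 8]; simulated: [6, 9].
real_log = [[(0, 2), (2, 5)], [(1, 3), (4, 9)]]
sim_log  = [[(0, 2), (2, 6)], [(1, 3), (5, 10)]]

print(emd_1d(cycle_times(real_log), cycle_times(sim_log)))  # 1.0
```

A score of 0 would mean the simulated cycle-time distribution matches the observed one exactly; larger values quantify the temporal discrepancy.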
Related papers
- Generative Discrete Event Process Simulation for Hidden Markov Models to Predict Competitor Time-to-Market [0.0]
We show how Firm A can build a model that predicts when Firm B will be ready to sell its product.
We study the question of how many resource observations Firm A requires in order to accurately assess the current state of development at Firm B.
arXiv Detail & Related papers (2024-11-06T21:17:38Z)
- Discovery and Simulation of Data-Aware Business Processes [0.28675177318965045]
This paper introduces a data-aware BPS modeling approach and a method to discover data-aware BPS models from event logs.
The resulting BPS models more closely replicate the process execution control flow relative to data-unaware BPS models.
arXiv Detail & Related papers (2024-08-24T20:13:00Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- AgentSimulator: An Agent-based Approach for Data-driven Business Process Simulation [6.590869939300887]
Business process simulation (BPS) is a versatile technique for estimating process performance across various scenarios.
This paper introduces AgentSimulator, a resource-first BPS approach that discovers a multi-agent system from an event log.
Our experiments show that AgentSimulator achieves state-of-the-art simulation accuracy with significantly lower computation times than existing approaches.
arXiv Detail & Related papers (2024-08-16T07:19:11Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Gaussian Process Probes (GPP) for Uncertainty-Aware Probing [61.91898698128994]
We introduce a unified and simple framework for probing and measuring uncertainty about concepts represented by models.
Our experiments show it can (1) probe a model's representations of concepts even with a very small number of examples, (2) accurately measure both epistemic uncertainty (how confident the probe is) and aleatory uncertainty (how fuzzy the concepts are to the model), and (3) detect out of distribution data using those uncertainty measures as well as classic methods do.
arXiv Detail & Related papers (2023-05-29T17:00:16Z)
- Enhancing Business Process Simulation Models with Extraneous Activity Delays [0.6073572808831218]
This article proposes a method that discovers extraneous delays from event logs of business process executions.
The proposed approach computes, for each pair of causally consecutive activity instances in the event log, the time when the target activity instance should theoretically have started.
An empirical evaluation involving synthetic and real-life logs shows that the approach produces BPS models that better reflect the temporal dynamics of the process.
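The core computation described above can be sketched as follows. This is a simplification under assumed inputs (the resource-availability rule and all names are illustrative, not the paper's exact method): an activity instance could theoretically start once its causal predecessor has finished and its assigned resource is free, and any observed start time beyond that point counts as extraneous delay.

```python
def extraneous_delay(source_end, resource_free, target_start):
    """Extraneous delay of a target activity instance: its observed start time
    minus the time it could theoretically have started (when the source
    activity finished and the assigned resource became available)."""
    theoretical_start = max(source_end, resource_free)
    return max(0, target_start - theoretical_start)

# Source activity ends at t=10, the resource is free at t=12, but the target
# activity only starts at t=20 -> 8 time units of extraneous delay.
print(extraneous_delay(10, 12, 20))  # 8
```

Summing these delays per activity pair across the log yields the delay annotations injected into the enhanced BPS model.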
arXiv Detail & Related papers (2022-06-28T14:51:10Z)
- XAI in the context of Predictive Process Monitoring: Too much to Reveal [3.10770247120758]
Predictive Process Monitoring (PPM) has been integrated into process mining tools as a value-adding task.
XAI methods are employed to compensate for the lack of transparency of most efficient predictive models.
However, a comparison is still missing that distinguishes which XAI characteristics or underlying conditions are decisive for the quality of an explanation.
arXiv Detail & Related papers (2022-02-16T15:31:59Z)
- Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes [65.91730154730905]
In applications of offline reinforcement learning to observational data, such as in healthcare or education, a general concern is that observed actions might be affected by unobserved factors.
Here we tackle this by considering off-policy evaluation in a partially observed Markov decision process (POMDP).
We extend the framework of proximal causal inference to our POMDP setting, providing a variety of settings where identification is made possible.
arXiv Detail & Related papers (2021-10-28T17:46:14Z)
- How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models [95.8037674226622]
We introduce a 3-dimensional evaluation metric that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion.
Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity.
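One common instantiation of precision-recall analysis for generative models (an illustrative assumption here, not necessarily the paper's exact metric) judges a generated sample as "precise" if it falls inside the k-nearest-neighbour ball of some real sample. A minimal 1-D sketch:

```python
def knn_radius(points, k):
    """For each point, the distance to its k-th nearest neighbour in `points`."""
    radii = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        radii.append(dists[k - 1])
    return radii

def precision(real, generated, k=1):
    """Fraction of generated samples covered by the k-NN balls of the real data."""
    radii = knn_radius(real, k)
    covered = sum(
        any(abs(g - r) <= rad for r, rad in zip(real, radii)) for g in generated
    )
    return covered / len(generated)

real = [0.0, 1.0, 2.0, 3.0]
fake = [0.5, 1.5, 10.0]       # 10.0 lies far outside the real support
print(precision(real, fake))  # ≈ 0.667 (2 of 3 generated samples covered)
```

Swapping the roles of the real and generated sets yields a recall-style score, giving the sample-level fidelity/diversity diagnosis the abstract refers to.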
arXiv Detail & Related papers (2021-02-17T18:25:30Z)
- Probabilistic Case-based Reasoning for Open-World Knowledge Graph Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving 'cases' that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.