Comparing Ordering Strategies For Process Discovery Using Synthesis Rules
- URL: http://arxiv.org/abs/2301.02182v1
- Date: Wed, 4 Jan 2023 16:17:52 GMT
- Title: Comparing Ordering Strategies For Process Discovery Using Synthesis Rules
- Authors: Tsung-Hao Huang and Wil M. P. van der Aalst
- Abstract summary: Process discovery aims to learn process models from observed behaviors.
In this paper, we investigate the effect of different ordering strategies on the discovered models.
- Score: 0.5330240017302619
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Process discovery aims to learn process models from observed behaviors, i.e.,
event logs, in information systems. The discovered models serve as the
starting point for process mining techniques that are used to address
performance and compliance problems. Compared to the state-of-the-art Inductive
Miner, the algorithm that applies synthesis rules from free-choice net theory
discovers process models with more flexible (non-block) structures while
ensuring the same desirable soundness and free-choiceness properties. Moreover,
recent developments in this line of work show that the discovered models have
comparable quality. Following the synthesis rules, the algorithm incrementally
modifies an existing process model by adding the activities in the event log
one at a time. As the application of the rules depends heavily on the existing
model structure, the model quality and the computation time are significantly
influenced by the order in which activities are added. In this paper, we
investigate the effect of different ordering strategies on the discovered
models (w.r.t. fitness and precision) and the computation time using real-life
event data. The results show that the proposed ordering strategy can improve
the quality of the resulting process models while requiring less time than the
ordering strategy based solely on the frequency of activities.
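
To make the mechanism concrete, the following minimal Python sketch illustrates the two ingredients described in the abstract: an ordering strategy over activities (the frequency-based baseline) and the incremental loop that adds one activity at a time. This is an illustration only, not the authors' implementation; `Model`, `initial_model`, and `extend_with_activity` are hypothetical stand-ins for the free-choice workflow net and the synthesis-rule application from the paper.

```python
# Minimal sketch, not the authors' implementation. A trace is a list of
# activity labels; Model, initial_model, and extend_with_activity are
# hypothetical stand-ins for the free-choice net and the synthesis-rule steps.
from collections import Counter

Trace = list[str]   # one case, as a sequence of activity labels
Model = set[str]    # hypothetical placeholder for a free-choice workflow net


def frequency_ordering(log: list[Trace]) -> list[str]:
    """Baseline strategy: order activities by descending frequency in the log."""
    counts = Counter(activity for trace in log for activity in trace)
    return [activity for activity, _ in counts.most_common()]


def initial_model() -> Model:
    """Hypothetical initial model from which incremental discovery starts."""
    return set()


def extend_with_activity(model: Model, activity: str, log: list[Trace]) -> Model:
    """Hypothetical placeholder for applying synthesis rules to add one activity."""
    return model | {activity}


def discover(log: list[Trace], ordering: list[str]) -> Model:
    """Incremental discovery skeleton: add activities one at a time in the given order."""
    model = initial_model()
    for activity in ordering:
        model = extend_with_activity(model, activity, log)
    return model


if __name__ == "__main__":
    log = [["register", "check", "pay"],
           ["register", "pay"],
           ["register", "check", "pay"]]
    order = frequency_ordering(log)
    print(order)             # ['register', 'pay', 'check'] (ties kept in first-seen order)
    print(discover(log, order))
```

Because each rule application is constrained by the structure built so far, swapping `frequency_ordering` for a different strategy changes both the intermediate models and the total computation time, which is exactly the effect the paper measures in terms of fitness, precision, and runtime.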
Related papers
- Pattern based learning and optimisation through pricing for bin packing problem [50.83768979636913]
We argue that when problem conditions such as the distributions of random variables change, the patterns that performed well in previous circumstances may become less effective.
We propose a novel scheme to efficiently identify patterns and dynamically quantify their values for each specific condition.
Our method quantifies the value of patterns based on their ability to satisfy constraints and their effects on the objective value.
arXiv Detail & Related papers (2024-08-27T17:03:48Z) - Model-Free Active Exploration in Reinforcement Learning [53.786439742572995]
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution.
Our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
arXiv Detail & Related papers (2024-06-30T19:00:49Z) - Mining a Minimal Set of Behavioral Patterns using Incremental Evaluation [3.16536213610547]
Existing approaches to behavioral pattern mining suffer from two limitations.
First, they show limited scalability as incremental computation is incorporated only in the generation of pattern candidates.
Second, process analysis based on mined patterns shows limited effectiveness due to an overwhelmingly large number of patterns obtained in practical application scenarios.
arXiv Detail & Related papers (2024-02-05T11:41:37Z) - The WHY in Business Processes: Discovery of Causal Execution Dependencies [2.0811729303868005]
Unraveling causal relationships among the execution of process activities is a crucial element in predicting the consequences of process interventions.
This work offers a systematic approach to the unveiling of the causal business process by leveraging an existing causal discovery algorithm over activity timing.
Our methodology searches for such discrepancies between the two models in the context of three causal patterns, and derives a new view in which these inconsistencies are annotated over the mined process model.
arXiv Detail & Related papers (2023-10-23T14:23:15Z) - When to Update Your Model: Constrained Model-based Reinforcement
Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL)
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - Which Model To Trust: Assessing the Influence of Models on the
Performance of Reinforcement Learning Algorithms for Continuous Control Tasks [0.0]
It is not clear how much of the recent progress is due to improved algorithms or due to improved models.
A set of commonly adopted models is established for the purpose of model comparison.
Results reveal that significant differences in model performance do exist.
arXiv Detail & Related papers (2021-10-25T16:17:26Z) - Active Learning of Markov Decision Processes using Baum-Welch algorithm
(Extended) [0.0]
This paper revisits and adapts the classic Baum-Welch algorithm for learning Markov decision processes and Markov chains.
We empirically compare our approach with state-of-the-art tools and demonstrate that the proposed active learning procedure can significantly reduce the number of observations required to obtain accurate models.
arXiv Detail & Related papers (2021-10-06T18:54:19Z) - Process Discovery for Structured Program Synthesis [70.29027202357385]
A core task in process mining is process discovery which aims to learn an accurate process model from event log data.
In this paper, we propose to use (block-) structured programs directly as target process models.
We develop a novel bottom-up agglomerative approach to the discovery of such structured program process models.
arXiv Detail & Related papers (2020-08-13T10:33:10Z) - Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z) - Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)