Synchronizing Process Model and Event Abstraction for Grounded Process Intelligence (Extended Version)
- URL: http://arxiv.org/abs/2505.23536v1
- Date: Thu, 29 May 2025 15:15:23 GMT
- Title: Synchronizing Process Model and Event Abstraction for Grounded Process Intelligence (Extended Version)
- Authors: Janik-Vasily Benzin, Gyunam Park, Stefanie Rinderle-Ma
- Abstract summary: Model abstraction (MA) and event abstraction (EA) are means to reduce the complexity of (discovered) models and event data. We provide the formal basis for synchronized model and event abstraction. We prove the feasibility of our approach based on behavioral profile abstraction as a non-order-preserving MA technique.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model abstraction (MA) and event abstraction (EA) are means to reduce the complexity of (discovered) models and event data. Imagine a process intelligence project that aims to analyze a model discovered from event data, which is then abstracted further, possibly multiple times, to reach optimality goals, e.g., reducing model size. So far, there has been no technique that, after discovering the model, enables the synchronized abstraction of the underlying event log. This results in losing the grounding in the real-world behavior contained in the log and, in turn, restricts analysis insights. Hence, in this work, we provide the formal basis for synchronized model and event abstraction, i.e., we prove that abstracting a process model by MA and discovering a process model from an abstracted event log yield an equivalent process model. We prove the feasibility of our approach based on behavioral profile abstraction as a non-order-preserving MA technique, resulting in a novel EA technique.
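To make the synchronization idea concrete, below is a minimal Python sketch using pm4py, a widely used process mining library. It is an illustration only: the abstraction shown is a simple activity-grouping relabeling, not the paper's behavioral profile abstraction, and the file name, activity group, and label-set comparison are assumptions rather than the authors' code.

```python
# Sketch of the commutativity the paper proves: abstracting a discovered model
# (MA) and discovering from an abstracted log (EA) should yield equivalent
# models. Illustrative only; assumes pm4py >= 2.7 (read_xes returns a DataFrame)
# and uses toy activity grouping instead of behavioral profile abstraction.
import pm4py

GROUP = {"examine casually", "examine thoroughly"}  # assumed low-level activities
HIGH_LEVEL = "examine"                              # assumed high-level activity

def abstract_log(df):
    # EA: relabel every grouped low-level activity to the high-level label.
    df = df.copy()
    df.loc[df["concept:name"].isin(GROUP), "concept:name"] = HIGH_LEVEL
    return df

def abstract_model(net):
    # MA: relabel the corresponding visible transitions in the Petri net.
    for t in net.transitions:
        if t.label in GROUP:
            t.label = HIGH_LEVEL
    return net

log = pm4py.read_xes("running-example.xes")  # any XES event log
net, im, fm = pm4py.discover_petri_net_inductive(log)

ma_net = abstract_model(net)                                          # model-side path
ea_net, _, _ = pm4py.discover_petri_net_inductive(abstract_log(log))  # log-side path

# A full equivalence check would compare behavioral profiles; comparing the
# visible label sets is a weak stand-in used only to keep the sketch short.
labels = lambda n: {t.label for t in n.transitions if t.label}
print(labels(ma_net) == labels(ea_net))
```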
Related papers
- Conditional Latent Diffusion Models for Zero-Shot Instance Segmentation [16.225638630932675]
OC-DiT is a class of diffusion models designed for object-centric prediction. We propose a conditional latent diffusion framework that generates instance masks. We train these models on a newly created, large-scale synthetic dataset.
arXiv Detail & Related papers (2025-08-06T06:38:46Z) - Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce UNCURL, an adaptive task-aware pruning technique that reduces the number of experts per MoE layer in an offline manner post-training, as sketched below.
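The summary does not spell out UNCURL's pruning criterion, so the following is only a generic sketch of offline, task-aware expert pruning: score each expert by how often the router selects it on task data, then keep the top-k experts per MoE layer. All names and shapes are assumptions.

```python
# Generic offline expert pruning by routing frequency (illustrative only,
# not UNCURL's actual criterion).
import numpy as np

def experts_to_keep(router_logits, k):
    """router_logits: (num_tokens, num_experts) gate scores collected on
    task-specific data. Returns indices of the k most-used experts."""
    top1 = router_logits.argmax(axis=1)                 # expert chosen per token
    counts = np.bincount(top1, minlength=router_logits.shape[1])
    return np.argsort(counts)[::-1][:k]

rng = np.random.default_rng(0)
logits = rng.normal(size=(10_000, 8))                   # toy layer with 8 experts
print(experts_to_keep(logits, k=4))                     # experts to retain offline
```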
arXiv Detail & Related papers (2024-09-02T22:35:03Z) - INEXA: Interactive and Explainable Process Model Abstraction Through Object-Centric Process Mining [0.0]
We propose INEXA, an interactive, explainable process model abstraction method that keeps the link to the event log.
As a starting point, INEXA aggregates large process models to a "displayable" size, e.g., reducing the process model of a manufacturing use case to 58 model elements.
arXiv Detail & Related papers (2024-03-27T15:03:33Z) - When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee in model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - Process Discovery Using Graph Neural Networks [2.6381163133447836]
We introduce a technique for training an ML-based model D using graph neural networks.
D translates a given input event log into a sound Petri net.
We show that training D on synthetically generated pairs of input logs and output models allows D to translate previously unseen synthetic and several real-life event logs into sound Petri nets.
arXiv Detail & Related papers (2021-09-13T10:04:34Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Learning Accurate Business Process Simulation Models from Event Logs via Automated Process Discovery and Deep Learning [0.8164433158925593]
Data-Driven Simulation (DDS) methods learn process simulation models from event logs.
Deep Learning (DL) models are able to accurately capture the temporal dynamics of a process.
This paper presents a hybrid approach to learn process simulation models from event logs.
arXiv Detail & Related papers (2021-03-22T15:34:57Z) - Model-Invariant State Abstractions for Model-Based Reinforcement Learning [54.616645151708994]
We introduce a new type of state abstraction called model-invariance.
This allows for generalization to novel combinations of unseen values of state variables.
We prove that an optimal policy can be learned over this model-invariance state abstraction.
arXiv Detail & Related papers (2021-02-19T10:37:54Z) - Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
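For orientation, here is a plain two-component finite mixture of linear regressions fitted with EM; this is the textbook baseline only, not the paper's robust, multi-target, high-dimensional variant, and the data and names below are synthetic assumptions.

```python
# Textbook EM for a two-component mixture of linear regressions (baseline
# illustration; the paper's FMR additionally handles incomplete mixed-type
# targets and provides non-asymptotic oracle bounds).
import numpy as np

def em_fmr(X, y, K=2, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    beta = rng.normal(size=(K, d))        # per-component regression coefficients
    sigma = np.ones(K)                    # per-component noise std
    pi = np.full(K, 1.0 / K)              # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resid = y[:, None] - X @ beta.T                    # (n, K)
        dens = np.exp(-0.5 * (resid / sigma) ** 2) / sigma
        r = pi * dens + 1e-12                              # underflow guard
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component.
        for k in range(K):
            Xw = X * r[:, k:k + 1]
            beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            sigma[k] = np.sqrt((r[:, k] * (y - X @ beta[k]) ** 2).sum() / r[:, k].sum())
        pi = r.mean(axis=0)
    return beta, sigma, pi

# Toy data: two regression regimes mixed together.
rng = np.random.default_rng(1)
X = np.c_[np.ones(400), rng.normal(size=400)]
z = rng.integers(0, 2, size=400)
y = np.where(z == 0, X @ [1.0, 2.0], X @ [-1.0, -2.0]) + 0.3 * rng.normal(size=400)
print(em_fmr(X, y)[0])   # recovered coefficients, approximately +/-(1, 2)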
arXiv Detail & Related papers (2020-10-12T03:27:07Z) - Data from Model: Extracting Data from Non-robust and Robust Models [83.60161052867534]
This work explores the reverse process of generating data from a model, attempting to reveal the relationship between the data and the model.
We repeat the process of Data to Model (DtM) and Data from Model (DfM) in sequence and explore the loss of feature mapping information.
Our results show that the accuracy drop is limited even after multiple sequences of DtM and DfM, especially for robust models.
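To illustrate the DtM/DfM cycle, here is a toy round in Python: DtM fits a classifier, DfM then synthesizes inputs from the trained model by input optimization (a simple model-inversion stand-in), and the extracted data feeds the next DtM round. This is a hedged sketch, not the authors' exact procedure; every network, size, and dataset below is an assumption.

```python
# One toy DtM -> DfM -> DtM round (illustrative only).
import torch
import torch.nn as nn

def dtm(x, y, steps=200):
    """Data to Model: fit a small classifier on (x, y)."""
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

def dfm(model, n=512, steps=200):
    """Data from Model: optimize random inputs to maximize each target logit."""
    x = torch.randn(n, 20, requires_grad=True)
    y = torch.randint(0, 2, (n,))
    opt = torch.optim.Adam([x], lr=1e-1)
    for _ in range(steps):
        loss = -model(x)[torch.arange(n), y].mean()   # push target logits up
        opt.zero_grad(); loss.backward(); opt.step()
    return x.detach(), y

x0 = torch.randn(512, 20); y0 = (x0[:, 0] > 0).long()  # toy "real" data
model_a = dtm(x0, y0)        # DtM
x1, y1 = dfm(model_a)        # DfM: data extracted from model_a
model_b = dtm(x1, y1)        # next DtM round on the extracted data
```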
arXiv Detail & Related papers (2020-07-13T05:27:48Z) - Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
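The amortized idea can be sketched as follows: train a network on simulated datasets, labeled by the model that generated them, to output Dirichlet evidence over the candidate models, so that comparing models on a new dataset needs no per-dataset refitting. This is a simplified illustration under assumed shapes and a standard evidential loss form, not the authors' architecture.

```python
# Evidential, simulation-based model comparison sketch (illustrative only).
import torch
import torch.nn as nn

M = 3                                                  # candidate models
net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, M))

def dirichlet_loss(logits, m_true):
    alpha = torch.exp(logits) + 1.0                    # Dirichlet concentrations
    S = alpha.sum(dim=1, keepdim=True)
    # Expected cross-entropy under the Dirichlet (a standard evidential loss).
    return (torch.digamma(S) - torch.digamma(alpha))[
        torch.arange(len(m_true)), m_true].mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):                                   # simulation-based training
    m = torch.randint(0, M, (256,))                    # draw model indices
    x = torch.randn(256, 64) + m[:, None].float()      # stand-in for simulated summaries
    loss = dirichlet_loss(net(x), m)
    opt.zero_grad(); loss.backward(); opt.step()

# For a new dataset summary x_new, posterior model probabilities are
# alpha / alpha.sum() with alpha = exp(net(x_new)) + 1.
```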
arXiv Detail & Related papers (2020-04-22T15:15:46Z)