Process Modeling With Large Language Models
- URL: http://arxiv.org/abs/2403.07541v2
- Date: Mon, 8 Apr 2024 13:20:38 GMT
- Title: Process Modeling With Large Language Models
- Authors: Humam Kourani, Alessandro Berti, Daniel Schuster, Wil M. P. van der Aalst
- Abstract summary: This paper explores the integration of Large Language Models (LLMs) into process modeling.
We propose a framework that leverages LLMs for the automated generation and iterative refinement of process models.
Preliminary results demonstrate the framework's ability to streamline process modeling tasks.
- Score: 42.0652924091318
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the realm of Business Process Management (BPM), process modeling plays a crucial role in translating complex process dynamics into comprehensible visual representations, facilitating the understanding, analysis, improvement, and automation of organizational processes. Traditional process modeling methods often require extensive expertise and can be time-consuming. This paper explores the integration of Large Language Models (LLMs) into process modeling to enhance its accessibility, offering a more intuitive entry point for non-experts while augmenting the efficiency of experts. We propose a framework that leverages LLMs for the automated generation and iterative refinement of process models starting from textual descriptions. Our framework involves innovative prompting strategies for effective LLM utilization, along with a secure model generation protocol and an error-handling mechanism. Moreover, we instantiate a concrete system extending our framework. This system provides robust quality guarantees on the models generated and supports exporting them in standard modeling notations, such as the Business Process Model and Notation (BPMN) and Petri nets. Preliminary results demonstrate the framework's ability to streamline process modeling tasks, underscoring the transformative potential of generative AI in the BPM field.
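To make the described workflow concrete, the following Python is a minimal sketch of a generate-validate-refine loop of the kind the abstract outlines, assuming a generic LLM completion callable. The prompt wording, the validator, and the toy LLM are hypothetical stand-ins for illustration only, not the authors' actual implementation or API.

```python
# Minimal sketch of a generate-validate-refine loop as described in the abstract.
# The prompt wording, the validator, and the toy LLM below are hypothetical
# stand-ins; they are not the authors' actual implementation or API.

from typing import Callable, Optional


def validate_model_code(code: str) -> None:
    """Stand-in quality check; a real system would verify syntactic and
    semantic soundness of the generated model before accepting it."""
    if "start" not in code or "end" not in code:
        raise ValueError("model has no clear start or end activity")


def generate_process_model(description: str,
                           llm_complete: Callable[[str], str],
                           max_refinements: int = 3) -> Optional[str]:
    """Prompt the LLM for a process model, validate the result, and feed any
    error message back into the next prompt (the error-handling step)."""
    prompt = f"Describe the following process as model-construction code:\n{description}"
    for _ in range(max_refinements):
        candidate = llm_complete(prompt)
        try:
            validate_model_code(candidate)
        except ValueError as err:
            # Iterative refinement: report the rejection reason and retry.
            prompt += f"\nThe previous attempt was rejected ({err}); please correct it."
            continue
        return candidate  # a real system would export this to BPMN or a Petri net
    return None


if __name__ == "__main__":
    # Toy stand-in for an LLM call so the sketch runs end to end.
    def echo_llm(_prompt: str) -> str:
        return "start -> check order -> ship order -> end"

    print(generate_process_model("An order is checked and then shipped.", echo_llm))
```

Feeding the validation error back into the next prompt mirrors the error-handling mechanism mentioned in the abstract; in a complete system, the accepted model would then be exported to a standard notation such as BPMN or a Petri net.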
Related papers
- Towards Synthetic Trace Generation of Modeling Operations using In-Context Learning Approach [1.8874331450711404]
We propose a conceptual framework that combines modeling event logs, intelligent modeling assistants, and the generation of modeling operations.
In particular, the architecture comprises modeling components that help the designer specify the system, record its operation within a graphical modeling environment, and automatically recommend relevant operations.
arXiv Detail & Related papers (2024-08-26T13:26:44Z)
- Leveraging Large Language Models for Enhanced Process Model Comprehension [33.803742664323856]
In Business Process Management (BPM), effectively comprehending process models is crucial yet poses significant challenges.
This paper introduces a novel framework utilizing the advanced capabilities of Large Language Models (LLMs) to enhance the interpretability of complex process models.
arXiv Detail & Related papers (2024-08-08T13:12:46Z)
- MAO: A Framework for Process Model Generation with Multi-Agent Orchestration [12.729855942941724]
This article explores a framework for automatically generating process models with multi-agent orchestration (MAO).
Large language models are prone to hallucinations, so the agents need to review and repair semantic hallucinations in process models.
Experiments demonstrate that the process models generated by our framework surpass manual modeling by 89%, 61%, 52%, and 75% on four different datasets.
arXiv Detail & Related papers (2024-08-04T03:32:17Z)
- A process algebraic framework for multi-agent dynamic epistemic systems [55.2480439325792]
We propose a unifying framework for modeling and analyzing multi-agent, knowledge-based, dynamic systems.
On the modeling side, we propose a process algebraic, agent-oriented specification language that makes such a framework easy to use for practical purposes.
arXiv Detail & Related papers (2024-07-24T08:35:50Z)
- ProMoAI: Process Modeling with Generative AI [42.0652924091318]
ProMoAI is a novel tool that leverages Large Language Models (LLMs) to automatically generate process models from textual descriptions.
The tool also incorporates advanced prompt engineering, error handling, and code generation techniques.
arXiv Detail & Related papers (2024-03-07T08:48:04Z)
- Model Composition for Multimodal Large Language Models [71.5729418523411]
We propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
arXiv Detail & Related papers (2024-02-20T06:38:10Z)
- Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute and performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Generating Hidden Markov Models from Process Models Through Nonnegative Tensor Factorization [0.0]
We introduce a novel mathematically sound method that integrates theoretical process models with interrelated minimal Hidden Markov Models.
Our method consolidates: (a) theoretical process models, (b) HMMs, (c) coupled nonnegative matrix-tensor factorizations, and (d) custom model selection.
arXiv Detail & Related papers (2022-10-03T16:19:27Z)
- Extending Process Discovery with Model Complexity Optimization and Cyclic States Identification: Application to Healthcare Processes [62.997667081978825]
The paper presents an approach to process mining that provides semi-automatic support for model optimization.
A model simplification approach is proposed, which essentially abstracts the raw model at the desired granularity.
We aim to demonstrate the capabilities of the technological solution using three datasets from different applications in the healthcare domain.
arXiv Detail & Related papers (2022-06-10T16:20:59Z)