Petri Nets with Parameterised Data: Modelling and Verification (Extended
Version)
- URL: http://arxiv.org/abs/2006.06630v1
- Date: Thu, 11 Jun 2020 17:26:08 GMT
- Title: Petri Nets with Parameterised Data: Modelling and Verification (Extended
Version)
- Authors: Silvio Ghilardi, Alessandro Gianola, Marco Montali, Andrey Rivkin
- Abstract summary: We introduce and study an extension of coloured Petri nets, called catalog-nets, providing two key features to capture this type of processes.
We show that fresh-value injection is a particularly complex feature to handle, and discuss strategies to tame it.
- Score: 67.99023219822564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: During the last decade, various approaches have been put forward to integrate
business processes with different types of data. Each of these approaches
reflects specific demands in the process-data integration spectrum. One
particularly important point is the capability of these approaches to flexibly
accommodate processes with multiple cases that need to co-evolve. In this work,
we introduce and study an extension of coloured Petri nets, called
catalog-nets, providing two key features to capture this type of processes. On
the one hand, net transitions are equipped with guards that simultaneously
inspect the content of tokens and query facts stored in a read-only, persistent
database. On the other hand, such transitions can inject data into tokens by
extracting relevant values from the database or by generating genuinely fresh
ones. We systematically encode catalog-nets into one of the reference
frameworks for the (parameterised) verification of data and processes. We show
that fresh-value injection is a particularly complex feature to handle, and
discuss strategies to tame it. Finally, we discuss how catalog-nets relate to
well-known formalisms in this area.
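The two features the abstract highlights can be illustrated with a minimal Python sketch. All names and the encoding below are hypothetical, not the paper's formalisation: a transition's guard jointly inspects a token's content and queries a read-only catalog, and firing either copies a value from the catalog or injects a genuinely fresh one.

```python
import itertools

# Read-only, persistent database ("catalog"): a relation of facts.
# Hypothetical toy data, not from the paper.
CATALOG = {
    ("order", "o1", "gold"),
    ("order", "o2", "silver"),
}

_counter = itertools.count()

def fresh_value():
    """Fresh-value injection: return a value never used before."""
    return f"v{next(_counter)}"

def guard(token, catalog):
    """Guard: enabled only if the token's tier matches some catalog fact."""
    return any(rel == "order" and tier == token["tier"]
               for rel, _, tier in catalog)

def fire(token, catalog):
    """Fire the transition: copy a matching order id from the catalog
    into the output token, together with a fresh case identifier."""
    if not guard(token, catalog):
        return None  # transition not enabled for this token
    order_id = next(oid for rel, oid, tier in catalog
                    if rel == "order" and tier == token["tier"])
    return {"case": fresh_value(), "order": order_id, "tier": token["tier"]}

out = fire({"tier": "gold"}, CATALOG)
```

Here the guard plays the role of a simultaneous token/database query, while `fresh_value` stands in for the fresh-value injection that the abstract identifies as the hardest feature to verify.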
Related papers
- Generative Retrieval Meets Multi-Graded Relevance [104.75244721442756]
We introduce a framework called GRaded Generative Retrieval (GR$2$).
GR$2$ focuses on two key components: ensuring relevant and distinct identifiers, and implementing multi-graded constrained contrastive training.
Experiments on datasets with both multi-graded and binary relevance demonstrate the effectiveness of GR$2$.
arXiv Detail & Related papers (2024-09-27T02:55:53Z)
- ToolACE: Winning the Points of LLM Function Calling [139.07157814653638]
ToolACE is an automatic agentic pipeline designed to generate accurate, complex, and diverse tool-learning data.
We demonstrate that models trained on our synthesized data, even with only 8B parameters, achieve state-of-the-art performance on the Berkeley Function-Calling Leaderboard.
arXiv Detail & Related papers (2024-09-02T03:19:56Z)
- Flow with FlorDB: Incremental Context Maintenance for the Machine Learning Lifecycle [9.424552130799661]
We present techniques to harvest and query arbitrary metadata from machine learning pipelines.
We show how hindsight logging allows such statements to be added and executed post-hoc.
This is done in a "metadata later style" off the critical path of agile development.
arXiv Detail & Related papers (2024-08-05T14:21:00Z)
- An Integrated Data Processing Framework for Pretraining Foundation Models [57.47845148721817]
Researchers and practitioners often have to manually curate datasets from different sources.
We propose a data processing framework that integrates a Processing Module and an Analyzing Module.
The proposed framework is easy to use and highly flexible.
arXiv Detail & Related papers (2024-02-26T07:22:51Z)
- Selecting Walk Schemes for Database Embedding [6.7609045625714925]
We study the embedding of components of a relational database.
We focus on the recent FoRWaRD algorithm that is designed for dynamic databases.
We show that by focusing on a few informative walk schemes, we can obtain embeddings significantly faster, while retaining their quality.
arXiv Detail & Related papers (2024-01-20T11:39:32Z)
- SoK: Privacy-Preserving Data Synthesis [72.92263073534899]
This paper focuses on privacy-preserving data synthesis (PPDS) by providing a comprehensive overview, analysis, and discussion of the field.
We put forth a master recipe that unifies two prominent strands of research in PPDS: statistical methods and deep learning (DL)-based methods.
arXiv Detail & Related papers (2023-07-05T08:29:31Z)
- Enjoy the Silence: Analysis of Stochastic Petri Nets with Silent Transitions [4.163635746713724]
Capturing behaviors in business and work processes is essential to quantitatively understand how nondeterminism is resolved when taking decisions within the process.
This is of special interest in process mining, where event data tracking the actual execution of the process are related to process models.
Variants of Petri nets provide a natural formal basis for this, but they need to be labelled with (possibly duplicated) activities and equipped with silent transitions.
We show that all such analysis tasks can be solved analytically, in particular reducing them to a single method that combines automata-based techniques to single out the behaviors of interest within a LSP
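The labelled stochastic Petri nets with silent transitions described in this entry can be sketched minimally in Python. The net and encoding below are hypothetical (assuming a 1-safe net with set-valued markings), not the paper's analysis method: transition weights resolve nondeterminism probabilistically, and a label of `None` marks a silent (tau) transition that fires without emitting an activity.

```python
import random

# transition name -> (input places, output places, weight, label)
# Toy net, not from the paper. Label None = silent transition.
NET = {
    "t1":  ({"p0"}, {"p1"}, 2.0, "a"),
    "t2":  ({"p0"}, {"p2"}, 1.0, "b"),
    "tau": ({"p1"}, {"p2"}, 1.0, None),  # silent: no observable label
}

def enabled(marking):
    """Transitions whose input places are all marked."""
    return [t for t, (ins, _, _, _) in NET.items() if ins <= marking]

def run(marking, rng):
    """Fire weight-sampled transitions until none is enabled;
    return the observable trace (silent firings leave no label)."""
    trace = []
    while (ts := enabled(marking)):
        t = rng.choices(ts, weights=[NET[x][2] for x in ts])[0]
        ins, outs, _, label = NET[t]
        marking = (marking - ins) | outs
        if label is not None:
            trace.append(label)
    return trace
```

With this net, every run from marking `{"p0"}` observably yields either `["a"]` (passing through the silent tau) or `["b"]`; computing the exact probabilities of such visible traces, rather than sampling them, is roughly the kind of analysis task the entry's automata-based method targets.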
arXiv Detail & Related papers (2023-06-10T07:57:24Z)
- Automatic Validation of Textual Attribute Values in E-commerce Catalog by Learning with Limited Labeled Data [61.789797281676606]
We propose a novel meta-learning latent variable approach, called MetaBridge.
It can learn transferable knowledge from a subset of categories with limited labeled data.
It can capture the uncertainty of never-seen categories with unlabeled data.
arXiv Detail & Related papers (2020-06-15T21:31:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.