Simulating Petri nets with Boolean Matrix Logic Programming
- URL: http://arxiv.org/abs/2405.11412v1
- Date: Sat, 18 May 2024 23:17:00 GMT
- Title: Simulating Petri nets with Boolean Matrix Logic Programming
- Authors: Lun Ai, Stephen H. Muggleton, Shi-Shun Liang, Geoff S. Baldwin
- Abstract summary: We introduce Boolean Matrix Logic Programming (BMLP), a novel approach that addresses the limitations of high-level symbol manipulation in logic programs.
Within this framework, we propose two novel BMLP algorithms for a class of Petri nets known as elementary nets.
We demonstrate empirically that BMLP algorithms can evaluate these programs 40 times faster than tabled B-Prolog, SWI-Prolog, XSB-Prolog and Clingo.
- Score: 4.762323642506732
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent attention to relational knowledge bases has sparked a demand for understanding how relations change between entities. Petri nets can represent knowledge structure and dynamically simulate interactions between entities, and thus they are well suited for achieving this goal. However, logic programs struggle to deal with extensive Petri nets due to the limitations of high-level symbol manipulations. To address this challenge, we introduce a novel approach called Boolean Matrix Logic Programming (BMLP), utilising boolean matrices as an alternative computation mechanism for Prolog to evaluate logic programs. Within this framework, we propose two novel BMLP algorithms for simulating a class of Petri nets known as elementary nets. This is done by transforming elementary nets into logically equivalent datalog programs. We demonstrate empirically that BMLP algorithms can evaluate these programs 40 times faster than tabled B-Prolog, SWI-Prolog, XSB-Prolog and Clingo. Our work enables the efficient simulation of elementary nets using Prolog, expanding the scope of analysis, learning and verification of complex systems with logic programming techniques.
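The abstract's core idea lends itself to a short illustration. Below is a minimal sketch (in Python with NumPy, not the authors' Prolog implementation) of simulating a small hypothetical elementary net by boolean matrix computation: markings become bit vectors, the one-step firing relation becomes a boolean matrix, and reachability is its transitive closure computed by repeated squaring. The net, place count, and marking encoding are all illustrative assumptions.

```python
import numpy as np
from itertools import product

# Hypothetical elementary net: 3 places; each transition is a pair
# (input places, output places), moving one token along each arc.
PLACES = 3
TRANSITIONS = [({0}, {1}), ({1}, {2})]   # t1: p0 -> p1, t2: p1 -> p2

# Enumerate all markings as bit vectors (tractable only for tiny nets;
# this enumeration is exponential in the number of places).
markings = list(product([0, 1], repeat=PLACES))
index = {m: i for i, m in enumerate(markings)}
n = len(markings)

# One-step firing relation as a boolean matrix over markings:
# R[i, j] is True iff some transition leads from marking i to marking j.
R = np.zeros((n, n), dtype=bool)
for m in markings:
    for pre, post in TRANSITIONS:
        # Elementary-net enabling: all inputs marked, all outputs empty.
        if all(m[p] for p in pre) and not any(m[p] for p in post):
            m2 = list(m)
            for p in pre:
                m2[p] = 0
            for p in post:
                m2[p] = 1
            R[index[m], index[tuple(m2)]] = True

# Reachability = transitive closure of R, obtained by repeated boolean
# matrix squaring rather than symbolic resolution.
closure = R.copy()
for _ in range(n.bit_length()):
    closure = closure | ((closure.astype(int) @ closure.astype(int)) > 0)

print(closure[index[(1, 0, 0)], index[(0, 0, 1)]])   # True: marking reachable
```

This sketch only conveys the algebraic idea; the paper itself compiles elementary nets into logically equivalent datalog programs and evaluates those with boolean matrices.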
Related papers
- Boolean Matrix Logic Programming [5.847084649531298]
We describe a datalog query evaluation approach based on efficient and composable matrix modules.
We develop two novel BMLP modules for bottom-up evaluation of linear dyadic recursive datalog programs.
Our empirical results demonstrate that these modules outperform general-purpose and specialised systems by factors of 30x and 9x, respectively.
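As a concrete illustration of such a module, the sketch below (Python/NumPy; the function name and API are our assumptions, not the paper's) evaluates the classic linear dyadic recursive program path/2 bottom-up by iterating a boolean matrix product to a fixpoint.

```python
# Illustrative sketch: bottom-up evaluation of the linear dyadic program
#   path(X,Y) :- edge(X,Y).
#   path(X,Y) :- edge(X,Z), path(Z,Y).
# where edge/2 is a boolean matrix E and each iteration applies one rule step.
import numpy as np

def bmlp_fixpoint(E: np.ndarray) -> np.ndarray:
    """Least fixpoint of path = E | E.path, as a boolean matrix."""
    P = E.copy()
    while True:
        P_next = E | ((E.astype(int) @ P.astype(int)) > 0)
        if (P_next == P).all():
            return P
        P = P_next

E = np.zeros((4, 4), dtype=bool)
for u, v in [(0, 1), (1, 2), (2, 3)]:
    E[u, v] = True

P = bmlp_fixpoint(E)
print(P[0, 3])  # True: path(0, 3) is derivable
```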
arXiv Detail & Related papers (2024-08-19T19:26:49Z)
- Learning to Estimate System Specifications in Linear Temporal Logic using Transformers and Mamba [6.991281327290525]
Specification mining involves extracting temporal logic formulae from system traces.
We introduce autoregressive models that can generate linear temporal logic formulae from traces.
We devise a metric for the distinctiveness of the generated formulae and an algorithm to enforce the syntax constraints.
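To make the mined artifacts concrete, here is a minimal finite-trace LTL evaluator (our illustration in Python, not the paper's Transformer/Mamba model): a mined formula is consistent with the data exactly when it holds on the given traces. The tuple encoding of formulae is an assumption.

```python
# Traces are lists of sets of atomic propositions; formulae are nested tuples.
def holds(phi, trace, i=0):
    if i >= len(trace):
        return False
    op = phi[0]
    if op == "ap":                         # atomic proposition
        return phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "X":                          # next
        return holds(phi[1], trace, i + 1)
    if op == "F":                          # eventually (finite-trace semantics)
        return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                          # always (finite-trace semantics)
        return all(holds(phi[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op}")

trace = [{"req"}, {"req"}, {"ack"}]
print(holds(("F", ("ap", "ack")), trace))   # True: ack eventually occurs
print(holds(("G", ("ap", "req")), trace))   # False: req does not always hold
```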
arXiv Detail & Related papers (2024-05-31T15:21:53Z)
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over KGs built upon large language models (LLMs).
We augment arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs, as sketched below.
Experiments across widely used datasets demonstrate that LACT yields substantial improvements (an average +5.5% MRR gain) over advanced methods.
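A small sketch of what binary tree decomposition of a first-order query could look like (our reading of the abstract; the Node structure and query encoding are hypothetical, not the LACT code):

```python
# Decompose a first-order logical query into a binary tree whose leaves are
# atomic sub-queries, so each node can be posed as a simpler reasoning step.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    op: Optional[str]           # "and" / "or" for internal nodes, None for leaves
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None
    atom: Optional[str] = None  # atomic query at a leaf

def decompose(query) -> Node:
    """Recursively split (op, lhs, rhs) tuples into a binary tree."""
    if isinstance(query, str):              # atomic query, e.g. "wife(e, V)"
        return Node(op=None, atom=query)
    op, lhs, rhs = query
    return Node(op=op, left=decompose(lhs), right=decompose(rhs))

# V such that (brother(e,V) or sister(e,V)) and doctor(V)
q = ("and", ("or", "brother(e, V)", "sister(e, V)"), "doctor(V)")
tree = decompose(q)
print(tree.op, tree.left.op, tree.right.atom)  # and or doctor(V)
```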
arXiv Detail & Related papers (2024-05-02T18:12:08Z)
- Can Language Models Pretend Solvers? Logic Code Simulation with LLMs [3.802945676202634]
Transformer-based large language models (LLMs) have demonstrated significant potential in addressing logic problems.
This study delves into a novel aspect, namely logic code simulation, which forces LLMs to emulate logical solvers in predicting the results of logical programs.
arXiv Detail & Related papers (2024-03-24T11:27:16Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
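The fuzzy relaxation step can be illustrated with a toy example (ours, not LOGICSEG's exact formulation): a hierarchy rule such as dog(x) -> animal(x) is grounded on predicted scores, and its falsity becomes a differentiable penalty added to the training loss.

```python
# Ground the rule dog(x) -> animal(x) with a product t-norm relaxation,
# so rule violation becomes a differentiable penalty on class scores.
import numpy as np

p_dog = np.array([0.9, 0.2, 0.7])       # hypothetical per-pixel scores
p_animal = np.array([0.95, 0.1, 0.3])

# Fuzzy truth of (dog -> animal) under product logic: 1 - p_dog * (1 - p_animal).
truth = 1.0 - p_dog * (1.0 - p_animal)

# Penalise falsity of the grounded formula; adding this term to the
# segmentation loss is what makes training "logic-induced".
logic_loss = np.mean(1.0 - truth)
print(logic_loss)
```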
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
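As a rough illustration of AST-based complexity scoring (our simplified proxy, not the CIRS formula defined in the paper):

```python
# Count control-flow and boolean constructs in a program's abstract syntax
# tree as a crude proxy for the logical complexity of reasoning code.
import ast

LOGIC_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.Compare)

def logical_complexity(code: str) -> int:
    """Number of branching / looping / boolean nodes in the AST."""
    tree = ast.parse(code)
    return sum(isinstance(n, LOGIC_NODES) for n in ast.walk(tree))

snippet = """
total = 0
for x in range(10):
    if x % 2 == 0 and x > 2:
        total += x
"""
print(logical_complexity(snippet))  # 5: For, If, BoolOp, and two Compares
```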
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- Evaluating Step-by-Step Reasoning through Symbolic Verification [20.156768135017007]
Pre-trained language models (LMs) have shown remarkable reasoning performance for in-context learning.
The proposed LMLP approach achieves more than 25% higher accuracy than chain-of-thought (CoT) prompting on length generalization benchmarks, even with smaller model sizes.
arXiv Detail & Related papers (2022-12-16T19:30:01Z)
- Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence).
We propose logic structural-constraint modeling to solve logical reasoning QA and introduce discourse-aware graph networks (DAGNs).
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by evolving the logic relations end-to-end with an edge-reasoning mechanism and updating the graph features; a simplified sketch follows.
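A heavily simplified sketch of the edge-reasoning idea (our illustration in Python/NumPy; the feature sizes, weights, and update rule are assumptions, not the DAGN architecture):

```python
# Build a logic graph whose edges come from in-line discourse connectives,
# then re-estimate each edge (relation) vector from its endpoint nodes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical propositional units as feature vectors.
nodes = {i: rng.standard_normal(8) for i in range(3)}
# Edges induced by connectives: "therefore" -> entailment, "but" -> contrast.
edges = {(0, 1): "therefore", (1, 2): "but"}

W = rng.standard_normal((8, 16))  # hypothetical edge-update weights

def edge_reasoning_step(nodes, edges):
    """One round: update each relation vector from the nodes it connects."""
    rel = {}
    for (u, v), conn in edges.items():
        h = np.concatenate([nodes[u], nodes[v]])
        rel[(u, v)] = np.tanh(W @ h)       # updated logic-relation embedding
    return rel

relations = edge_reasoning_step(nodes, edges)
print(relations[(0, 1)].shape)  # (8,)
```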
arXiv Detail & Related papers (2022-07-04T14:38:49Z)
- Logical blocks for fault-tolerant topological quantum computation [55.41644538483948]
We present a framework for universal fault-tolerant logic motivated by the need for platform-independent logical gate definitions.
We explore novel schemes for universal logic that improve resource overheads.
Motivated by the favorable logical error rates for boundaryless computation, we introduce a novel computational scheme.
arXiv Detail & Related papers (2021-12-22T19:00:03Z)
- Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)