Formal Translation from Reversing Petri Nets to Coloured Petri Nets
- URL: http://arxiv.org/abs/2311.00629v1
- Date: Wed, 1 Nov 2023 16:28:38 GMT
- Title: Formal Translation from Reversing Petri Nets to Coloured Petri Nets
- Authors: Kamila Barylska, Anna Gogolinska, Lukasz Mikulski, Anna Philippou,
Marcin Piatkowski, Kyriaki Psara
- Abstract summary: Reversing Petri nets (RPNs) are a recently proposed extension of Petri nets that implements the three main forms of reversibility, namely, backtracking, causal reversing, and out-of-causal-order reversing.
We have proposed a structural translation from a subclass of RPNs to the model of Coloured Petri Nets (CPNs), an extension of traditional Petri nets where tokens carry data values.
In this paper, we extend the translation to handle RPNs with token multiplicity under the individual-token interpretation, a model which allows multiple tokens of the same type to exist in a system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reversible computation is an emerging computing paradigm that allows any
sequence of operations to be executed in reverse order at any point during
computation. Its appeal lies in its potential for low-power computation and its
relevance to a wide array of applications such as chemical reactions, quantum
computation, robotics, and distributed systems. Reversing Petri nets (RPNs) are a
recently proposed extension of Petri nets that implements the three main forms
of reversibility, namely, backtracking, causal reversing, and
out-of-causal-order reversing. Their distinguishing feature is the use of named
tokens that can be combined to form bonds. Named tokens, along with a
history function, constitute the means of remembering past behaviour, thus
enabling reversal. In recent work, we have proposed a structural translation
from a subclass of RPNs to the model of Coloured Petri Nets (CPNs), an
extension of traditional Petri nets where tokens carry data values. In this
paper, we extend the translation to handle RPNs with token multiplicity under
the individual-token interpretation, a model which allows multiple tokens of
the same type to exist in a system. To support the three types of
reversibility, tokens are associated with their causal history and, while
tokens of the same type are equally eligible to fire a transition when going
forward, when going backwards they are able to reverse only the transitions
they have previously fired. The new translation, in addition to lifting the
restriction on token uniqueness, presents a refined approach for transforming
RPNs to CPNs through a unifying construction that allows instantiating each of the
three types of reversibility. The paper also reports on a tool that implements
this translation, paving the way for automated translations and analysis of
reversible systems using CPN Tools.
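To make the reversal semantics concrete, the following is a minimal, illustrative Python sketch of the underlying idea rather than the paper's RPN-to-CPN construction: named tokens are bonded by forward firings, an execution history records each firing, and a simple eligibility check distinguishes backtracking, causal, and out-of-causal-order reversal. The token names, the event record, and the token-sharing test used as a stand-in for causal dependence are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    step: int            # position in the execution history
    transition: str      # name of the fired transition
    tokens: frozenset    # named tokens involved in the firing
    bonds: frozenset     # bonds (pairs of token names) created by the firing

class History:
    """Execution history of forward firings plus the set of undone steps."""

    def __init__(self):
        self.events = []        # forward firings, in order
        self.undone = set()     # steps that have already been reversed

    def fire(self, transition, tokens, bonds=()):
        e = Event(len(self.events), transition, frozenset(tokens),
                  frozenset(frozenset(b) for b in bonds))
        self.events.append(e)
        return e

    def can_reverse(self, event, mode):
        if event.step in self.undone:
            return False
        if mode == "backtracking":
            # Only the most recent not-yet-undone firing may be reversed.
            active = [e for e in self.events if e.step not in self.undone]
            return bool(active) and event is active[-1]
        if mode == "causal":
            # Every later firing that shares a token (a coarse stand-in for
            # causal dependence) must already have been undone.
            return all(e.step in self.undone or not (e.tokens & event.tokens)
                       for e in self.events if e.step > event.step)
        if mode == "out-of-causal-order":
            # Any previously executed firing may be reversed.
            return True
        raise ValueError(f"unknown reversing mode: {mode}")

    def reverse(self, event, mode):
        if not self.can_reverse(event, mode):
            raise RuntimeError(f"{event.transition} is not reversible under {mode}")
        self.undone.add(event.step)

# Tiny run: t1 bonds a-b, then t2 bonds b-c, so t2 causally depends on t1.
h = History()
e1 = h.fire("t1", {"a", "b"}, bonds=[("a", "b")])
e2 = h.fire("t2", {"b", "c"}, bonds=[("b", "c")])
print(h.can_reverse(e1, "backtracking"))         # False: t2 fired later
print(h.can_reverse(e1, "causal"))               # False: t2 still depends on t1
print(h.can_reverse(e1, "out-of-causal-order"))  # True
h.reverse(e2, "causal")
print(h.can_reverse(e1, "causal"))               # True once t2 has been undone
```

In the translation described above, this bookkeeping is, roughly speaking, carried by the data values (colours) of CPN tokens, which is what makes analysis with CPN Tools possible.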
Related papers
- Retro-FPN: Retrospective Feature Pyramid Network for Point Cloud Semantic Segmentation [65.78483246139888]
We propose Retro-FPN to model the per-point feature prediction as an explicit and retrospective refining process.
Its key novelty is a retro-transformer for summarizing semantic contexts from the previous layer.
We show that Retro-FPN can significantly improve performance over state-of-the-art backbones.
arXiv Detail & Related papers (2023-08-18T05:28:25Z)
- Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence [37.63047048491312]
We propose a generative embedding inversion attack (GEIA) that aims to reconstruct input sequences based only on their sentence embeddings.
Given the black-box access to a language model, we treat sentence embeddings as initial tokens' representations and train or fine-tune a powerful decoder model to decode the whole sequences directly.
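As a rough illustration of that recipe (not the authors' GEIA implementation), the sketch below conditions a small decoder on a leaked sentence embedding and trains it to reconstruct the token sequence. The GRU decoder, vocabulary size, and random tensors are placeholder assumptions; the paper fine-tunes a powerful pretrained decoder, and the embedding is folded into the decoder's initial state here purely for brevity.

```python
import torch
import torch.nn as nn

class InversionDecoder(nn.Module):
    """Toy decoder that reconstructs token ids from a sentence embedding."""

    def __init__(self, vocab_size=1000, emb_dim=384, hidden=256):
        super().__init__()
        self.proj = nn.Linear(emb_dim, hidden)      # sentence embedding -> initial decoder state
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, sent_emb, token_ids):
        # sent_emb: (batch, emb_dim); token_ids: (batch, seq_len)
        h0 = torch.tanh(self.proj(sent_emb)).unsqueeze(0)   # (1, batch, hidden)
        y, _ = self.gru(self.tok_emb(token_ids), h0)        # (batch, seq_len, hidden)
        return self.out(y)                                   # logits over the vocabulary

# One teacher-forced training step on synthetic data standing in for the
# (embedding, sentence) pairs an attacker would collect.
model = InversionDecoder()
sent_emb = torch.randn(4, 384)              # embeddings returned by the black-box encoder
tokens = torch.randint(0, 1000, (4, 12))    # token ids of the sentences to recover
logits = model(sent_emb, tokens[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), tokens[:, 1:].reshape(-1))
loss.backward()
```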
arXiv Detail & Related papers (2023-05-04T17:31:41Z)
- Principled Paraphrase Generation with Parallel Corpora [52.78059089341062]
We formalize the implicit similarity function induced by round-trip Machine Translation.
We show that it is susceptible to non-paraphrase pairs sharing a single ambiguous translation.
We design an alternative similarity metric that mitigates this issue.
arXiv Detail & Related papers (2022-05-24T17:22:42Z)
- Improving language models by retrieving from trillions of tokens [50.42630445476544]
We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus.
With a 2 trillion token database, our Retrieval-Enhanced Transformer (RETRO) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile.
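A rough sketch of the retrieval step this relies on (not DeepMind's implementation): each input chunk is matched against a pre-embedded text database by cosine similarity and the nearest chunks are handed to the decoder. The toy database, random embeddings, and embedding dimension below are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
database = [f"reference chunk {i}" for i in range(1000)]       # stand-in text corpus
db_embeddings = rng.normal(size=(len(database), 64))           # stand-in for frozen-encoder embeddings
db_embeddings /= np.linalg.norm(db_embeddings, axis=1, keepdims=True)

def retrieve(chunk_embedding, k=2):
    """Return the k database chunks closest to the query in cosine similarity."""
    q = chunk_embedding / np.linalg.norm(chunk_embedding)
    scores = db_embeddings @ q
    return [database[i] for i in np.argsort(scores)[::-1][:k]]

query = rng.normal(size=64)        # embedding of one chunk of the input sequence
neighbours = retrieve(query)
# In RETRO these neighbours enter the decoder through chunked cross-attention.
print(neighbours)
```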
arXiv Detail & Related papers (2021-12-08T17:32:34Z)
- Acyclic and Cyclic Reversing Computations in Petri Nets [0.0]
Reversible computations constitute an unconventional form of computing where any sequence of performed operations can be undone by executing it in reverse order at any point during a computation.
We have proposed a structural way of translating Reversing Petri Nets (RPNs) to bounded Coloured Petri Nets (CPNs).
Three reversing semantics are possible in RPNs: backtracking (reversing of the most recently executed action), causal reversing (an action can be reversed only when all of its effects have been undone), and out-of-causal-order reversing (any previously performed action can be reversed).
arXiv Detail & Related papers (2021-08-04T16:50:14Z)
- Duplex Sequence-to-Sequence Learning for Reversible Machine Translation [53.924941333388155]
Sequence-to-sequence (seq2seq) problems such as machine translation are bidirectional.
We propose a duplex seq2seq neural network, REDER, and apply it to machine translation.
Experiments on widely-used machine translation benchmarks verify that REDER achieves the first success of reversible machine translation.
arXiv Detail & Related papers (2021-05-07T18:21:57Z)
- Neural Syntactic Preordering for Controlled Paraphrase Generation [57.5316011554622]
Our work uses syntactic transformations to softly "reorder" the source sentence and guide our neural paraphrasing model.
First, given an input sentence, we derive a set of feasible syntactic rearrangements using an encoder-decoder model.
Next, we use each proposed rearrangement to produce a sequence of position embeddings, which encourages our final encoder-decoder paraphrase model to attend to the source words in a particular order.
arXiv Detail & Related papers (2020-05-05T09:02:25Z)
- Non-Autoregressive Machine Translation with Disentangled Context Transformer [70.95181466892795]
State-of-the-art neural machine translation models generate a translation from left to right and every step is conditioned on the previously generated tokens.
We propose an attention-masking based model, called Disentangled Context (DisCo) transformer, that simultaneously generates all tokens given different contexts.
Our model achieves competitive, if not better, performance compared to the state of the art in non-autoregressive machine translation while significantly reducing decoding time on average.
arXiv Detail & Related papers (2020-01-15T05:32:18Z)