Acyclic and Cyclic Reversing Computations in Petri Nets
- URL: http://arxiv.org/abs/2108.02167v1
- Date: Wed, 4 Aug 2021 16:50:14 GMT
- Title: Acyclic and Cyclic Reversing Computations in Petri Nets
- Authors: Kamila Barylska, Anna Gogolińska
- Abstract summary: Reversible computations constitute an unconventional form of computing where any sequence of performed operations can be undone by executing them in reverse order at any point during a computation.
We have proposed a structural way of translating Reversing Petri Nets (RPNs) to bounded Coloured Petri Nets (CPNs).
Three reversing semantics are possible in RPNs: backtracking (reversing of the most recently executed action), causal reversing (an action can be reversed only when all of its effects have been undone), and out-of-causal-order reversing (any previously performed action can be reversed).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reversible computations constitute an unconventional form of computing where
any sequence of performed operations can be undone by executing them in reverse
order at any point during a computation. Reversible computation has been attracting
increasing attention as it offers opportunities for low-power computation while being
essential or desirable in various applications. In recent work, we have proposed a
structural way of translating Reversing Petri Nets (RPNs), a type of Petri net that
embeds reversible computation, to bounded Coloured Petri Nets (CPNs), an extension of
traditional Petri nets where tokens carry data values. Three reversing semantics are
possible in RPNs: backtracking (reversing of the most recently executed action),
causal reversing (an action can be reversed only when all of its effects have been
undone), and out-of-causal-order reversing (any previously performed action can be
reversed). In this paper, we extend the RPN-to-CPN translation with formal proofs of
correctness. Moreover, we discuss the possibility of introducing cycles into RPNs and
analyze which types of cycles can be allowed in RPNs while remaining consistent with
the current semantics. It emerges that the most interesting case concerning cycles in
RPNs arises under the causal semantics, where different interpretations of dependency
lead to different behaviour of the net during reversing. Three definitions of
dependence are presented and discussed.
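
To make the three semantics concrete, here is a minimal sketch in Python (hypothetical code, not part of the paper's formal RPN model) that, given an execution history and a causal-dependency relation, lists which executed actions may be reversed under each semantics.

```python
# Minimal sketch (hypothetical, not the paper's formal semantics) of which
# executed actions may be reversed under the three reversing semantics.

def backtracking(history):
    """Backtracking: only the most recently executed action can be reversed."""
    return {history[-1]} if history else set()

def causal(history, depends_on):
    """Causal reversing: an action can be reversed only when every executed
    action that causally depends on it has already been undone."""
    executed = set(history)
    return {a for a in executed
            if not any(a in depends_on.get(b, set()) for b in executed)}

def out_of_causal_order(history):
    """Out-of-causal-order reversing: any executed action can be reversed."""
    return set(history)

history = ["t1", "t2", "t3"]               # executed in this order
depends_on = {"t2": {"t1"}, "t3": {"t2"}}  # t2 used t1's effects, t3 used t2's
print(backtracking(history))               # {'t3'}
print(causal(history, depends_on))         # {'t3'}  (t1 and t2 still have dependants)
print(out_of_causal_order(history))        # {'t1', 't2', 't3'}
```

In this toy run, backtracking and causal reversing coincide because the history happens to be a single causal chain; with independent actions, causal reversing would also allow earlier actions whose effects were never used.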
Related papers
- A Reversible Perspective on Petri Nets and Event Structures [0.0]
Event structures have emerged as a foundational model for concurrent computation.
Event structures have been extended to address reversibility, where processes can undo previous computations.
We introduce a subset of contextual Petri nets, dubbed reversible causal nets, that precisely correspond to reversible prime event structures.
arXiv Detail & Related papers (2023-12-27T20:47:48Z) - Formal Translation from Reversing Petri Nets to Coloured Petri Nets [0.0]
Reversing Petri nets are a recently-proposed extension of Petri nets that implements the three main forms of reversibility, namely, backtracking, causal reversing, and out-of-causal-order reversing.
We have proposed a structural translation from a subclass of RPNs to the model of Coloured Petri Nets (CPNs), an extension of traditional Petri nets where tokens carry data values.
In this paper, we extend the translation to handle RPNs with token multiplicity under the individual-token interpretation, a model which allows multiple tokens of the same type to exist in a system; a brief sketch of this interpretation appears after the list below.
arXiv Detail & Related papers (2023-11-01T16:28:38Z) - Tractable Bounding of Counterfactual Queries by Knowledge Compilation [51.47174989680976]
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters.
We show how a single symbolic knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values.
arXiv Detail & Related papers (2023-10-05T07:10:40Z) - Recurrence Boosts Diversity! Revisiting Recurrent Latent Variable in
Transformer-Based Variational AutoEncoder for Diverse Text Generation [85.5379146125199]
Variational Auto-Encoder (VAE) has been widely adopted in text generation.
We propose TRACE, a Transformer-based recurrent VAE structure.
arXiv Detail & Related papers (2022-10-22T10:25:35Z) - A Taxonomy of Recurrent Learning Rules [1.4186974630564675]
Backpropagation through time (BPTT) is the de facto standard for training recurrent neural networks (RNNs).
E-prop was proposed as a causal, local, and efficient practical alternative to these algorithms.
We derive RTRL from BPTT using a detailed notation bringing intuition and clarification to how they are connected.
arXiv Detail & Related papers (2022-07-23T07:03:42Z) - Bayesian Recurrent Units and the Forward-Backward Algorithm [91.39701446828144]
Using Bayes's theorem, we derive a unit-wise recurrence as well as a backward recursion similar to the forward-backward algorithm.
The resulting Bayesian recurrent units can be integrated as recurrent neural networks within deep learning frameworks.
Experiments on speech recognition indicate that adding the derived units at the end of state-of-the-art recurrent architectures can improve the performance at a very low cost in terms of trainable parameters.
arXiv Detail & Related papers (2022-07-21T14:00:52Z) - Approximate Fixed-Points in Recurrent Neural Networks [10.031004070657122]
Recurrent neural networks are widely used in speech and language processing.
Due to dependency on the past, standard algorithms for training these models cannot be efficiently parallelised.
This paper shows that recurrent neural networks can be reformulated as fixed-points of non-linear equation systems.
arXiv Detail & Related papers (2021-06-04T11:33:34Z) - Backpropagation Through Time For Networks With Long-Term Dependencies [0.0]
Backpropagation through time (BPTT) is a technique for updating tuned parameters within recurrent neural networks (RNNs).
We propose using the 'discrete forward sensitivity equation' and a variant of it for single and multiple interacting recurrent loops respectively.
This solution is exact and also allows the network's parameters to vary between each subsequent step, however it does require the computation of a Jacobian.
arXiv Detail & Related papers (2021-03-26T15:55:54Z) - Variance Penalized On-Policy and Off-Policy Actor-Critic [60.06593931848165]
We propose on-policy and off-policy actor-critic algorithms that optimize a performance criterion involving both mean and variance in the return.
Our approach not only performs on par with actor-critic and prior variance-penalization baselines in terms of expected return, but also generates trajectories which have lower variance in the return.
arXiv Detail & Related papers (2021-02-03T10:06:16Z) - Activation Relaxation: A Local Dynamical Approximation to
Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z) - NP-PROV: Neural Processes with Position-Relevant-Only Variances [113.20013269514327]
We present a new member named Neural Processes with Position-Relevant-Only Variances (NP-PROV)
NP-PROV hypothesizes that a target point close to a context point has small uncertainty, regardless of the function value at that position.
Our evaluation on synthetic and real-world datasets reveals that NP-PROV can achieve state-of-the-art likelihood while retaining a bounded variance.
arXiv Detail & Related papers (2020-06-15T06:11:21Z)
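
As a rough illustration of the individual-token interpretation mentioned in the "Formal Translation from Reversing Petri Nets to Coloured Petri Nets" entry above, the hypothetical Python sketch below treats every token instance as a distinct individual carrying its own identity and history; the class and field names are illustrative assumptions, not the paper's construction.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the individual-token interpretation: tokens of the
# same type (colour) are distinct individuals, each carrying its own identity
# and its own record of the transitions it has been involved in.

@dataclass
class Token:
    colour: str                                   # token type, e.g. "a"
    instance: int                                 # distinguishes same-coloured tokens
    history: list = field(default_factory=list)   # transitions this individual took part in

a1 = Token("a", 1)
a2 = Token("a", 2)          # same colour as a1, but a separate individual
a1.history.append("t1")     # firing t1 affects only this instance's history
print(a1 == a2)             # False: individuals are not interchangeable
```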
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.