Uncertainty Reasoning for Probabilistic Petri Nets via Bayesian Networks
- URL: http://arxiv.org/abs/2009.14817v1
- Date: Wed, 30 Sep 2020 17:40:54 GMT
- Title: Uncertainty Reasoning for Probabilistic Petri Nets via Bayesian Networks
- Authors: Rebecca Bernemann, Benjamin Cabrera, Reiko Heckel, and Barbara König
- Abstract summary: We exploit extended Bayesian networks for uncertainty reasoning on Petri nets.
In particular, Bayesian networks are used as symbolic representations of probability distributions.
We show how to derive information from a modular Bayesian net.
- Score: 1.471992435706872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper exploits extended Bayesian networks for uncertainty reasoning on
Petri nets, where firing of transitions is probabilistic. In particular,
Bayesian networks are used as symbolic representations of probability
distributions, modelling the observer's knowledge about the tokens in the net.
The observer can study the net by monitoring successful and failed steps.
An update mechanism for Bayesian nets is enabled by relaxing some of their
restrictions, leading to modular Bayesian nets that can conveniently be
represented and modified. As with any symbolic representation, the question is
how to derive information - in this case marginal probability distributions -
from a modular Bayesian net. We show how to do this by generalizing the known
method of variable elimination.
The approach is illustrated with examples on the spread of diseases (SIR
model) and information diffusion in social networks. We have implemented our
approach and report runtime results.
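To make the two ingredients of the abstract concrete, here is a minimal, self-contained sketch in Python: (1) the observer's belief about which places of a Petri net hold a token, represented as a product of Bayesian-net-style factors, and (2) a marginal probability recovered by variable elimination after conditioning on an observed failed step. This is not the authors' implementation or their modular Bayesian net formalism; the places p1, p2 and all probabilities are illustrative assumptions.

```python
# Toy sketch: factorised belief over Petri net tokens + variable elimination.
# Not the paper's code; places and probabilities are made up for illustration.
from itertools import product

class Factor:
    """A table mapping assignments of Boolean variables to non-negative weights."""
    def __init__(self, variables, table):
        self.variables = list(variables)   # e.g. ["p1", "p2"]
        self.table = dict(table)           # e.g. {(True, False): 0.3, ...}

    def value(self, assignment):
        return self.table[tuple(assignment[v] for v in self.variables)]

def multiply(f, g):
    """Pointwise product of two factors over the union of their variables."""
    variables = f.variables + [v for v in g.variables if v not in f.variables]
    table = {}
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        table[values] = f.value(a) * g.value(a)
    return Factor(variables, table)

def eliminate(f, var):
    """Sum a variable out of a factor (the core step of variable elimination)."""
    keep = [v for v in f.variables if v != var]
    table = {}
    for values in product([False, True], repeat=len(keep)):
        a = dict(zip(keep, values))
        table[values] = sum(f.value({**a, var: b}) for b in (False, True))
    return Factor(keep, table)

# Observer's prior belief about tokens on places p1 and p2, as independent factors.
prior_p1 = Factor(["p1"], {(False,): 0.5, (True,): 0.5})
prior_p2 = Factor(["p2"], {(False,): 0.7, (True,): 0.3})

# Observing that a transition consuming tokens from both p1 and p2 FAILED to
# fire rules out the joint state (p1=True, p2=True); encode this as a 0/1 factor.
failed_step = Factor(["p1", "p2"], {
    (False, False): 1.0, (False, True): 1.0,
    (True, False): 1.0, (True, True): 0.0,
})

# Combine all factors, eliminate p2, and normalise to get the marginal on p1.
joint = multiply(multiply(prior_p1, prior_p2), failed_step)
marginal = eliminate(joint, "p2")
z = sum(marginal.table.values())
print({k: v / z for k, v in marginal.table.items()})
# -> P(p1=True | failed step) = 0.35 / 0.85 ~ 0.41, down from the 0.5 prior.
```

In the paper's setting, the generalized variable elimination operates on modular Bayesian nets rather than plain factor tables, but the multiply-then-sum-out pattern sketched above is the same.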
Related papers
- Predicting Cascading Failures with a Hyperparametric Diffusion Model [66.89499978864741]
We study cascading failures in power grids through the lens of diffusion models.
Our model integrates viral diffusion principles with physics-based concepts.
We show that this diffusion model can be learned from traces of cascading failures.
arXiv Detail & Related papers (2024-06-12T02:34:24Z)
- A Note on Bayesian Networks with Latent Root Variables [56.86503578982023]
We show that the marginal distribution over the remaining, manifest variables also factorises as a Bayesian network, which we call empirical.
A dataset of observations of the manifest variables allows us to quantify the parameters of the empirical Bayesian net.
arXiv Detail & Related papers (2024-02-26T23:53:34Z)
- Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
arXiv Detail & Related papers (2024-02-08T22:05:45Z)
- Probabilistic Verification of ReLU Neural Networks via Characteristic Functions [11.489187712465325]
We use ideas from probability theory in the frequency domain to provide probabilistic verification guarantees for ReLU neural networks.
We interpret a (deep) feedforward neural network as a discrete dynamical system over a finite horizon.
We obtain the corresponding cumulative distribution function of the output set, which can be used to check if the network is performing as expected.
arXiv Detail & Related papers (2022-12-03T05:53:57Z)
- Reconsidering Dependency Networks from an Information Geometry Perspective [2.6778110563115542]
Dependency networks are potential probabilistic graphical models for systems comprising a large number of variables.
The structure of a dependency network is represented by a directed graph, and each node has a conditional probability table.
We show that the dependency network and the Bayesian network have roughly the same performance in terms of the accuracy of their learned distributions.
arXiv Detail & Related papers (2021-07-02T07:05:11Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Bayesian Inference by Symbolic Model Checking [0.0]
We present a simple translation from Bayesian networks into tree-like Markov chains.
We show that symbolic data structures such as multi-terminal binary decision diagrams (MTBDDs) are very effective for performing inference.
arXiv Detail & Related papers (2020-07-29T19:38:17Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)