Splitter Orderings for Probabilistic Bisimulation
- URL: http://arxiv.org/abs/2307.08614v1
- Date: Mon, 17 Jul 2023 16:30:19 GMT
- Title: Splitter Orderings for Probabilistic Bisimulation
- Authors: Mohammadsadegh Mohagheghi, Khayyam Salehi
- Abstract summary: We propose techniques to accelerate the iterative process of partitioning the state space of a given probabilistic model into its bisimulation classes.
The proposed approaches are implemented and run on several conventional case studies and reduce the running time by one order of magnitude on average.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model checking has been proposed as a formal verification approach for
analyzing computer-based and cyber-physical systems. The state space explosion
problem is the main obstacle for applying this approach for sophisticated
systems. Bisimulation minimization is a prominent method for reducing the
number of states in a labeled transition system and is used to alleviate the
challenges of the state space explosion problem. For systems with stochastic
behaviors, probabilistic bisimulation is used to reduce a given model to its
minimal equivalent. In recent years, several techniques have been
proposed to reduce the time complexity of the iterative methods for computing
probabilistic bisimulation of stochastic systems with nondeterministic
behaviors. In this paper, we propose several techniques to accelerate the
iterative partitioning of the state space of a given probabilistic model into
its bisimulation classes. The first technique applies two ordering heuristics for
choosing splitter blocks. The second technique uses hash tables to reduce the
running time and the average time complexity of the standard iterative method.
The proposed approaches are implemented and run on several conventional case
studies and reduce the running time by one order of magnitude on average.
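Since the abstract describes the method only at a high level, the following is a minimal, illustrative sketch of splitter-based partition refinement for the simpler case of a labelled discrete-time Markov chain (the paper targets probabilistic models with nondeterminism, such as MDPs). The function name, the smallest-splitter-first ordering, and the use of a plain dictionary to group states by their probability mass into the splitter are stand-in assumptions for the paper's specific ordering heuristics and hash-table scheme, not the authors' exact algorithm.

```python
from collections import defaultdict
from fractions import Fraction


def probabilistic_bisimulation(states, label, prob):
    """Coarsest strong probabilistic bisimulation of a labelled DTMC.

    states: iterable of hashable state identifiers
    label:  dict state -> atomic-proposition label (initial partition)
    prob:   dict (s, t) -> transition probability; Fraction values are
            recommended so that grouping by probability hashes exactly
    Returns a dict mapping each state to a block id.
    """
    # Initial partition: group states by their label.
    by_label = defaultdict(set)
    for s in states:
        by_label[label[s]].add(s)
    partition = {bid: frozenset(ss) for bid, ss in enumerate(by_label.values())}
    block_of = {s: bid for bid, ss in partition.items() for s in ss}

    # Predecessor map, so a splitter only touches states that can reach it.
    preds = defaultdict(set)
    for (s, t), p in prob.items():
        if p:
            preds[t].add(s)

    worklist = set(partition)  # every initial block is a potential splitter
    next_id = len(partition)

    while worklist:
        # Ordering heuristic (illustrative): process the smallest splitter first.
        splitter = min(worklist, key=lambda b: len(partition[b]))
        worklist.discard(splitter)
        C = partition[splitter]

        # States with positive probability into the splitter, grouped by block.
        touched = defaultdict(set)
        for t in C:
            for s in preds[t]:
                touched[block_of[s]].add(s)

        for bid, marked in touched.items():
            B = partition[bid]
            # Hash states by their total probability mass into C.
            by_mass = defaultdict(set)
            for s in B:
                mass = sum(prob.get((s, t), 0) for t in C) if s in marked else 0
                by_mass[mass].add(s)
            if len(by_mass) == 1:
                continue  # B is already stable with respect to this splitter
            # Replace B by its fragments; each fragment becomes a splitter again.
            del partition[bid]
            worklist.discard(bid)
            for fragment in by_mass.values():
                partition[next_id] = frozenset(fragment)
                for s in fragment:
                    block_of[s] = next_id
                worklist.add(next_id)
                next_id += 1

    return block_of


if __name__ == "__main__":
    # Toy chain: s0 and s1 carry the same label and the same probability mass
    # into each class, so they end up in one block; s2 stays separate.
    F = Fraction
    states = ["s0", "s1", "s2"]
    label = {"s0": "a", "s1": "a", "s2": "b"}
    prob = {("s0", "s0"): F(1, 2), ("s0", "s2"): F(1, 2),
            ("s1", "s1"): F(1, 2), ("s1", "s2"): F(1, 2),
            ("s2", "s2"): F(1)}
    print(probabilistic_bisimulation(states, label, prob))
```

Re-adding every fragment to the worklist keeps the sketch obviously correct at the cost of redundant work; the paper's contribution concerns precisely which pending splitter to pick next and how hashing keeps each refinement step cheap.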
Related papers
- Improving Probabilistic Bisimulation for MDPs Using Machine Learning [0.0]
We propose a new technique to partition the state space of a given model into its probabilistic bisimulation classes.
The approach can significantly decrease the running time compared to state-of-the-art tools.
arXiv Detail & Related papers (2023-07-30T12:58:12Z) - Numerical Methods for Convex Multistage Stochastic Optimization [86.45244607927732]
We focus on Stochastic Programming (SP), Stochastic Optimal Control (SOC) and Markov Decision Processes (MDP).
Recent progress in solving convex multistage stochastic problems is based on cutting-plane approximations of the cost-to-go functions of the dynamic programming equations (a generic form of such cuts is sketched after this list).
Cutting-plane methods can handle multistage problems with a large number of stages, but only a relatively small number of state (decision) variables.
arXiv Detail & Related papers (2023-03-28T01:30:40Z) - Entropic Neural Optimal Transport via Diffusion Processes [105.34822201378763]
We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between continuous probability distributions.
Our algorithm is based on the saddle point reformulation of the dynamic version of EOT which is known as the Schrödinger Bridge problem.
In contrast to the prior methods for large-scale EOT, our algorithm is end-to-end and consists of a single learning step.
arXiv Detail & Related papers (2022-11-02T14:35:13Z) - Learning neural state-space models: do we need a state estimator? [0.0]
We provide insights for calibration of neural state-space training algorithms based on extensive experimentation and analyses.
Specific focus is given to the choice and the role of the initial state estimation.
We demonstrate that advanced initial state estimation techniques are indeed required to achieve high performance on certain classes of dynamical systems.
arXiv Detail & Related papers (2022-06-26T17:15:35Z) - Second-order flows for computing the ground states of rotating
Bose-Einstein condensates [5.252966797394752]
Artificial evolutionary differential equations involving second-order time derivatives are considered, in contrast to conventional first-order gradient flows.
The proposed artificial dynamics are novel second-order hyperbolic partial differential equations with dissipation.
The new algorithms are superior to state-of-the-art numerical methods based on the gradient flow.
arXiv Detail & Related papers (2022-05-02T10:45:49Z) - Data-driven Prediction of Relevant Scenarios for Robust Optimization [0.0]
We study robust one- and two-stage problems with discrete uncertainty sets.
We propose a data-driven computation to seed the iterative solution method with a set of starting scenarios.
Our experiments show that predicting even a small number of good start scenarios by our method can considerably reduce the time of the iterative methods.
arXiv Detail & Related papers (2022-03-30T19:52:29Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance-reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - An application of the splitting-up method for the computation of a
neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximating the solution of the filtering equations is to use a PDE-inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - COSMIC: fast closed-form identification from large-scale data for LTV
systems [4.10464681051471]
We introduce a closed-form method for identification of discrete-time linear time-variant systems from data.
We develop an algorithm with guarantees of optimality and with a complexity that increases linearly with the number of instants considered per trajectory.
Our algorithm was applied to both a Low Fidelity Simulator and a Functional Engineering Simulator for the Comet Interceptor mission.
arXiv Detail & Related papers (2021-12-08T16:07:59Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
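One ingredient mentioned above can be stated compactly. The cutting-plane approximations referred to in the multistage stochastic optimization entry under-approximate the convex cost-to-go function of the dynamic programming recursion by a pointwise maximum of affine cuts; the notation below (cut coefficients \(\alpha_k, \beta_k\)) is generic and not taken from that paper:
\[
\mathcal{Q}_{t+1}(x_t) \;\ge\; \max_{1 \le k \le K} \left\{ \alpha_k + \beta_k^{\top} x_t \right\}.
\]
The stage-t subproblem then minimizes the immediate cost plus this lower bound in place of the exact cost-to-go, and each iteration adds a new cut that tightens the approximation.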
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.