Improving Probabilistic Bisimulation for MDPs Using Machine Learning
- URL: http://arxiv.org/abs/2308.02519v1
- Date: Sun, 30 Jul 2023 12:58:12 GMT
- Title: Improving Probabilistic Bisimulation for MDPs Using Machine Learning
- Authors: Mohammadsadegh Mohaghegh, Khayyam Salehi
- Abstract summary: We propose a new technique to partition the state space of a given model into its probabilistic bisimulation classes.
The approach can significantly decrease the running time compared with state-of-the-art tools.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model checking has been suggested as a formal verification technique for analyzing critical systems. However, the primary challenge in applying it to complex systems is the state space explosion problem. To address this issue, bisimulation minimization has emerged as a prominent method for reducing the number of states in a labeled transition system. For systems exhibiting stochastic behavior, probabilistic bisimulation is employed to minimize a given model, yielding an equivalent model with fewer states. Recently, various techniques have been introduced to decrease the time complexity of the iterative methods used to compute probabilistic bisimulation for stochastic systems that exhibit nondeterministic behavior. In this paper, we propose a new technique to partition the state space of a given probabilistic model into its bisimulation classes. The technique takes the PRISM program of the model, constructs several small instances of it, and trains a classifier on them; the classifier is then used to approximate the bisimulation partition of the full model. The resulting partition serves as the initial partition for the standard bisimulation algorithm, reducing its running time. Experimental results show that the approach can significantly decrease the running time compared with state-of-the-art tools.
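To make the pipeline concrete, here is a minimal sketch of how such a classifier-guided initial partition could look, assuming scikit-learn, per-state feature vectors, and block labels obtained by exactly minimizing the small instances; `exact_refinement` is a hypothetical stand-in for a standard partition-refinement procedure, not the authors' implementation.

```python
# Minimal sketch of the classifier-guided initial partition (a reading of
# the abstract, not the authors' code). Training data comes from small
# model instances that were minimized exactly, giving per-state feature
# vectors and their bisimulation-block labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_block_classifier(small_feats, small_labels):
    """Fit a classifier on states of small instances, labeled by the
    bisimulation block each state belongs to."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(small_feats, small_labels)
    return clf

def approximate_partition(clf, big_feats):
    """Predict a block for every state of the large model, yielding an
    approximate initial partition as a list of state-index sets."""
    preds = clf.predict(big_feats)
    partition = {}
    for state, block in enumerate(preds):
        partition.setdefault(block, set()).add(state)
    return list(partition.values())

# The prediction is only approximate, so it is handed to the standard
# algorithm as a starting point (`exact_refinement` is hypothetical):
#   blocks = approximate_partition(clf, big_feats)
#   final_partition = exact_refinement(big_model, initial=blocks)
```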
Related papers
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite, state spaces.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
- Splitter Orderings for Probabilistic Bisimulation [0.0]
We propose techniques to accelerate the iterative processes that partition the state space of a given probabilistic model into its bisimulation classes.
The proposed approaches are implemented and run on several conventional case studies and reduce the running time by one order of magnitude on average.
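As a rough illustration of what is being ordered, here is a generic splitter-based refinement loop for a Markov chain with a pluggable ordering of pending splitters; this is a simplification (no actions or nondeterminism) and not the paper's implementation.

```python
# Generic splitter-based refinement for a Markov chain, with a pluggable
# ordering of pending splitters (a simplification without actions or
# nondeterminism, not the paper's implementation). P[s] maps successor
# states to probabilities; `partition` is an initial list of frozensets.
def refine(P, partition, order_key=len):
    pending = sorted(partition, key=order_key)   # e.g. smallest block first
    blocks = set(partition)
    while pending:
        splitter = pending.pop(0)
        for block in list(blocks):
            # Group the block's states by probability mass into the splitter.
            sig = {}
            for s in block:
                mass = sum(p for t, p in P[s].items() if t in splitter)
                sig.setdefault(mass, set()).add(s)
            if len(sig) > 1:                     # the splitter splits this block
                blocks.discard(block)
                parts = [frozenset(v) for v in sig.values()]
                blocks.update(parts)
                pending = sorted([b for b in pending if b != block] + parts,
                                 key=order_key)
    return blocks
```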
arXiv Detail & Related papers (2023-07-17T16:30:19Z)
- Hierarchical-Hyperplane Kernels for Actively Learning Gaussian Process Models of Nonstationary Systems [5.1672267755831705]
We present a kernel family that incorporates a partitioning that is learnable via gradient-based methods.
We empirically demonstrate excellent performance on various active learning tasks.
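A guessed toy version of the idea (not the paper's kernel family): gate a stationary RBF kernel through sigmoid hyperplane memberships whose parameters are ordinary differentiable quantities.

```python
# Toy "soft-partition" kernel in the spirit of the summary (a guess, not
# the paper's kernel family). Sigmoid hyperplanes give each input a soft
# region membership, and inputs correlate mainly within shared regions;
# W and b are ordinary parameters, hence learnable by gradient methods.
import numpy as np

def soft_membership(X, W, b):
    return 1.0 / (1.0 + np.exp(-(X @ W.T + b)))          # (n, n_planes)

def partitioned_rbf(X1, X2, W, b, lengthscale=1.0):
    m1, m2 = soft_membership(X1, W, b), soft_membership(X2, W, b)
    gate = m1 @ m2.T + (1 - m1) @ (1 - m2).T             # same-side agreement
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return gate * np.exp(-0.5 * sq / lengthscale ** 2)   # PSD: product of kernels
```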
arXiv Detail & Related papers (2023-03-17T14:50:51Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
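The probabilistic representation in its simplest form is the Feynman-Kac formula; for the heat equation, a point value of the solution is just an average over Brownian particles (the paper turns such estimates into training targets for a neural solver, which is not shown here):

```python
# Simplest case of the probabilistic representation: for u_t = 0.5 * u_xx
# with initial data g, Feynman-Kac gives u(t, x) = E[g(x + W_t)], so a
# point value is an average over Brownian particles. (The neural-solver
# training built on top of such estimates is not shown.)
import numpy as np

def heat_mc(x, t, g, n_particles=100_000, seed=0):
    rng = np.random.default_rng(seed)
    endpoints = x + np.sqrt(t) * rng.standard_normal(n_particles)
    return g(endpoints).mean()

# With g = sin, the exact solution is exp(-t/2) * sin(x).
x, t = 0.7, 0.5
print("MC:", heat_mc(x, t, np.sin), " exact:", np.exp(-t / 2) * np.sin(x))
```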
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Learning non-stationary and discontinuous functions using clustering, classification and Gaussian process modelling [0.0]
We propose a three-stage approach for the approximation of non-smooth functions.
The idea is to split the space following the localized behaviors or regimes of the system and build local surrogates.
The approach is tested and validated on two analytical functions and a finite element model of a tensile membrane structure.
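A generic sketch of the three stages with scikit-learn (the clustering features, classifier, and kernels here are illustrative assumptions, not the paper's exact choices):

```python
# Generic sketch of the three stages (illustrative choices, not the
# paper's exact pipeline): 1) cluster to discover regimes, 2) learn a
# classifier mapping inputs to regimes, 3) fit a local GP per regime.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.where(X[:, 0] < 0, np.sin(5 * X[:, 0]), 2 + X[:, 0] ** 2)  # discontinuous

# Stage 1: cluster on (input, output) pairs so the jump separates regimes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.column_stack([X, y]))

# Stage 2: a classifier predicts the regime from inputs alone.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)

# Stage 3: one local GP surrogate per regime.
gps = {k: GaussianProcessRegressor().fit(X[labels == k], y[labels == k])
       for k in np.unique(labels)}

def predict(X_new):
    regs = clf.predict(X_new)
    return np.array([gps[r].predict(x[None, :])[0]
                     for r, x in zip(regs, X_new)])

print(predict(np.array([[-0.5], [0.5]])))
```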
arXiv Detail & Related papers (2022-11-30T11:11:56Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel doubly accelerated gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
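ADSGD itself is not reproduced below; the sketch only fixes the problem class, a sparsity-regularized finite sum, with a plain proximal stochastic gradient loop using momentum:

```python
# Not ADSGD; a plain proximal SGD-with-momentum baseline for the problem
# class the paper targets: min_w (1/n) sum_i (a_i.w - b_i)^2 + lam*||w||_1.
# ADSGD adds double acceleration and faster support identification on top
# of this kind of loop.
import numpy as np

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_sgd(A, b, lam=0.1, lr=0.05, beta=0.9, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w, v = np.zeros(d), np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = 2 * (A[i] @ w - b[i]) * A[i]       # one-sample gradient
            v = beta * v + grad                       # heavy-ball momentum
            w = soft_threshold(w - lr * v, lr * lam)  # prox step for the l1 term
    return w
```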
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Learning neural state-space models: do we need a state estimator? [0.0]
We provide insights for calibration of neural state-space training algorithms based on extensive experimentation and analyses.
Specific focus is given to the choice and the role of the initial state estimation.
We demonstrate that advanced initial state estimation techniques are required to achieve high performance on certain classes of dynamical systems.
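A linear toy makes the point: simulate an LTI system from a nonzero initial state and compare open-loop prediction error when the initial state is assumed zero versus estimated from a short warm-up window (the neural setting replaces this least-squares step with a learned estimator):

```python
# Linear toy illustrating the role of initial-state estimation: predict the
# output of x_{k+1} = A x_k, y_k = C x_k, comparing x0 := 0 against an x0
# estimated from a short warm-up window via the observability matrix.
import numpy as np

A = np.array([[0.9, 0.2], [-0.2, 0.9]])
C = np.array([[1.0, 0.0]])
x0_true = np.array([2.0, -1.0])

def simulate(x0, steps=50):
    x, ys = x0.copy(), []
    for _ in range(steps):
        ys.append(C @ x)
        x = A @ x
    return np.concatenate(ys)

y = simulate(x0_true)

# Least-squares x0 from the first m outputs: y_k = C A^k x0.
m = 5
obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(m)])
x0_hat = np.linalg.lstsq(obs, y[:m], rcond=None)[0]

for name, x0 in [("zero x0", np.zeros(2)), ("estimated x0", x0_hat)]:
    err = np.abs(simulate(x0) - y).mean()
    print(f"{name}: mean abs error = {err:.4f}")
```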
arXiv Detail & Related papers (2022-06-26T17:15:35Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration, yet guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
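A sketch of the ingredients (step sizes and estimator details differ from the paper's method): a SAG-style table keeps one gradient per sample, a single entry is refreshed each iteration, and the table average drives a Frank-Wolfe step over an l1-ball:

```python
# Sketch of a one-sample conditional-gradient loop: a SAG-style table keeps
# the last gradient seen for every sample, one entry is refreshed per
# iteration, and the table average drives a Frank-Wolfe step over an
# l1-ball (step sizes and estimator details differ from the paper).
import numpy as np

def one_sample_cgm(A, b, radius=1.0, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    table = np.zeros((n, d))                  # last gradient per sample
    avg = np.zeros(d)                         # running mean of the table
    for t in range(1, iters + 1):
        i = rng.integers(n)
        g = 2 * (A[i] @ w - b[i]) * A[i]      # fresh one-sample gradient
        avg += (g - table[i]) / n             # update SAG average in O(d)
        table[i] = g
        # Linear minimization oracle on the l1-ball: best signed vertex.
        j = np.argmax(np.abs(avg))
        s = np.zeros(d)
        s[j] = -radius * np.sign(avg[j])
        w += (2 / (t + 2)) * (s - w)          # classic Frank-Wolfe step size
    return w
```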
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Low-Rank Constraints for Fast Inference in Structured Models [110.38427965904266]
This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models.
Experiments with neural parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models at large state spaces.
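The core trick, shown on an HMM forward pass: constraining the N x N transition matrix to a rank-r factorization U V^T drops the per-step cost from O(N^2) to O(N r); the paper's parameterization and normalization details are not reproduced:

```python
# Core trick on an HMM forward pass: with the transition matrix
# constrained to T = U @ V.T (rank r), each alpha update costs O(N*r)
# instead of O(N^2). U, V are assumed to define a valid (nonnegative,
# row-stochastic) transition matrix; the paper's parameterization and
# normalization details are not reproduced here.
import numpy as np

def forward_lowrank(U, V, emit, init):
    """U, V: (N, r) factors of the transition matrix; emit: (T, N) emission
    likelihoods per time step; init: (N,) initial distribution."""
    alpha = init * emit[0]
    for t in range(1, emit.shape[0]):
        alpha = (V @ (U.T @ alpha)) * emit[t]   # two thin matvecs, O(N*r)
        alpha /= alpha.sum()                    # rescale for stability
    return alpha
```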
arXiv Detail & Related papers (2022-01-08T00:47:50Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated seamlessly with neural networks.
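A guess at the flavor (the paper's estimators and guarantees are not reproduced): replace the exact sum over latent states in one DP step with an unbiased importance-sampled estimate over k sampled states:

```python
# Guess at the flavor of randomized DP (the paper's estimators are not
# reproduced): the exact step alpha' = T.T @ alpha sums over all N latent
# states; here that sum is replaced by an unbiased importance-sampled
# estimate over k states, cutting the per-step cost from O(N^2) to O(N*k).
import numpy as np

def randomized_dp_step(T, alpha, k=64, seed=0):
    rng = np.random.default_rng(seed)
    p = alpha / alpha.sum()                     # sample states by weight
    idx = rng.choice(len(alpha), size=k, p=p)
    # Unbiased: E[estimate] = sum_j T[j, :] * alpha[j] = T.T @ alpha.
    return T[idx, :].T @ (alpha[idx] / (k * p[idx]))
```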
arXiv Detail & Related papers (2021-12-07T11:26:41Z)