Bayesian Statistical Model Checking for Multi-agent Systems using HyperPCTL*
- URL: http://arxiv.org/abs/2209.02672v1
- Date: Tue, 6 Sep 2022 17:36:28 GMT
- Title: Bayesian Statistical Model Checking for Multi-agent Systems using HyperPCTL*
- Authors: Spandan Das and Pavithra Prabhakar
- Abstract summary: We present a Bayesian method for statistical model checking (SMC) of probabilistic hyperproperties specified in the logic HyperPCTL* on discrete-time Markov chains (DTMCs).
Our algorithm outperforms the existing SPRT-based SMC approach both in verification time and in the number of samples required to deduce satisfiability of a given HyperPCTL* formula.
- Score: 6.574517227976925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a Bayesian method for statistical model checking
(SMC) of probabilistic hyperproperties specified in the logic HyperPCTL* on
discrete-time Markov chains (DTMCs). While SMC of HyperPCTL* using the sequential
probability ratio test (SPRT) has been explored before, we develop an
alternative SMC algorithm based on Bayesian hypothesis testing. In comparison
to PCTL*, verifying HyperPCTL* formulae is complex owing to their simultaneous
interpretation on multiple paths of the DTMC. In addition, extending the
bottom-up model-checking algorithm of the non-probabilistic setting is not
straightforward, because SMC does not return exact answers to the
satisfiability problems of subformulae; instead, it only returns correct
answers with high confidence. We propose a recursive algorithm for SMC of
HyperPCTL* based on a modified Bayes' test that factors in the uncertainty in
the recursive satisfiability results. We have implemented our algorithm in a
Python toolbox, HyProVer, and compared our approach with the SPRT based SMC.
Our experimental evaluation demonstrates that our Bayesian SMC algorithm
performs better both in terms of the verification time and the number of
samples required to deduce satisfiability of a given HyperPCTL* formula.
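To make the Bayesian hypothesis-testing step concrete, the following is a minimal Python sketch of the standard Beta-prior Bayes-factor test for a single threshold query of the form P>=theta(phi), i.e., the basic building block that Bayesian SMC rests on. The path sampler, the uniform Beta(1,1) prior, the Bayes-factor threshold T, and all function names are illustrative assumptions rather than the HyProVer API, and the paper's modified Bayes test that factors in uncertainty from recursively checked subformulae is not reproduced here.

```python
# Minimal sketch (not the HyProVer implementation): Bayesian hypothesis testing
# for H0: p >= theta vs. H1: p < theta, where p is the unknown probability that
# a sampled DTMC path satisfies a path formula. Prior, threshold T, and the
# sampler are illustrative assumptions.
import random
from scipy.stats import beta as beta_dist

def bayes_factor(successes, trials, theta, a=1.0, b=1.0):
    """Bayes factor of H0: p >= theta against H1: p < theta under a Beta(a, b) prior."""
    post = beta_dist(a + successes, b + trials - successes)
    p_h0 = post.sf(theta)        # posterior probability that p >= theta
    p_h1 = post.cdf(theta)       # posterior probability that p < theta
    prior = beta_dist(a, b)
    prior_h0 = prior.sf(theta)
    prior_h1 = prior.cdf(theta)
    # Bayes factor = posterior odds divided by prior odds.
    return (p_h0 / p_h1) * (prior_h1 / prior_h0)

def bayesian_smc(sample_path_satisfaction, theta, T=100.0, max_samples=100_000):
    """Sample paths until the Bayes factor crosses T (accept H0) or 1/T (accept H1)."""
    successes = 0
    for n in range(1, max_samples + 1):
        successes += int(sample_path_satisfaction())
        B = bayes_factor(successes, n, theta)
        if B >= T:
            return True, n       # strong evidence for p >= theta
        if B <= 1.0 / T:
            return False, n      # strong evidence for p < theta
    return None, max_samples     # undecided within the sample budget

if __name__ == "__main__":
    # Toy stand-in for simulating a DTMC path and checking a path formula:
    # each sample satisfies the formula with (hidden) probability 0.8.
    verdict, used = bayesian_smc(lambda: random.random() < 0.8, theta=0.7)
    print(f"P>=0.7 satisfied: {verdict} (after {used} samples)")
```

Under this scheme the test stops as soon as the accumulated samples make one hypothesis at least T times more plausible than the other, which is why the number of samples, and hence the verification time, adapts to how far the true satisfaction probability lies from the threshold theta.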
Related papers
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Bayesian Quantum State Tomography with Python's PyMC [0.0]
We show how to use Python-3's open source PyMC probabilistic programming package to transform an otherwise complicated QST optimization problem into a simple form.
arXiv Detail & Related papers (2022-12-20T21:16:28Z)
- Formal Controller Synthesis for Markov Jump Linear Systems with Uncertain Dynamics [64.72260320446158]
We propose a method for synthesising controllers for Markov jump linear systems.
Our method is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS.
We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
arXiv Detail & Related papers (2022-12-01T17:36:30Z)
- Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z)
- Convex Optimization for Parameter Synthesis in MDPs [19.808494349302784]
Probabilistic model checking aims to prove whether a Markov decision process satisfies a temporal logic specification.
We develop two approaches that iteratively obtain locally optimal runtime solutions.
We demonstrate the approaches on a satellite parameter synthesis problem with hundreds of thousands of parameters and their scalability on a wide range of commonly used benchmarks.
arXiv Detail & Related papers (2021-06-30T21:23:56Z)
- Stochastic Gradient MCMC with Multi-Armed Bandit Tuning [2.2559617939136505]
We propose a novel bandit-based algorithm that tunes SGMCMC hyperparameters to maximize the accuracy of the posterior approximation.
We support our results with experiments on both simulated and real datasets, and find that this method is practical for a wide range of application areas.
arXiv Detail & Related papers (2021-05-27T11:00:31Z)
- Adaptive Sequential SAA for Solving Two-stage Stochastic Linear Programs [1.6181085766811525]
We present adaptive sequential SAA (sample average approximation) algorithms to solve large-scale two-stage linear programs.
The proposed algorithm can be stopped in finite time to return a solution endowed with a probabilistic guarantee on quality.
arXiv Detail & Related papers (2020-12-07T14:58:16Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Planning in Markov Decision Processes with Gap-Dependent Sample Complexity [48.98199700043158]
We propose MDP-GapE, a new trajectory-based Monte-Carlo Tree Search algorithm for planning in a Markov Decision Process.
We prove an upper bound on the number of calls to the generative models needed for MDP-GapE to identify a near-optimal action with high probability.
arXiv Detail & Related papers (2020-06-10T15:05:51Z)