Proving Non-Inclusion of B\"uchi Automata based on Monte Carlo Sampling
- URL: http://arxiv.org/abs/2007.02282v2
- Date: Tue, 7 Jul 2020 01:26:19 GMT
- Title: Proving Non-Inclusion of B\"uchi Automata based on Monte Carlo Sampling
- Authors: Yong Li, Andrea Turrini, Xuechao Sun, Lijun Zhang
- Abstract summary: We present a specific algorithm for proving B\"uchi automata non-inclusion $\mathcal{L}(\mathcal{A}) \not\subseteq \mathcal{L}(\mathcal{B})$.
$\mathsf{IMC}^2$ is a fast and reliable way to find counterexamples to B\"uchi automata inclusion.
- Score: 19.09848789158933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The search for a proof of correctness and the search for counterexamples
(bugs) are complementary aspects of verification. In order to maximize the
practical use of verification tools, it is better to pursue them at the same
time. While this is well-understood in the termination analysis of programs,
this is not the case for the language inclusion analysis of B\"uchi automata,
where research mainly focused on improving algorithms for proving language
inclusion, with the search for counterexamples left to the expensive
complementation operation.
In this paper, we present $\mathsf{IMC}^2$, a specific algorithm for proving
B\"uchi automata non-inclusion $\mathcal{L}(\mathcal{A}) \not\subseteq
\mathcal{L}(\mathcal{B})$, based on Grosu and Smolka's algorithm
$\mathsf{MC}^2$ developed for Monte Carlo model checking against LTL formulas.
The algorithm we propose takes $M = \lceil \ln \delta / \ln (1-\epsilon)
\rceil$ random lasso-shaped samples from $\mathcal{A}$ to decide whether to
reject the hypothesis $\mathcal{L}(\mathcal{A}) \not\subseteq
\mathcal{L}(\mathcal{B})$, for given error probability $\epsilon$ and
confidence level $1 - \delta$. With such a number of samples, $\mathsf{IMC}^2$
ensures that the probability of witnessing $\mathcal{L}(\mathcal{A})
\not\subseteq \mathcal{L}(\mathcal{B})$ via further sampling is less than
$\delta$, under the assumption that the probability of finding a lasso
counterexample is larger than $\epsilon$. Extensive experimental evaluation
shows that $\mathsf{IMC}^2$ is a fast and reliable way to find counterexamples
to B\"uchi automata inclusion.
Related papers
- Simple and Nearly-Optimal Sampling for Rank-1 Tensor Completion via Gauss-Jordan [49.1574468325115]
We revisit the sample and computational complexity of completing a rank-1 tensor in $\otimes_{i=1}^{N} \mathbb{R}^{d}$.
We present a characterization of the problem which admits an algorithm amounting to Gauss-Jordan on a pair of random linear systems.
arXiv Detail & Related papers (2024-08-10T04:26:19Z) - Statistical Query Lower Bounds for Learning Truncated Gaussians [43.452452030671694]
We show that the complexity of any SQ algorithm for this problem is $d^{\mathrm{poly}(1/\epsilon)}$, even when the class $\mathcal{C}$ is simple so that $\mathrm{poly}(d/\epsilon)$ samples information-theoretically suffice.
arXiv Detail & Related papers (2024-03-04T18:30:33Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs [4.130591018565202]
The Learning With Errors ($\mathsf{LWE}$) problem asks to find $\mathbf{s}$ from an input of the form $(\mathbf{A}, \mathbf{A}\mathbf{s}+\mathbf{e})$.
We do not focus on solving $\mathsf{LWE}$ but on the task of sampling instances.
Our main result is a quantum polynomial-time algorithm that samples well-distributed $\mathsf{LWE}$ instances while provably not knowing the solution.
arXiv Detail & Related papers (2024-01-08T10:55:41Z) - Sample-Optimal Locally Private Hypothesis Selection and the Provable
Benefits of Interactivity [8.100854060749212]
We study the problem of hypothesis selection under the constraint of local differential privacy.
We devise an $\varepsilon$-locally-differentially-private ($\varepsilon$-LDP) algorithm that uses $\Theta\left(\frac{k\log k}{\alpha^2\min\{\varepsilon^2,1\}}\right)$ samples to guarantee that $d_{TV}(h,\hat{f}) \leq \alpha + 9 \min_{f\in\mathcal{F}}\dots$
arXiv Detail & Related papers (2023-12-09T19:22:10Z) - Optimal Estimator for Linear Regression with Shuffled Labels [17.99906229036223]
This paper considers the task of linear regression with shuffled labels.
$\mathbf{Y} \in \mathbb{R}^{n\times m}$, $\mathbf{\Pi} \in \mathbb{R}^{n\times p}$, $\mathbf{B} \in \mathbb{R}^{p\times m}$, and $\mathbf{W} \in \mathbb{R}^{n\times m}$, respectively.
arXiv Detail & Related papers (2023-10-02T16:44:47Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\,\mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Efficient Statistics for Sparse Graphical Models from Truncated Samples [19.205541380535397]
We focus on two fundamental and classical problems: (i) inference of sparse Gaussian graphical models and (ii) support recovery of sparse linear models.
For sparse linear regression, suppose samples $(\mathbf{x}, y)$ are generated where $y = \mathbf{x}^{\top}\Omega^{*} + \mathcal{N}(0,1)$ and $(\mathbf{x}, y)$ is seen only if $y$ belongs to a truncation set $S \subseteq \mathbb{R}^d$.
arXiv Detail & Related papers (2020-06-17T09:21:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.