Neural Model Checking
- URL: http://arxiv.org/abs/2410.23790v1
- Date: Thu, 31 Oct 2024 10:17:04 GMT
- Title: Neural Model Checking
- Authors: Mirco Giacobbe, Daniel Kroening, Abhinandan Pal, Michael Tautschnig
- Abstract summary: We introduce a machine learning approach to model checking temporal logic, with application to formal hardware verification.
Our new approach combines machine learning and symbolic reasoning by using neural networks as formal proof certificates for linear temporal logic.
We experimentally demonstrate that our method outperforms the state-of-the-art academic and commercial model checkers on a set of standard hardware designs written in SystemVerilog.
- Score: 10.376573742987917
- Abstract: We introduce a machine learning approach to model checking temporal logic, with application to formal hardware verification. Model checking answers the question of whether every execution of a given system satisfies a desired temporal logic specification. Unlike testing, model checking provides formal guarantees. Its application is standard practice in silicon design, and the EDA industry has invested decades into the development of performant symbolic model checking algorithms. Our new approach combines machine learning and symbolic reasoning by using neural networks as formal proof certificates for linear temporal logic. We train our neural certificates from randomly generated executions of the system, and we then symbolically check their validity using satisfiability solving, which, upon an affirmative answer, establishes that the system provably satisfies the specification. We leverage the expressive power of neural networks to represent proof certificates, as well as the fact that checking a certificate is much simpler than finding one. As a result, our machine learning procedure for model checking is entirely unsupervised, formally sound, and practically effective. We experimentally demonstrate that our method outperforms the state-of-the-art academic and commercial model checkers on a set of standard hardware designs written in SystemVerilog.
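To make the train-then-check loop concrete, the sketch below substitutes a toy counter for real SystemVerilog hardware and a simple ranking-function certificate for the paper's LTL certificates; the system, the certificate shape, and all names are illustrative assumptions, not the authors' implementation.

```python
# A minimal train-then-check sketch. Assumptions: a toy counter system
# (x -> x - 1 for x in [1, 10]) stands in for SystemVerilog hardware, and a
# ranking-function certificate stands in for the paper's LTL certificates.
import numpy as np
from z3 import And, If, Real, Solver, sat  # pip install z3-solver

rng = np.random.default_rng(0)

# 1. Sample executions of the system: (state, successor) pairs.
xs = rng.integers(1, 11, size=256).astype(float)
pairs = np.stack([xs, xs - 1.0], axis=1)

# 2. Train a tiny ReLU network V so that V(x') <= V(x) - 1 on the samples.
H = 8
W1, b1 = rng.normal(size=H), np.zeros(H)
w2 = rng.normal(size=H)

def forward(x):
    h = np.maximum(W1 * x + b1, 0.0)  # hidden ReLU activations
    return w2 @ h, h

lr = 1e-3
for _ in range(10000):
    x, xp = pairs[rng.integers(len(pairs))]
    vx, hx = forward(x)
    vxp, hxp = forward(xp)
    if vxp - vx + 1.0 > 0.0:          # hinge on the decrease condition
        mx, mxp = (hx > 0).astype(float), (hxp > 0).astype(float)
        g_w2 = hxp - hx               # gradients of the violated margin
        g_W1 = w2 * mxp * xp - w2 * mx * x
        g_b1 = w2 * mxp - w2 * mx
        w2 -= lr * g_w2
        W1 -= lr * g_W1
        b1 -= lr * g_b1

# 3. Symbolically check the certificate over *all* states, not just samples.
def V(x):
    h = [If(float(W1[i]) * x + float(b1[i]) > 0,
            float(W1[i]) * x + float(b1[i]), 0) for i in range(H)]
    return sum(float(w2[i]) * h[i] for i in range(H))

x = Real("x")
solver = Solver()
solver.add(And(x >= 1, x <= 10))   # any non-final state ...
solver.add(V(x - 1) > V(x) - 1)    # ... where the decrease condition fails
if solver.check() == sat:
    print("not yet a certificate; counterexample:", solver.model())
else:
    print("valid certificate: every run provably reaches x = 0")
```

If the solver reports unsat, no state violates the decrease condition, so the trained network is a valid proof that every run of the counter reaches zero; a sat answer yields a concrete counterexample state on which one could retrain.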
Related papers
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state spaces.
Our technique yields faster verification results than alternative state-of-the-art tools in practice (a classic partition-refinement baseline is sketched below).
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
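For context on the bisimulation entry above: the textbook, non-learning way to compute a finite bisimulation is partition refinement. Below is a minimal Python sketch on a hypothetical five-state labelled transition system; the paper's learning-based technique for large or infinite state spaces is not reproduced here.

```python
# Partition refinement: split blocks until all states in a block reach the
# same set of blocks. The transition system and labels are hypothetical.
succ = {0: {1}, 1: {2}, 2: {2}, 3: {4}, 4: {2}}   # successor states
label = {0: "a", 1: "a", 2: "b", 3: "a", 4: "a"}  # observable labels

# Start from the partition induced by the labels.
blocks = {}
for s, l in label.items():
    blocks.setdefault(l, set()).add(s)
partition = list(blocks.values())

changed = True
while changed:
    changed = False
    for block in partition:
        # Signature of a state: the set of blocks it can step into.
        def sig(s):
            return frozenset(i for i, b in enumerate(partition)
                             if succ.get(s, set()) & b)
        groups = {}
        for s in block:
            groups.setdefault(sig(s), set()).add(s)
        if len(groups) > 1:           # block is unstable: split it
            partition.remove(block)
            partition.extend(groups.values())
            changed = True
            break                     # restart on the refined partition

print(partition)  # e.g. [{2}, {0, 3}, {1, 4}]: blocks of bisimilar states
```

On very large or infinite state spaces this explicit enumeration is infeasible, which is the gap the learning-based approach above targets.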
- Requirement falsification for cyber-physical systems using generative models [1.90365714903665]
OGAN can find inputs that are counterexamples for the safety of a system, revealing design, software, or hardware defects before the system is taken into operation.
OGAN executes tests atomically and does not require any previous model of the system under test.
OGAN can be applied to new systems with little effort, has few requirements for the system under test, and exhibits state-of-the-art CPS falsification efficiency and effectiveness (a minimal falsification loop is sketched below).
arXiv Detail & Related papers (2023-10-31T14:32:54Z)
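The falsification idea behind the OGAN entry above, reduced to its core: search for a test input whose quantitative robustness score goes negative, which witnesses a requirement violation. The sketch below uses plain random search as a stand-in for OGAN's generative model; the system, the requirement, and all names are hypothetical.

```python
# Requirement falsification by search over test inputs. A negative
# robustness value means the requirement is violated. Random search here
# stands in for OGAN's learned generator; the system is hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def system(u):
    # Hypothetical system under test: inputs integrate into a position trace.
    return 0.1 * np.cumsum(u)

def robustness(y):
    # Requirement "the position magnitude stays below 0.5":
    # positive margin if satisfied, negative if falsified.
    return 0.5 - np.max(np.abs(y))

best = np.inf
for i in range(1000):
    u = rng.uniform(-1.0, 1.0, size=50)   # candidate test input
    r = robustness(system(u))
    best = min(best, r)
    if r < 0:
        print(f"counterexample found at test {i}, robustness {r:.3f}")
        break
else:
    print(f"no counterexample in 1000 tests; best robustness {best:.3f}")
```

OGAN's advantage over such blind sampling is that its generator is trained online to propose low-robustness inputs, reducing the number of test executions needed; as the summary notes, it requires no prior model of the system under test.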
- Representing Timed Automata and Timing Anomalies of Cyber-Physical Production Systems in Knowledge Graphs [51.98400002538092]
This paper aims to improve model-based anomaly detection in CPPS by combining the learned timed automaton with a formal knowledge graph about the system.
Both the model and the detected anomalies are described in the knowledge graph, allowing operators to interpret them more easily.
arXiv Detail & Related papers (2023-08-25T15:25:57Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs (the underlying bound propagation is sketched below).
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
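Behind the QA-IBP entry above is generic interval bound propagation: push an input box through the network layer by layer using sound interval arithmetic. The sketch below shows plain floating-point IBP for a small affine-ReLU network; the quantization-aware training of the paper is not reproduced, and all weights and shapes are illustrative.

```python
# Interval bound propagation (IBP): compute sound element-wise output
# bounds for all inputs in an L-infinity ball. Weights here are random
# placeholders; QA-IBP's quantization-aware part is not shown.
import numpy as np

def ibp_affine(lo, hi, W, b):
    # Positive weights map lo->lo and hi->hi; negative weights swap them.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def ibp_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x, eps = np.array([0.5, -0.3]), 0.1
lo, hi = x - eps, x + eps              # input box around x
lo, hi = ibp_relu(*ibp_affine(lo, hi, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)
print("certified output bounds:", lo, hi)
# Robustness check: if the true logit's lower bound exceeds every other
# logit's upper bound, the network is certifiably robust on this box.
```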
- Stateless and Rule-Based Verification For Compliance Checking Applications [1.7403133838762452]
We present a formal logic-based framework for creating intelligent compliance checking systems.
SARV, the resulting verification framework, is designed to simplify the overall verification process for stateless and rule-based verification problems.
Across 300 experiments, the SARV-based compliance solution outperforms machine learning methods on a software quality dataset of 3,125 records.
arXiv Detail & Related papers (2022-04-14T17:31:33Z)
- Active Learning of Markov Decision Processes using Baum-Welch algorithm (Extended) [0.0]
This paper revisits and adapts the classic Baum-Welch algorithm for learning Markov decision processes and Markov chains.
We empirically compare our approach with state-of-the-art tools and demonstrate that the proposed active learning procedure can significantly reduce the number of observations required to obtain accurate models (a compact Baum-Welch loop is sketched below).
arXiv Detail & Related papers (2021-10-06T18:54:19Z)
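For the entry above, here is a compact version of the classic Baum-Welch loop that the paper adapts: forward-backward expectations (E-step) followed by re-estimation of the parameters (M-step), shown for a two-state, two-symbol hidden Markov model on synthetic data. The paper's extensions to Markov decision processes and active querying are not reproduced.

```python
# The classic Baum-Welch loop: E-step via scaled forward-backward, M-step
# via re-estimation from expected counts. Two states, two symbols,
# synthetic observations; illustrative only, not the paper's MDP variant.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.integers(0, 2, size=200)        # observed symbol sequence
T = len(obs)

A = np.full((2, 2), 0.5)                  # transitions (rows sum to 1)
B = np.array([[0.6, 0.4],                 # emissions per state
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])                 # initial state distribution

for _ in range(50):
    # E-step: scaled forward-backward.
    alpha = np.zeros((T, 2))
    beta = np.ones((T, 2))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta                  # state posteriors per time step
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((2, 2))                 # expected transition counts
    for t in range(T - 1):
        m = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi += m / m.sum()
    # M-step: re-estimate the parameters from the expectations.
    pi = gamma[0]
    A = xi / xi.sum(axis=1, keepdims=True)
    for k in (0, 1):
        B[:, k] = gamma[obs == k].sum(axis=0)
    B /= B.sum(axis=1, keepdims=True)

print("estimated transition matrix:\n", A)
```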
- A Novel Anomaly Detection Algorithm for Hybrid Production Systems based on Deep Learning and Timed Automata [73.38551379469533]
DAD (DeepAnomalyDetection) is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
arXiv Detail & Related papers (2020-10-29T08:27:43Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Verification of ML Systems via Reparameterization [6.482926592121413]
We show how a probabilistic program can be automatically represented in a theorem prover.
We also prove that the null model used in a Bayesian hypothesis test satisfies a fairness criterion called demographic parity.
arXiv Detail & Related papers (2020-07-14T02:19:25Z)
- High-level Modeling of Manufacturing Faults in Deep Neural Network Accelerators [2.6258269516366557]
Google's Tensor Processing Unit (TPU) is a neural network accelerator that uses systolic array-based matrix multiplication hardware at its core.
Manufacturing faults at any state element of the matrix multiplication unit can cause unexpected errors in these inference networks.
We propose a formal model of permanent faults and their propagation in a TPU using the Discrete-Time Markov Chain (DTMC) formalism (a toy DTMC is sketched below).
arXiv Detail & Related papers (2020-06-05T18:11:14Z)
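As a companion to the fault-model entry above, the sketch below builds a toy discrete-time Markov chain with an absorbing error state and computes the probability of reaching it within n steps; the states and probabilities are hypothetical placeholders, not taken from the paper's TPU model.

```python
# A toy DTMC for fault propagation: healthy -> faulty -> error, with
# 'error' absorbing. All probabilities are hypothetical placeholders.
import numpy as np

P = np.array([
    [0.98, 0.02, 0.00],   # healthy: mostly stays healthy
    [0.00, 0.90, 0.10],   # faulty: may escalate to an observable error
    [0.00, 0.00, 1.00],   # error: absorbing
])

for n in (1, 10, 100):
    dist = np.linalg.matrix_power(P, n)[0]   # distribution starting healthy
    print(f"P(error within {n} steps) = {dist[2]:.4f}")
```

Because the error state is absorbing, the last entry of the distribution is exactly the probability that an error has occurred by step n; probabilistic model checkers compute such reachability probabilities exactly from the same formalism.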
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.