Augmenting Interpolation-Based Model Checking with Auxiliary Invariants (Extended Version)
- URL: http://arxiv.org/abs/2403.07821v2
- Date: Fri, 25 Oct 2024 14:10:42 GMT
- Title: Augmenting Interpolation-Based Model Checking with Auxiliary Invariants (Extended Version)
- Authors: Dirk Beyer, Po-Chun Chien, Nian-Ze Lee
- Abstract summary: We propose an augmented interpolation-based verification algorithm that injects external invariants into interpolation-based model checking.
We found that injecting invariants reduces the number of interpolation queries needed to prove safety properties and improves the run-time efficiency.
- Score: 4.309079367183251
- License:
- Abstract: Software model checking is a challenging problem, and generating relevant invariants is a key factor in proving the safety properties of a program. Program invariants can be obtained by various approaches, including lightweight procedures based on data-flow analysis and intensive techniques using Craig interpolation. Although data-flow analysis runs efficiently, it often produces invariants that are too weak to prove the properties. By contrast, interpolation-based approaches build strong invariants from interpolants, but they might not scale well due to expensive interpolation procedures. Invariants can also be injected into model-checking algorithms to assist the analysis. Invariant injection has been studied for many well-known approaches, including k-induction, predicate abstraction, and symbolic execution. We propose an augmented interpolation-based verification algorithm that injects external invariants into interpolation-based model checking (McMillan, 2003), a hardware model-checking algorithm recently adopted for software verification. The auxiliary invariants help prune unreachable states in Craig interpolants and confine the analysis to the reachable parts of a program. We implemented the proposed technique in the verification framework CPAchecker and evaluated it against mature SMT-based methods in CPAchecker as well as other state-of-the-art software verifiers. We found that injecting invariants reduces the number of interpolation queries needed to prove safety properties and improves the run-time efficiency. Consequently, the proposed invariant-injection approach verified difficult tasks that neither its plain version (i.e., without invariants), the invariant generator, nor any of the compared tools could solve.
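To make the injection step concrete, here is a minimal, set-based Python analogue of the idea. It is a sketch under strong simplifications, not the paper's algorithm: the Craig interpolant is replaced by an interval hull over a toy finite-state system, and the auxiliary invariant stands in for the externally generated data-flow invariants; the transition system, the function names, and the evenness invariant are all assumptions made for illustration.

```python
# Set-based analogue of invariant injection in interpolation-based model
# checking (IMC). Real IMC over-approximates reachable states with Craig
# interpolants extracted from unsatisfiable BMC queries; here the
# over-approximation is a simple interval hull over a toy system.

def interval_hull(states):
    """Stand-in for a Craig interpolant: a coarse over-approximation."""
    lo, hi = min(states), max(states)
    return set(range(lo, hi + 1))

def post(states):
    """Transition relation of the toy system: x' = (x + 2) mod 16."""
    return {(x + 2) % 16 for x in states}

def imc_like(init, bad, aux_inv=None):
    """Iterate over-approximated post-images to a fixed point.

    Returns "safe" if the closure avoids `bad`, and "unknown" if the
    over-approximation hits a bad state (real IMC would then refine by
    unrolling the transition relation further).
    """
    reach = set(init)
    while True:
        approx = interval_hull(reach | post(reach))  # interpolant stand-in
        if aux_inv is not None:
            approx &= aux_inv                        # invariant injection
        if approx & bad:
            return "unknown"   # over-approximation too coarse
        if approx <= reach:
            return "safe"      # fixed point: an inductive invariant
        reach |= approx

init, bad = {0}, {5}                          # property: never reach state 5
evens = set(range(0, 16, 2))                  # cheap auxiliary invariant
print(imc_like(init, bad))                    # unknown: hull contains 5
print(imc_like(init, bad, aux_inv=evens))     # safe: odd states pruned
```

Without injection, the interval hull eventually covers the unreachable bad state and the proof attempt fails; intersecting each over-approximation with the auxiliary invariant prunes those states so the fixed point closes, mirroring how the paper's injected invariants confine the analysis to the reachable parts of a program.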
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Learning Non-Linear Invariants for Unsupervised Out-of-Distribution Detection [5.019613806273252]
We propose a framework consisting of a normalizing flow-like architecture capable of learning non-linear invariants.
Our approach achieves state-of-the-art results on an extensive U-OOD benchmark.
arXiv Detail & Related papers (2024-07-04T16:01:21Z)
- Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models [102.72940700598055]
In reasoning tasks, even a minor error can cascade into inaccurate results.
We develop a method that avoids introducing external resources, relying instead on perturbations to the input.
Our training approach randomly masks certain tokens within the chain of thought, a technique we found to be particularly effective for reasoning tasks (see the toy masking sketch after this list).
arXiv Detail & Related papers (2024-03-04T16:21:54Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Online Arbitrary Shaped Clustering through Correlated Gaussian Functions [0.0]
A novel online clustering algorithm is presented that can produce arbitrary shaped clusters from inputs in an unsupervised manner.
The algorithm can be deemed more biologically plausible than model optimization through backpropagation, although practical applicability may require additional research.
arXiv Detail & Related papers (2023-02-13T13:12:55Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Interpolation and SAT-Based Model Checking Revisited: Adoption to Software Verification [3.9403985159394144]
Interpolation-based model checking was originally devised to verify safety properties of finite-state transition systems.
Although 20 years old, the algorithm is still state-of-the-art in hardware model checking.
Our contribution is to close this significant, two-decades-old gap in knowledge by adopting the algorithm to software verification.
arXiv Detail & Related papers (2022-08-09T21:33:19Z)
- Data-driven Numerical Invariant Synthesis with Automatic Generation of Attributes [0.0]
We propose a data-driven algorithm for numerical invariant synthesis and verification.
The algorithm is based on the ICE-DT schema for learning decision trees from samples of positive and negative states (see the learning sketch after this list).
arXiv Detail & Related papers (2022-05-30T09:18:37Z)
- Incremental Verification of Fixed-Point Implementations of Neural Networks [0.19573380763700707]
We develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference.
Our approach was able to verify and produce examples for 85.8% of 21 test cases considering different input images, and 100% of the properties related to covering methods.
arXiv Detail & Related papers (2020-12-21T10:03:44Z)
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- Joint Contrastive Learning with Infinite Possibilities [114.45811348666898]
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL).
arXiv Detail & Related papers (2020-09-30T16:24:21Z)
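As referenced in the Masked Thought entry above, the following is a toy sketch of random token masking within a reasoning trace. It illustrates only the masking idea, not the paper's training pipeline; the mask token, the masking rate, and the example trace are assumptions.

```python
import random

MASK = "<mask>"

def mask_reasoning(tokens, rate=0.15, seed=0):
    """Randomly replace a fraction of chain-of-thought tokens with a
    mask token, as an input perturbation during training."""
    rng = random.Random(seed)
    return [MASK if rng.random() < rate else t for t in tokens]

cot = "first add 3 and 4 to get 7 then multiply by 2 to get 14".split()
print(" ".join(mask_reasoning(cot)))
```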
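As referenced in the data-driven invariant-synthesis entry above, here is a minimal sketch of ICE-DT-style invariant learning over generated attributes, assuming scikit-learn is available. The toy loop, the attribute template, and all names are assumptions for illustration; full ICE-DT also learns from implication counterexamples and iterates with a verifier, both omitted here.

```python
# Learn a candidate invariant for the toy loop
#   x = y = 0; while x < 10: x += 1; y += 2
# with safety property y <= 2*x. Positive samples are reachable states,
# negative samples violate the property; a decision tree over derived
# numerical attributes separates them, and the predicate it learns is a
# candidate invariant to hand to a verifier.
from sklearn.tree import DecisionTreeClassifier, export_text

def attributes(x, y):
    # "Automatic attribute generation", simplified here to a fixed
    # template of linear combinations of the program variables.
    return [x, y, y - 2 * x]

positives = [attributes(x, 2 * x) for x in range(11)]   # reachable states
negatives = [attributes(x, 2 * x + d)                   # property violations
             for x in range(11) for d in (1, 3, 7)]

samples = positives + negatives
labels = [1] * len(positives) + [0] * len(negatives)

tree = DecisionTreeClassifier(max_depth=2).fit(samples, labels)
print(export_text(tree, feature_names=["x", "y", "y-2x"]))
```

On these samples the tree splits on the derived attribute y - 2x at 0.5, i.e., it proposes the candidate invariant y <= 2*x, which holds initially and is preserved by the loop body, so an SMT-based check would confirm it as inductive.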