Vehicle: Interfacing Neural Network Verifiers with Interactive Theorem
Provers
- URL: http://arxiv.org/abs/2202.05207v1
- Date: Thu, 10 Feb 2022 18:09:23 GMT
- Authors: Matthew L. Daggitt, Wen Kokke, Robert Atkey, Luca Arnaboldi, Ekaterina
Komendantskaya
- Abstract summary: Vehicle is equipped with an expressive domain-specific language for stating neural network specifications.
It overcomes previous issues with maintainability and scalability in similar ITP formalisations.
We demonstrate its utility by using it to connect the neural network verifier Marabou to Agda and then formally verifying that a car steered by a neural network never leaves the road.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Verification of neural networks is currently a hot topic in automated theorem
proving. Progress has been rapid and there are now a wide range of tools
available that can verify properties of networks with hundreds of thousands of
nodes. In theory this opens the door to the verification of larger control
systems that make use of neural network components. However, although work has
managed to incorporate the results of these verifiers to prove larger
properties of individual systems, there is currently no general methodology for
bridging the gap between verifiers and interactive theorem provers (ITPs).
In this paper we present Vehicle, our solution to this problem. Vehicle is
equipped with an expressive domain-specific language for stating neural network
specifications which can be compiled to both verifiers and ITPs. It overcomes
previous issues with maintainability and scalability in similar ITP
formalisations by using a standard ONNX file as the single canonical
representation of the network. We demonstrate its utility by using it to
connect the neural network verifier Marabou to Agda and then formally verifying
that a car steered by a neural network never leaves the road, even in the face
of an unpredictable cross wind and imperfect sensors. The network has over
20,000 nodes, and therefore this proof represents an improvement of three orders
of magnitude over prior proofs about neural-network-enhanced systems in ITPs.
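The kind of input-output property such verifiers certify can be illustrated with a toy interval-bound-propagation check in pure Python. This is a generic sound-but-incomplete technique, not Vehicle's or Marabou's actual algorithm, and the network weights, input box, and safe interval below are all invented for illustration:

```python
def affine_bounds(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through x -> W x + b."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        new_lo.append(bias + sum(w * (lo[j] if w >= 0 else hi[j])
                                 for j, w in enumerate(row)))
        new_hi.append(bias + sum(w * (hi[j] if w >= 0 else lo[j])
                                 for j, w in enumerate(row)))
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps boxes to boxes."""
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

# Hypothetical 2-2-1 ReLU network (weights invented for illustration).
W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.5]

# Perturbation box around a nominal input, e.g. noisy sensor readings.
lo, hi = [-0.1, -0.1], [0.1, 0.1]
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
out_lo, out_hi = affine_bounds(lo, hi, W2, b2)

# If the output box lies inside the safe set, the property holds for
# EVERY input in the box; if not, the check is merely inconclusive.
safe = 0.0 <= out_lo[0] and out_hi[0] <= 1.0
print(out_lo, out_hi, safe)
```

Dedicated verifiers use far tighter relaxations and complete search, but the contract is the same: a certificate that covers the whole input region, not a finite test set.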
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Fully Automatic Neural Network Reduction for Formal Verification [8.017543518311196]
We introduce a fully automatic and sound reduction of neural networks using reachability analysis.
The soundness ensures that the verification of the reduced network entails the verification of the original network.
We show that our approach can reduce the number of neurons to a fraction of the original count while introducing only minor outer-approximation error.
arXiv Detail & Related papers (2023-05-03T07:13:47Z)
- CheckINN: Wide Range Neural Network Verification in Imandra [0.0]
We show how Imandra, a functional programming language and theorem prover, can offer a holistic infrastructure for neural network verification.
We develop a novel library CheckINN that formalises neural networks in Imandra, and covers different important facets of neural network verification.
arXiv Detail & Related papers (2022-07-21T16:06:58Z)
- Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework for neural networks with regularization and proves its consistency.
Two types of activation function are considered: the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU).
arXiv Detail & Related papers (2022-06-22T23:33:39Z)
- Neuro-Symbolic Verification of Deep Neural Networks [20.973078277539276]
We introduce a novel framework for verifying neural networks, named neuro-symbolic verification.
The key idea is to use neural networks as part of the otherwise logical specification.
We show how neuro-symbolic verification can be implemented on top of existing verification infrastructure for neural networks.
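As a loose illustration of that idea (not the paper's actual framework), a network-like predicate can appear inside an otherwise logical specification, here checked over a finite input grid; both functions below are invented stand-ins:

```python
import math

def looks_like_obstacle(x):
    """Stand-in for a learned perception network: a sigmoid scorer."""
    return 1.0 / (1.0 + math.exp(-(4.0 * x - 2.0))) > 0.5

def controller_brakes(x):
    """Stand-in for the system under verification."""
    return x > 0.45

# Neuro-symbolic specification: for all x in the grid,
#     looks_like_obstacle(x)  ->  controller_brakes(x)
grid = [i / 100 for i in range(101)]
holds = all(controller_brakes(x) for x in grid if looks_like_obstacle(x))
print(holds)
```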
arXiv Detail & Related papers (2022-03-02T08:40:01Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- Scalable Verification of Quantized Neural Networks (Technical Report) [14.04927063847749]
We show that verifying bit-exact implementations of quantized neural networks against bit-vector specifications is PSPACE-hard.
We propose three techniques for making SMT-based verification of quantized neural networks more scalable.
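For a toy-sized example, a bit-exact property can even be checked by brute force over all inputs; real tools instead encode the computation as bit-vector SMT constraints. The fixed-point layer below is invented for illustration:

```python
def qaffine(x, w=3, b=-32, shift=2):
    """Fixed-point affine step with saturation to the int8 range."""
    acc = (x * w + b) >> shift   # arithmetic right shift
    return max(-128, min(127, acc))

# Bit-exact properties, checked exhaustively over all 256 int8 inputs:
# the output stays in range, and the step is monotone in its input.
in_range = all(-128 <= qaffine(x) <= 127 for x in range(-128, 128))
monotone = all(qaffine(x) <= qaffine(x + 1) for x in range(-128, 127))
print(in_range, monotone)
```

Exhaustive enumeration stops scaling almost immediately (two int8 inputs already mean 65,536 cases), which is exactly why the paper's bit-vector encodings matter.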
arXiv Detail & Related papers (2020-12-15T10:05:37Z)
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve verified L-inf robust accuracy from 1% to 88% and from 6% to 40% on two benchmark networks, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires less computational resources and can be trained in parallel.
The overall output of the system given by the aggregation of the outputs of individual networks can be tuned by a user to trade-off false positives and false negatives.
arXiv Detail & Related papers (2020-02-11T13:58:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.