CheckINN: Wide Range Neural Network Verification in Imandra
- URL: http://arxiv.org/abs/2207.10562v1
- Date: Thu, 21 Jul 2022 16:06:58 GMT
- Title: CheckINN: Wide Range Neural Network Verification in Imandra
- Authors: Remi Desmartin, Grant Passmore, Ekaterina Komendantskaya, Matthew Daggitt
- Abstract summary: We show how Imandra, a functional programming language and theorem prover, can offer a holistic infrastructure for neural network verification.
We develop a novel library, CheckINN, which formalises neural networks in Imandra and covers several important facets of neural network verification.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are increasingly relied upon as components of complex
safety-critical systems such as autonomous vehicles. There is high demand for
tools and methods that embed neural network verification in a larger
verification cycle. However, neural network verification is difficult due to
the wide range of verification properties of interest, each typically amenable
to verification only in specialised solvers. In this paper, we show how Imandra,
a functional programming language and theorem prover originally designed for the
verification, validation and simulation of financial infrastructure, can offer a
holistic infrastructure for neural network verification. We develop a novel
library, CheckINN, which formalises neural networks in Imandra and covers
several important facets of neural network verification.
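To make the formalisation angle concrete, below is a minimal sketch in plain OCaml of how a feed-forward network can be written as a pure functional program over which verification goals can then be stated. This is illustrative only: the representation and the names (relu, neuron, layer, network) are assumptions of this sketch, not CheckINN's actual API. Imandra's logic is a subset of OCaml, which is why definitions in this style are amenable to theorem proving.

```ocaml
(* Minimal sketch, assuming a hypothetical representation: a network as
   nested lists of (weights, bias) pairs. These names are NOT CheckINN's
   actual API. Pure, recursion-free definitions like these are the kind
   of program a theorem prover such as Imandra can reason about. *)

let relu (x : float) : float = max 0. x

(* One neuron: bias plus the weighted sum of inputs, then an activation. *)
let neuron (weights : float list) (bias : float) (inputs : float list) : float =
  relu (List.fold_left2 (fun acc w x -> acc +. (w *. x)) bias weights inputs)

(* A layer maps the same inputs through each of its neurons. *)
let layer (neurons : (float list * float) list) (inputs : float list) : float list =
  List.map (fun (w, b) -> neuron w b inputs) neurons

(* A network folds the input vector through its layers in order. *)
let network (layers : (float list * float) list list) (inputs : float list) : float list =
  List.fold_left (fun acc l -> layer l acc) inputs layers

(* A tiny 2-2-1 example. In a prover one would now state a goal such as
   "for all inputs, every output is non-negative" (true here because the
   output layer ends in relu) and prove it rather than test it. *)
let tiny = [ [ ([1.0; -1.0], 0.0); ([0.5; 0.5], 0.1) ];  (* hidden layer *)
             [ ([1.0; 1.0], 0.0) ] ]                     (* output layer *)

let () = List.iter (Printf.printf "output: %f\n") (network tiny [0.3; 0.7])
```

In a prover, the printed test at the end would be replaced by a verification goal; formalisations also often trade floating-point arithmetic for exact rationals or reals to keep the reasoning sound.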
Related papers
- Coding schemes in neural networks learning classification tasks (arXiv, 2024-06-24)
  We investigate fully-connected, wide neural networks learning classification tasks.
  We show that the networks acquire strong, data-dependent features.
  Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
- Verified Neural Compressed Sensing (arXiv, 2024-05-07)
  We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
  We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
  We show that the complexity of the network can be adapted to the problem difficulty, solving problems where traditional compressed sensing methods are not known to provably work.
- How neural networks learn to classify chaotic time series (arXiv, 2023-06-04)
  We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
  We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
- Fully Automatic Neural Network Reduction for Formal Verification (arXiv, 2023-05-03)
  We introduce a fully automatic and sound reduction of neural networks using reachability analysis.
  The soundness ensures that verification of the reduced network entails verification of the original network.
  We show that our approach can reduce the number of neurons to a fraction of the original count while introducing only a minor outer-approximation.
- Neuro-Symbolic Verification of Deep Neural Networks (arXiv, 2022-03-02)
  We introduce a novel framework for verifying neural networks, named neuro-symbolic verification.
  The key idea is to use neural networks as part of the otherwise logical specification.
  We show how neuro-symbolic verification can be implemented on top of existing verification infrastructure for neural networks.
- Vehicle: Interfacing Neural Network Verifiers with Interactive Theorem Provers (arXiv, 2022-02-10)
  Vehicle is equipped with an expressive domain-specific language for stating neural network specifications.
  It overcomes previous issues with maintainability and scalability in similar ITP formalisations.
  We demonstrate its utility by using it to connect the neural network verifier Marabou to Agda and then formally verifying that a car steered by a neural network never leaves the road.
- Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry (arXiv, 2021-09-03)
  We show that the learnability of a neural network is directly related to its size.
  We use Betti numbers to measure the topological geometric complexity of the input data and the neural network.
  We perform experiments on the real-world MNIST dataset, and the results verify our analysis and conclusions.
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices (arXiv, 2021-09-02)
  This thesis focuses on the problem of training neural networks that are compact, easy to train, reliable, and robust to adversarial examples.
  We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
- Towards Repairing Neural Networks Correctly (arXiv, 2020-12-03)
  We propose a runtime verification method to ensure the correctness of neural networks.
  Experimental results show that our approach effectively generates neural networks that are guaranteed to satisfy the properties.
- Learning Connectivity of Neural Networks from a Topological Perspective (arXiv, 2020-08-19)
  We propose a topological perspective that represents a network as a complete graph for analysis.
  By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
  This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
- Verifying Recurrent Neural Networks using Invariant Inference (arXiv, 2020-04-06)
  We propose a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks.
  Our approach is based on the inference of invariants, which allow us to reduce the complex problem of verifying recurrent networks to simpler, non-recurrent problems; a minimal sketch of this reduction follows below.
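To illustrate the invariant-based reduction named in the last entry, here is a minimal sketch under assumed toy dynamics (a single tanh cell; none of this is the paper's algorithm or code): an inductive invariant on the hidden state turns a property of an unbounded recurrent run into two step-local, non-recurrent checks.

```ocaml
(* Sketch only: a toy single-cell RNN with a tanh update. The recurrent
   property "the hidden state stays in [-1, 1] over any input sequence"
   reduces, via an inductive invariant, to two non-recurrent checks. *)

let step (h : float) (x : float) : float = tanh ((0.5 *. h) +. x)

(* Candidate invariant over the hidden state. *)
let inv (h : float) : bool = h >= -1. && h <= 1.

(* Base check: the invariant holds in the initial state h0 = 0. *)
let base_ok : bool = inv 0.

(* Step check: one step preserves the invariant, for any input. Here we
   merely sample (h, x) pairs; a verifier would instead discharge the
   universally quantified, non-recurrent goal once and for all. *)
let step_ok : bool =
  List.for_all (fun (h, x) -> inv (step h x))
    [ (0., 100.); (-1., -100.); (1., 0.); (0.5, -3.) ]

let () = Printf.printf "base: %b  step (sampled): %b\n" base_ok step_ok
```

Because both checks mention only a single application of step, neither requires reasoning about arbitrarily long executions; that is the essence of the reduction.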
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the generated information and is not responsible for any consequences of its use.