Towards Repairing Neural Networks Correctly
- URL: http://arxiv.org/abs/2012.01872v2
- Date: Thu, 6 May 2021 02:52:49 GMT
- Title: Towards Repairing Neural Networks Correctly
- Authors: Guoliang Dong, Jun Sun, Jingyi Wang, Xinyu Wang, Ting Dai
- Abstract summary: We propose a runtime verification method to ensure the correctness of neural networks.
Experimental results show that our approach effectively generates neural networks that are guaranteed to satisfy the properties.
- Score: 6.600380575920419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are increasingly applied to support decision making in
safety-critical applications (like autonomous cars, unmanned aerial vehicles
and face recognition based authentication). While many impressive static
verification techniques have been proposed to tackle the correctness problem of
neural networks, it is possible that static verification may never be
sufficiently scalable to handle real-world neural networks. In this work, we
propose a runtime verification method to ensure the correctness of neural
networks. Given a neural network and a desired safety property, we adopt
state-of-the-art static verification techniques to identify strategic
locations at which to introduce additional gates that "correct" neural network
behavior at runtime. Experimental results show that our approach effectively
generates neural networks that are guaranteed to satisfy the properties,
while remaining consistent with the original neural network most of the time.
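As a rough illustration of the idea (not the paper's actual algorithm: the paper uses static verification to pick internal gate locations, while this sketch gates only the output), the following minimal Python sketch wraps a toy network in a runtime correction gate that fires only when a hypothetical safety property is violated. The property checker and correction function are stand-in assumptions.

```python
# Minimal sketch, assuming a hypothetical output-level gate; the property
# checker and correction step below are illustrative stand-ins.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class GatedNetwork:
    """A toy MLP whose output passes through a runtime correction gate."""

    def __init__(self, weights, biases, violates, correct):
        self.weights, self.biases = weights, biases
        self.violates = violates    # property checker (assumed interface)
        self.correct = correct      # correction step (assumed interface)

    def forward(self, x):
        h = x
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            h = relu(W @ h + b)
        y = self.weights[-1] @ h + self.biases[-1]
        # The gate fires only on violation, so the repaired network stays
        # consistent with the original one most of the time.
        return self.correct(y) if self.violates(y) else y

# Toy safety property: the logit of an "unsafe" class 0 must never dominate.
violates = lambda y: y[0] > y[1]
correct = lambda y: np.array([y[1], y[1]])  # clamp the unsafe logit (assumption)

rng = np.random.default_rng(0)
net = GatedNetwork(
    [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))],
    [np.zeros(4), np.zeros(2)],
    violates,
    correct,
)
print(net.forward(np.array([0.5, -0.2, 0.1])))
```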
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty, and that it can solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
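To make the task contract concrete, here is a hedged sketch of the compressed sensing setup: recover a k-sparse vector x from linear measurements y = A x. A plain least-squares linear decoder stands in for the paper's provably correct networks; it only illustrates the input/output relationship, not the method.

```python
# Sketch of the task setup only (not the authors' architecture): recover a
# sparse vector x from linear measurements y = A @ x with a learned decoder.
import numpy as np

rng = np.random.default_rng(1)
n, m, k, N = 50, 25, 3, 2000      # dimension, measurements, sparsity, samples

A = rng.standard_normal((m, n)) / np.sqrt(m)

def sample_sparse(batch):
    X = np.zeros((batch, n))
    for row in X:
        row[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    return X

X = sample_sparse(N)
Y = X @ A.T

# Simplest possible stand-in "network": a linear decoder fit by least squares.
D, *_ = np.linalg.lstsq(Y, X, rcond=None)

X_test = sample_sparse(5)
err = np.linalg.norm(X_test @ A.T @ D - X_test) / np.linalg.norm(X_test)
print(f"relative recovery error: {err:.3f}")
```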
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
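A minimal sketch of one plausible encoding of this idea (the exact node/edge features and the equivariant GNN from the paper are not reproduced): neurons become graph nodes carrying biases, and weights become edge features.

```python
# Assumed encoding, for illustration: an MLP's parameters as a graph with
# neurons as nodes (bias features) and weights as edge features.
import numpy as np

def mlp_to_graph(weights, biases):
    sizes = [weights[0].shape[1]] + [b.shape[0] for b in biases]
    offsets = np.cumsum([0] + sizes)    # node-id range for each layer
    nodes = np.concatenate([np.zeros(sizes[0])] + list(biases))
    edges, feats = [], []
    for layer, W in enumerate(weights):
        src0, dst0 = offsets[layer], offsets[layer + 1]
        for i in range(W.shape[0]):     # destination neuron
            for j in range(W.shape[1]):  # source neuron
                edges.append((src0 + j, dst0 + i))
                feats.append(W[i, j])
    return nodes, np.array(edges), np.array(feats)

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
nodes, edges, feats = mlp_to_graph([W1, W2], [np.zeros(4), np.zeros(2)])
print(len(nodes), "nodes,", len(edges), "edges")   # 9 nodes, 20 edges
```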
- Set-Based Training for Neural Network Verification [8.97708612393722]
Small input perturbations can significantly affect the outputs of a neural network.
In safety-critical environments, the inputs often contain noisy sensor data.
We employ an end-to-end set-based training procedure that trains robust neural networks for formal verification.
arXiv Detail & Related papers (2024-01-26T15:52:41Z)
- Fully Automatic Neural Network Reduction for Formal Verification [8.017543518311196]
We introduce a fully automatic and sound reduction of neural networks using reachability analysis.
The soundness ensures that the verification of the reduced network entails the verification of the original network.
We show that our approach can reduce the number of neurons to a fraction of the original number with only a minor outer-approximation.
arXiv Detail & Related papers (2023-05-03T07:13:47Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
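For intuition, the following is a sketch of plain interval bound propagation through one affine layer and a ReLU; QA-IBP additionally accounts for quantization, which is omitted here.

```python
# Plain IBP sketch: sound output intervals for an affine layer plus ReLU.
import numpy as np

def ibp_affine(lo, hi, W, b):
    # Split W into positive and negative parts to keep the bounds sound.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def ibp_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

rng = np.random.default_rng(0)
W, b = rng.standard_normal((4, 3)), np.zeros(4)
x, eps = np.array([0.2, -0.1, 0.4]), 0.05   # L-infinity ball around x
lo, hi = ibp_affine(x - eps, x + eps, W, b)
lo, hi = ibp_relu(lo, hi)
print(np.all(lo <= hi))   # the propagated bounds stay ordered
```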
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
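As background, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, a common building block of spiking networks; the paper's regression framework itself is not reproduced.

```python
# LIF neuron sketch: leak, integrate input current, spike at threshold.
import numpy as np

def lif(inputs, beta=0.9, threshold=1.0):
    """Simulate one LIF neuron over an input trace; returns the spike train."""
    v, spikes = 0.0, []
    for i in inputs:
        v = beta * v + i          # leaky membrane integration
        fired = v >= threshold
        spikes.append(int(fired))
        if fired:
            v -= threshold        # soft reset after a spike
    return np.array(spikes)

rng = np.random.default_rng(0)
print(lif(rng.uniform(0.0, 0.5, size=20)))
```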
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
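For reference, a sketch of global magnitude pruning, one standard pruning scheme; the paper's certification-aware pruning is more involved.

```python
# Global magnitude pruning sketch: zero the smallest fraction of all weights.
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    flat = np.concatenate([W.ravel() for W in weights])
    cutoff = np.quantile(np.abs(flat), sparsity)
    return [np.where(np.abs(W) < cutoff, 0.0, W) for W in weights]

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
pruned = prune_by_magnitude(Ws, sparsity=0.5)
kept = sum((W != 0).sum() for W in pruned) / sum(W.size for W in Ws)
print(f"fraction of weights kept: {kept:.2f}")
```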
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
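A quick sketch of the compactness argument: an n-by-n Toeplitz matrix is determined by its 2n - 1 diagonal values, so a Toeplitz layer needs 2n - 1 parameters instead of n^2. The layer below is a hypothetical illustration, not the thesis' exact construction.

```python
# Toeplitz weight matrix from 2n - 1 parameters (illustrative layer).
import numpy as np

def toeplitz_layer(params, n):
    """Entry (i, j) depends only on the diagonal index i - j."""
    assert params.size == 2 * n - 1
    return np.array([[params[n - 1 + i - j] for j in range(n)]
                     for i in range(n)])

n = 4
params = np.random.default_rng(0).standard_normal(2 * n - 1)
W = toeplitz_layer(params, n)
print(W.shape, "built from", params.size, "parameters instead of", n * n)
```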
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
- Generate and Verify: Semantically Meaningful Formal Analysis of Neural Network Perception Systems [2.2559617939136505]
Testing remains the primary means of evaluating the accuracy of neural network perception systems.
We employ neural network verification to prove that a model will always produce estimates within some error bound of the ground truth.
arXiv Detail & Related papers (2020-12-16T23:09:53Z)
- An SMT-Based Approach for Verifying Binarized Neural Networks [1.4394939014120451]
We propose an SMT-based technique for verifying Binarized Neural Networks.
One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components.
We implement our technique as an extension to the Marabou framework, and use it to evaluate the approach on popular binarized neural network architectures.
arXiv Detail & Related papers (2020-11-05T16:21:26Z)
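As a hedged illustration (using the z3 SMT solver rather than the paper's Marabou extension), the following encodes a single binarized layer with sign activations and asks whether a given output pattern is reachable from a bounded input region.

```python
# Illustrative SMT encoding of one binarized layer (not the Marabou extension).
# Requires the z3-solver package.
from z3 import Real, If, Solver, sat

x0, x1 = Real("x0"), Real("x1")
w = [[1, -1], [-1, 1]]               # binarized weights in {-1, +1}

def sign(e):                          # sign activation as an SMT term
    return If(e >= 0, 1, -1)

y = [sign(w[i][0] * x0 + w[i][1] * x1) for i in range(2)]

s = Solver()
s.add(x0 >= 0, x0 <= 1, x1 >= 0, x1 <= 1)   # bounded input region
s.add(y[0] == 1, y[1] == 1)                  # query: can both outputs fire?
print("property reachable" if s.check() == sat else "unreachable")
```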
- Verifying Recurrent Neural Networks using Invariant Inference [0.0]
We propose a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks.
Our approach is based on the inference of invariants, which allow us to reduce the complex problem of verifying recurrent networks into simpler, non-recurrent problems.
arXiv Detail & Related papers (2020-04-06T08:08:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.