NeuroCodeBench: a plain C neural network benchmark for software
verification
- URL: http://arxiv.org/abs/2309.03617v1
- Date: Thu, 7 Sep 2023 10:19:33 GMT
- Title: NeuroCodeBench: a plain C neural network benchmark for software
verification
- Authors: Edoardo Manino, Rafael Sá Menezes, Fedor Shmarov, Lucas C. Cordeiro
- Abstract summary: This paper presents NeuroCodeBench - a verification benchmark for neural network code written in plain C.
It contains 32 neural networks with 607 safety properties divided into 6 categories: maths library, activation functions, error-correcting networks, transfer function approximation, probability density estimation and reinforcement learning.
- Score: 4.9975496263385875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safety-critical systems with neural network components require strong
guarantees. While existing neural network verification techniques have shown
great progress towards this goal, they cannot prove the absence of software
faults in the network implementation. This paper presents NeuroCodeBench - a
verification benchmark for neural network code written in plain C. It contains
32 neural networks with 607 safety properties divided into 6 categories: maths
library, activation functions, error-correcting networks, transfer function
approximation, probability density estimation and reinforcement learning. Our
preliminary evaluation shows that state-of-the-art software verifiers struggle
to provide correct verdicts, due to their incomplete support of the standard C
mathematical library and the complexity of larger neural networks.
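To make the benchmark's setting concrete, the sketch below shows how a safety property over plain-C neural network code can be phrased for a software verifier. It is an illustration only, not a file from NeuroCodeBench: the toy weights and the SV-COMP-style `__VERIFIER_nondet_float` input primitive are assumptions.

```c
/* Hypothetical NeuroCodeBench-style verification harness (illustrative only). */
#include <assert.h>
#include <math.h>

extern float __VERIFIER_nondet_float(void);  /* symbolic input (assumed SV-COMP convention) */

/* Plain-C ReLU activation. */
static float relu(float x) { return x > 0.0f ? x : 0.0f; }

/* A single toy neuron with fixed, made-up weights. */
static float neuron(float x) {
    const float w = 0.5f, b = -1.0f;
    return relu(w * x + b);
}

int main(void) {
    float x = __VERIFIER_nondet_float();
    /* Constrain the input domain; the comparison also excludes NaN. */
    if (!(x >= -4.0f && x <= 4.0f)) return 0;
    float y = neuron(x);
    /* Safety property: the output is finite and within its reachable range [0, 1]. */
    assert(isfinite(y) && y >= 0.0f && y <= 1.0f);
    return 0;
}
```

A verifier that soundly models IEEE 754 arithmetic and the C maths library should be able to prove this assertion; per the abstract above, incomplete support for exactly such library functions is where current tools struggle.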
Related papers
- Partial Trace-Class Bayesian Neural Networks [0.0]
We propose three different innovative architectures of partial trace-class Bayesian neural networks (PaTraC BNNs).
These PaTraC BNNs have computational and statistical advantages over standard Bayesian neural networks in terms of speed and memory requirements.
In a numerical simulation study, we verify the claimed benefits, and further illustrate the performance of our proposed methodology on a real-world dataset.
arXiv Detail & Related papers (2025-11-03T14:38:35Z)
- Floating-Point Neural Network Verification at the Software Level [2.228696309685013]
The behaviour of neural network components must be proven correct before deployment in safety-critical systems.
Existing neural network verification techniques cannot certify the absence of faults at the software level.
We show how to specify and verify that neural networks are safe, by explicitly reasoning about their floating-point implementation.
arXiv Detail & Related papers (2025-10-27T14:43:19Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Set-Based Training for Neural Network Verification [8.97708612393722]
Small input perturbations can significantly affect the outputs of a neural network.
In safety-critical environments, the inputs often contain noisy sensor data.
We employ an end-to-end set-based training procedure that trains robust neural networks for formal verification.
arXiv Detail & Related papers (2024-01-26T15:52:41Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs; a minimal sketch of the underlying interval propagation idea appears after this list.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- CheckINN: Wide Range Neural Network Verification in Imandra [0.0]
We show how Imandra, a functional programming language and a theorem prover, can offer a holistic infrastructure for neural network verification.
We develop a novel library CheckINN that formalises neural networks in Imandra, and covers different important facets of neural network verification.
arXiv Detail & Related papers (2022-07-21T16:06:58Z)
- Neural Network Quantization for Efficient Inference: A Survey [0.0]
Neural network quantization has recently arisen to meet the demand for reducing the size and complexity of neural networks.
This paper surveys the many neural network quantization techniques that have been developed in the last decade.
arXiv Detail & Related papers (2021-12-08T22:49:39Z)
- The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks [69.33657875725747]
We prove that any training procedure based on training neural networks for classification problems with a fixed architecture will yield neural networks that are either inaccurate or, if accurate, unstable.
The key is that stable and accurate neural networks must have variable dimensions depending on the input; in particular, variable dimensionality is a necessary condition for stability.
Our result points towards the paradox that accurate and stable neural networks exist, yet modern algorithms do not compute them.
arXiv Detail & Related papers (2021-09-13T16:19:25Z)
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks could meet the constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Scalable Verification of Quantized Neural Networks (Technical Report) [14.04927063847749]
We show that verifying the bit-exact implementation of quantized neural networks against bit-vector specifications is PSPACE-hard.
We propose three techniques for making SMT-based verification of quantized neural networks more scalable.
arXiv Detail & Related papers (2020-12-15T10:05:37Z)
- Towards Repairing Neural Networks Correctly [6.600380575920419]
We propose a runtime verification method to ensure the correctness of neural networks.
Experimental results show that our approach effectively generates neural networks which are guaranteed to satisfy the properties.
arXiv Detail & Related papers (2020-12-03T12:31:07Z)
- FAT: Training Neural Networks for Reliable Inference Under Hardware Faults [3.191587417198382]
We present a novel methodology called fault-aware training (FAT), which includes error modeling during neural network (NN) training, to make QNNs resilient to specific fault models on the device.
FAT has been validated for numerous classification tasks including CIFAR10, GTSRB, SVHN and ImageNet.
arXiv Detail & Related papers (2020-11-11T16:09:39Z)
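As referenced in the QA-IBP entry above, the following is a minimal plain-C sketch of interval bound propagation, the general technique that work builds on. It is an illustrative reconstruction with assumed toy weights, not code from that paper.

```c
/* Illustrative interval bound propagation (IBP) through one affine layer
 * followed by ReLU. A sketch of the general technique, not the paper's code. */
#include <stdio.h>

typedef struct { float lo, hi; } interval;

/* Affine map y = w*x + b on an interval: the sign of w decides
 * which input endpoint maps to which output bound. */
static interval affine(interval x, float w, float b) {
    interval y;
    if (w >= 0.0f) { y.lo = w * x.lo + b; y.hi = w * x.hi + b; }
    else           { y.lo = w * x.hi + b; y.hi = w * x.lo + b; }
    return y;
}

/* ReLU is monotone, so it maps interval endpoints to endpoints. */
static interval relu_iv(interval x) {
    interval y = { x.lo > 0.0f ? x.lo : 0.0f,
                   x.hi > 0.0f ? x.hi : 0.0f };
    return y;
}

int main(void) {
    interval x = { -1.0f, 1.0f };            /* input perturbation set */
    interval y = relu_iv(affine(x, 0.5f, -0.25f));
    /* A robustness certificate holds if the output bounds stay in range. */
    printf("y in [%f, %f]\n", y.lo, y.hi);   /* prints [0, 0.25] */
    return 0;
}
```

Propagating intervals layer by layer in this fashion yields sound, if sometimes loose, bounds on the reachable outputs; QA-IBP adapts the computation to quantized arithmetic so that the robustness certificate survives quantization.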
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.