Floating-Point Neural Network Verification at the Software Level
- URL: http://arxiv.org/abs/2510.23389v1
- Date: Mon, 27 Oct 2025 14:43:19 GMT
- Title: Floating-Point Neural Network Verification at the Software Level
- Authors: Edoardo Manino, Bruno Farias, Rafael Sá Menezes, Fedor Shmarov, Lucas C. Cordeiro
- Abstract summary: The behaviour of neural network components must be proven correct before deployment in safety-critical systems. Existing neural network verification techniques cannot certify the absence of faults at the software level. We show how to specify and verify that neural networks are safe, by explicitly reasoning about their floating-point implementation.
- Score: 2.228696309685013
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The behaviour of neural network components must be proven correct before deployment in safety-critical systems. Unfortunately, existing neural network verification techniques cannot certify the absence of faults at the software level. In this paper, we show how to specify and verify that neural networks are safe by explicitly reasoning about their floating-point implementation. In doing so, we construct NeuroCodeBench 2.0, a benchmark comprising 912 neural network verification examples that cover activation functions, common layers, and full neural networks of up to 170K parameters. Our verification suite is written in plain C and is compatible with the format of the International Competition on Software Verification (SV-COMP). Thanks to it, we conduct the first rigorous evaluation of eight state-of-the-art software verifiers on neural network code. The results show that existing automated verification tools can correctly solve an average of 11% of our benchmark, while producing around 3% incorrect verdicts. At the same time, a historical analysis reveals that the release of our benchmark has already had a significantly positive impact on the rate of incorrect verdicts.
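Since the suite follows SV-COMP conventions, a safety property over a network's floating-point implementation can be phrased as reachability of a failing assertion in plain C. Below is a minimal sketch of such a harness: the 1-2-1 ReLU network, its weights, and the output bounds are invented for illustration, and only the `__VERIFIER_nondet_float`/`__VERIFIER_assume` interface is standard SV-COMP practice; this is not code from NeuroCodeBench 2.0.

```c
#include <assert.h>

/* SV-COMP verifier interface: nondeterministic input plus assumption. */
extern float __VERIFIER_nondet_float(void);
extern void __VERIFIER_assume(int cond);

static float relu(float x) { return x > 0.0f ? x : 0.0f; }

int main(void) {
    /* Symbolic input, restricted to the operating range [-1, 1]. */
    float x = __VERIFIER_nondet_float();
    __VERIFIER_assume(x >= -1.0f && x <= 1.0f);

    /* Hypothetical 1-2-1 network, computed in actual float arithmetic. */
    float h0 = relu(0.5f * x + 0.1f);
    float h1 = relu(-0.7f * x + 0.2f);
    float y = h0 + h1 - 0.05f;

    /* Safety property: the float-computed output stays in bounds.
       A verifier must prove this for every float x in range, including
       rounding behaviour that a real-arithmetic model would miss. */
    assert(y >= -2.0f && y <= 2.0f);
    return 0;
}
```

A software verifier then either proves that no execution can violate the assertion, or returns a concrete floating-point counterexample.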
Related papers
- Verifying rich robustness properties for neural networks [4.642283627168068]
Robustness is a problem in AI alignment and safety, with models such as neural networks being increasingly used in safety-critical systems. We propose a framework to specify and verify variants of robustness in neural network verification. We develop a novel and powerful unified technique to verify all such variants in a homogeneous way.
arXiv Detail & Related papers (2025-11-10T16:43:02Z)
- Exact Certification of (Graph) Neural Networks Against Label Poisoning [50.87615167799367]
We introduce an exact certification method for label flipping in Graph Neural Networks (GNNs). We apply our method to certify a broad range of GNN architectures in node classification tasks. Our work presents the first exact certificate against a poisoning attack ever derived for neural networks.
arXiv Detail & Related papers (2024-11-30T17:05:12Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- NeuroCodeBench: a plain C neural network benchmark for software verification [4.9975496263385875]
This paper presents NeuroCodeBench - a verification benchmark for neural network code written in plain C.
It contains 32 neural networks with 607 safety properties divided into 6 categories: maths library, activation functions, error-correcting networks, transfer function approximation, probability density estimation and reinforcement learning.
arXiv Detail & Related papers (2023-09-07T10:19:33Z)
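To make the "activation functions" and "maths library" categories concrete, here is a sketch in the same SV-COMP style of a range property for a plain-C sigmoid; the function, input range, and bounds are invented for illustration and are not taken from the benchmark.

```c
#include <assert.h>
#include <math.h>

extern float __VERIFIER_nondet_float(void);
extern void __VERIFIER_assume(int cond);

/* Plain-C logistic activation built on the C maths library. */
static float logistic(float x) { return 1.0f / (1.0f + expf(-x)); }

int main(void) {
    float x = __VERIFIER_nondet_float();
    __VERIFIER_assume(x >= -10.0f && x <= 10.0f);

    /* Range property: the float-computed sigmoid stays in [0, 1],
       which must hold despite rounding in expf and the division. */
    float y = logistic(x);
    assert(y >= 0.0f && y <= 1.0f);
    return 0;
}
```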
- Fully Automatic Neural Network Reduction for Formal Verification [8.017543518311196]
We introduce a fully automatic and sound reduction of neural networks using reachability analysis. The soundness ensures that the verification of the reduced network entails the verification of the original network. Our approach reduces large neural networks to a fraction of the original number of neurons and thus shortens the verification time to a similar degree.
arXiv Detail & Related papers (2023-05-03T07:13:47Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
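Plain interval bound propagation, which QA-IBP extends to the quantized setting, is compact enough to sketch. In the example below the layer, its weights, and the input box are invented for illustration; this is ordinary IBP, not the paper's quantization-aware variant.

```c
#include <stdio.h>

/* Interval bound propagation through one affine layer plus ReLU:
   given x[i] in [lo[i], hi[i]], bound every output of ReLU(W x + b). */
void ibp_affine_relu(int n_in, int n_out,
                     const float W[n_out][n_in], const float b[n_out],
                     const float lo[n_in], const float hi[n_in],
                     float out_lo[n_out], float out_hi[n_out]) {
    for (int j = 0; j < n_out; j++) {
        float l = b[j], u = b[j];
        for (int i = 0; i < n_in; i++) {
            /* A positive weight maps lo to lo and hi to hi;
               a negative weight swaps the endpoints. */
            if (W[j][i] >= 0.0f) { l += W[j][i] * lo[i]; u += W[j][i] * hi[i]; }
            else                 { l += W[j][i] * hi[i]; u += W[j][i] * lo[i]; }
        }
        /* ReLU is monotone, so apply it to both interval endpoints. */
        out_lo[j] = l > 0.0f ? l : 0.0f;
        out_hi[j] = u > 0.0f ? u : 0.0f;
    }
}

int main(void) {
    const float W[2][2] = {{1.0f, -1.0f}, {0.5f, 0.5f}};
    const float b[2] = {0.0f, -0.25f};
    const float lo[2] = {-1.0f, -1.0f}, hi[2] = {1.0f, 1.0f};
    float yl[2], yh[2];
    ibp_affine_relu(2, 2, W, b, lo, hi, yl, yh);
    for (int j = 0; j < 2; j++)
        printf("y[%d] in [%g, %g]\n", j, yl[j], yh[j]);
    return 0;
}
```

IBP-style training penalizes the worst case implied by such bounds; per the summary above, QA-IBP additionally accounts for quantization during the propagation.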
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Verifying Low-dimensional Input Neural Networks via Input Quantization [12.42030531015912]
This paper revisits the original problem of verifying ACAS Xu networks.
We propose to prepend an input quantization layer to the network.
Our technique can deliver exact verification results immune to floating-point error.
arXiv Detail & Related papers (2021-08-18T03:42:05Z)
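The underlying trick is easy to sketch: snap each input to a finite grid, after which verification reduces to exhaustively evaluating the concrete floating-point network on finitely many inputs. The grid parameters below are invented for illustration and are not the paper's actual layer.

```c
#include <math.h>
#include <stdio.h>

/* Snap an input to the nearest of `levels` evenly spaced grid points
   in [lo, hi]; the network then only ever sees finitely many inputs. */
float quantize(float x, float lo, float hi, int levels) {
    if (x < lo) x = lo;
    if (x > hi) x = hi;
    float step = (hi - lo) / (float)(levels - 1);
    int k = (int)roundf((x - lo) / step);
    return lo + (float)k * step;
}

int main(void) {
    /* With 256 levels per dimension, a low-dimensional input space can
       be enumerated exactly, with no floating-point abstraction at all. */
    printf("%f\n", quantize(0.3337f, 0.0f, 1.0f, 256));
    return 0;
}
```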
- Fast Falsification of Neural Networks using Property Directed Testing [0.1529342790344802]
We propose a falsification algorithm for neural networks that directs the search for a counterexample.
Our algorithm uses a derivative-free sampling-based optimization method.
We show that our falsification procedure detects all the unsafe instances that other verification tools also report as unsafe.
arXiv Detail & Related papers (2021-04-26T09:16:27Z)
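A derivative-free, sampling-based falsifier can be sketched in a few lines: sample the input box and keep the sample with the smallest property margin, reporting a counterexample if the margin goes negative. The stand-in model `f`, the threshold, and the sampling budget below are hypothetical, not the paper's algorithm.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a network and a safety property:
   the property "f(x) <= 1.5" holds iff margin(x) >= 0. */
static float f(float x) { return x * x; }
static float margin(float x) { return 1.5f - f(x); }

int main(void) {
    srand(42);
    float best_x = 0.0f, best_m = margin(0.0f);

    /* Derivative-free search: uniform samples over the box [-2, 2],
       directed only by the property margin. */
    for (int i = 0; i < 10000; i++) {
        float x = -2.0f + 4.0f * (float)rand() / (float)RAND_MAX;
        float m = margin(x);
        if (m < best_m) { best_m = m; best_x = x; }
    }

    if (best_m < 0.0f)
        printf("counterexample: x = %f, f(x) = %f\n", best_x, f(best_x));
    else
        printf("no violation found (this is not a proof of safety)\n");
    return 0;
}
```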
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long-term bonds as test assets, we investigate neural networks for this estimation task.
We find that our networks perform significantly worse when fed with substantially less data.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
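For context, the Monte-Carlo step reduces to simulating returns and reading off a loss quantile. The sketch below assumes a Gaussian one-day return model with made-up parameters; the summary above suggests the paper benchmarks neural estimators against such parametric approaches.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define PI 3.14159265358979323846

/* Standard-normal sampler via the Box-Muller transform. */
static double gauss(void) {
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

static int cmp(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void) {
    enum { N = 100000 };
    static double r[N];
    srand(7);
    /* Simulate one-day returns with assumed mean 0.0005, stdev 0.01. */
    for (int i = 0; i < N; i++) r[i] = 0.0005 + 0.01 * gauss();
    qsort(r, N, sizeof r[0], cmp);
    /* 99% VaR is the loss at the 1st percentile of simulated returns. */
    double var99 = -r[(int)(0.01 * N)];
    printf("99%% one-day VaR: %.4f (fraction of portfolio value)\n", var99);
    return 0;
}
```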
- Exploiting Verified Neural Networks via Floating Point Numerical Error [15.639601066641099]
Verifiers aspire to answer whether a neural network guarantees certain properties with respect to all inputs in a space.
We show that neglecting floating-point error leads to unsound verification that can be systematically exploited in practice.
arXiv Detail & Related papers (2020-03-06T03:58:26Z)
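The discrepancy such attacks exploit is easy to reproduce: floating-point addition is not associative, so a verifier reasoning in real arithmetic and an implementation evaluating in floats can disagree. A self-contained example, not taken from the paper:

```c
#include <stdio.h>

int main(void) {
    float a = 1e8f, b = -1e8f, c = 1e-3f;

    /* In real arithmetic both groupings equal 0.001 exactly. */
    float left  = (a + b) + c;  /* 0 + 0.001       = 0.001 */
    float right = a + (b + c);  /* c is absorbed:  = 0.0   */

    printf("left = %.6f, right = %.6f, equal = %d\n",
           left, right, left == right);
    return 0;
}
```

Adversarial inputs can sit exactly in such gaps between the verifier's model and the compiled code, which is the software-level gap the main paper above targets.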
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.