A Formally Verified Robustness Certifier for Neural Networks (Extended Version)
- URL: http://arxiv.org/abs/2505.06958v1
- Date: Sun, 11 May 2025 12:05:14 GMT
- Title: A Formally Verified Robustness Certifier for Neural Networks (Extended Version)
- Authors: James Tobler, Hira Taqdees Syeda, Toby Murray
- Abstract summary: Neural networks are susceptible to minor perturbations in input that cause them to misclassify. Globally-robust neural networks employ a function to certify that the classification of an input cannot be altered by such a perturbation. We describe the program, its specifications, and the important design decisions taken for its implementation and verification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are often susceptible to minor perturbations in input that cause them to misclassify. A recent solution to this problem is the use of globally-robust neural networks, which employ a function to certify that the classification of an input cannot be altered by such a perturbation. Outputs that pass this test are called certified robust. However, to the authors' knowledge, these certification functions have not yet been verified at the implementation level. We demonstrate how previous unverified implementations are exploitably unsound in certain circumstances. Moreover, they often rely on approximation-based algorithms, such as power iteration, that (perhaps surprisingly) do not guarantee soundness. To provide assurance that a given output is robust, we implemented and formally verified a certification function for globally-robust neural networks in Dafny. We describe the program, its specifications, and the important design decisions taken for its implementation and verification, as well as our experience applying it in practice.
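As background for how such certifiers decide robustness: globally-robust networks compare the margin between the top two logits against how far a perturbation of size eps can move them, using precomputed Lipschitz bounds. The Python sketch below is a hypothetical illustration of that check, not the verified Dafny implementation described in the paper; the per-pair bounds `lip_margin` are assumed to have been soundly precomputed.

```python
import numpy as np

def certified_robust(logits, lip_margin, eps):
    """Check whether a classification is certified robust at radius eps.

    logits     -- network outputs for one input, shape (n_classes,)
    lip_margin -- lip_margin[i][j]: an upper bound on the Lipschitz
                  constant of the margin logits[i] - logits[j]
                  (assumed soundly precomputed, e.g. from layer norms)
    eps        -- perturbation radius to certify against
    """
    pred = int(np.argmax(logits))
    for j in range(len(logits)):
        if j == pred:
            continue
        # A perturbation of size eps can shrink the margin by at most
        # eps * lip_margin[pred][j]; if the margin survives, class j
        # can never overtake the predicted class.
        if logits[pred] - logits[j] <= eps * lip_margin[pred][j]:
            return False  # margin too small: decline to certify
    return True

# Example: a confident prediction with modest Lipschitz bounds certifies.
logits = np.array([4.0, 1.0, 0.5])
lip = np.full((3, 3), 2.0)
print(certified_robust(logits, lip, eps=0.5))  # True
```

The paper's point is that even this simple check is delicate to implement soundly: the Lipschitz bounds themselves must be computed without unsound approximations such as unverified power iteration.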
Related papers
- Floating-Point Neural Network Verification at the Software Level [2.228696309685013]
The behaviour of neural network components must be proven correct before deployment in safety-critical systems. Existing neural network verification techniques cannot certify the absence of faults at the software level. We show how to specify and verify that neural networks are safe, by explicitly reasoning about their floating-point implementation.
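A minimal Python illustration of why software-level reasoning matters: floating-point addition is not associative, so a verifier that models the network over the reals can disagree with the deployed implementation.

```python
# Floating-point addition is not associative: a verifier that models the
# network over the reals can disagree with the deployed implementation.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- the 1.0 is absorbed before a is added
```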
arXiv Detail & Related papers (2025-10-27T14:43:19Z)
- No Soundness in the Real World: On the Challenges of the Verification of Deployed Neural Networks [1.3108652488669736]
We argue that theoretical soundness does not imply practical soundness. We create adversarial networks that detect and exploit features of the deployment environment. We demonstrate that all the tested verifiers are vulnerable to our new deployment attacks.
arXiv Detail & Related papers (2025-06-01T15:47:37Z)
- Verifying Global Neural Network Specifications using Hyperproperties [0.0]
We study global specifications that provide guarantees for all potential inputs.
Our formalism enables verifying global specifications using existing neural network verification approaches.
Recent success in verifying specific global specifications shows that attaining strong guarantees for all potential data points is feasible.
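To make the hyperproperty view concrete: a global specification such as Lipschitz continuity relates *pairs* of executions, and a standard encoding is self-composition, i.e. analyzing two copies of the network on paired inputs. The sketch below is an illustrative falsification-style check under that encoding, with a stand-in `net`; it is not the paper's verification approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(x):
    # Stand-in network: a fixed affine map (replace with a real model).
    W = np.array([[0.3, -0.2], [0.1, 0.4]])
    return W @ x

def violates_lipschitz(net, L, trials=10_000, dim=2):
    """Search the self-composed system (two copies of `net`) for a pair
    of inputs witnessing a violation of the global bound
    ||net(x1) - net(x2)|| <= L * ||x1 - x2||."""
    for _ in range(trials):
        x1, x2 = rng.normal(size=dim), rng.normal(size=dim)
        lhs = np.linalg.norm(net(x1) - net(x2))
        rhs = L * np.linalg.norm(x1 - x2)
        if lhs > rhs:
            return (x1, x2)  # counterexample to the hyperproperty
    return None  # no violation found (sampling cannot prove the property)

print(violates_lipschitz(net, L=1.0))  # None: no counterexample found
```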
arXiv Detail & Related papers (2023-06-21T18:08:55Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
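To make the counting problem concrete, the sketch below counts violations exactly by enumerating a tiny discretized input grid; the paper's approach returns exact counts without such enumeration, and all names here are illustrative.

```python
import itertools

def count_violations(net, is_safe, grid_values, dim):
    """Exact #DNN-Verification by brute force over a finite input grid:
    count inputs whose output violates the safety property. Feasible
    only for tiny grids; real solvers count without enumerating."""
    violations = 0
    for x in itertools.product(grid_values, repeat=dim):
        if not is_safe(net(x)):
            violations += 1
    return violations

# Toy example: a 'network' and a safety property on its output.
net = lambda x: sum(x)        # stand-in for a DNN
is_safe = lambda y: y <= 1.5  # safety property on the output
grid = [0.0, 0.5, 1.0]        # discretized input values per dimension
print(count_violations(net, is_safe, grid, dim=2))  # 1: only (1.0, 1.0)
```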
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
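For context, plain (non-quantized) interval bound propagation is only a few lines: split each weight matrix by sign, push the input box through the affine map, then through the monotone activation. A minimal NumPy sketch, ignoring the quantization-aware refinements that QA-IBP adds:

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate an interval [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b  # smallest achievable output
    new_hi = W_pos @ hi + W_neg @ lo + b  # largest achievable output
    return new_lo, new_hi

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy layer: sound output bounds for all inputs within +/-0.1 of x0.
W, b = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
x0, eps = np.array([0.2, 0.4]), 0.1
lo, hi = ibp_affine(x0 - eps, x0 + eps, W, b)
lo, hi = ibp_relu(lo, hi)
print(lo, hi)  # certified output bounds for the whole input box
```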
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Sound and Complete Verification of Polynomial Networks [55.9260539566555]
Polynomial Networks (PNs) have recently demonstrated promising performance on face and image recognition.
Existing verification algorithms on ReLU neural networks (NNs) based on branch and bound (BaB) techniques cannot be trivially applied to PN verification.
We devise a new bounding method, equipped with BaB for global convergence guarantees, called VPN.
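The branch-and-bound loop that such bounding methods plug into has a standard shape: bound the property on a domain; if the bound is inconclusive, split the domain and recurse. A generic sketch with a placeholder interval bound (not VPN's bounding method):

```python
from collections import deque

def branch_and_bound(lower_bound, split, root, depth_limit=20):
    """Verify that a property is >= 0 on `root` by recursively
    tightening a sound lower bound on subdomains.
    lower_bound(dom) -- sound lower bound of the property over dom
    split(dom)       -- partition dom into smaller subdomains
    """
    queue = deque([(root, 0)])
    while queue:
        dom, depth = queue.popleft()
        if lower_bound(dom) >= 0:
            continue          # property proven on this piece
        if depth >= depth_limit:
            return "unknown"  # give up rather than claim either way
        queue.extend((d, depth + 1) for d in split(dom))
    return "verified"

# Toy run: verify x**2 - x + 0.3 >= 0 on [0, 1] (true: minimum is 0.05).
# Interval lower bound, sound for nonnegative domains [l, h]:
# x**2 >= l**2 and -x >= -h, so the sum is bounded below by l*l - h + 0.3.
lb = lambda dom: dom[0] * dom[0] - dom[1] + 0.3
split = lambda dom: [(dom[0], sum(dom) / 2), (sum(dom) / 2, dom[1])]
print(branch_and_bound(lb, split, (0.0, 1.0)))  # verified
```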
arXiv Detail & Related papers (2022-09-15T11:50:43Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis [74.85187027051879]
We show that it is possible to approximate the optimal certificate with arbitrary precision, by probing the decision boundary with several noise distributions.
This result fosters further research on classifier-specific certification and demonstrates that randomized smoothing is still worth investigating.
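For reference, the baseline single-Gaussian randomized-smoothing certificate (Cohen et al.) that this line of work generalizes can be computed with the standard library alone; `p_a` and `p_b` are assumed to be sound bounds on the top two class probabilities under noise.

```python
from statistics import NormalDist

def certified_radius(p_a, p_b, sigma):
    """Standard Gaussian randomized-smoothing certificate: if the top
    class has probability at least p_a and the runner-up at most p_b
    under noise N(0, sigma^2 I), the smoothed classifier's prediction
    cannot change within this l2 radius."""
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))

# Example: a confidently-classified input under sigma = 0.5 noise.
print(certified_radius(p_a=0.9, p_b=0.05, sigma=0.5))  # ~0.73
```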
arXiv Detail & Related papers (2022-06-03T17:48:54Z)
- Fast Falsification of Neural Networks using Property Directed Testing [0.1529342790344802]
We propose a falsification algorithm for neural networks that directs the search for a counterexample.
Our algorithm uses a derivative-free sampling-based optimization method.
We show that our falsification procedure detects all the unsafe instances that other verification tools also report as unsafe.
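The core of a sampling-based falsifier fits in a few lines: random restarts plus greedy local perturbation, searching for any input that violates the property. The sketch below is a generic illustration, not the paper's property-directed algorithm; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def falsify(net, violates, lo, hi, restarts=50, steps=200, step=0.05):
    """Derivative-free search for a counterexample in the box [lo, hi].
    Accepts random perturbations that push the output toward violation;
    returns a violating input or None."""
    for _ in range(restarts):
        x = rng.uniform(lo, hi)
        for _ in range(steps):
            if violates(net(x)):
                return x  # counterexample found
            x_new = np.clip(x + rng.normal(scale=step, size=x.shape), lo, hi)
            # Greedy acceptance: keep moves that increase the output.
            if net(x_new) > net(x):
                x = x_new
    return None

# Toy property: output must stay below 1.9 on [-2, 2]^2 (it doesn't).
net = lambda x: float(x[0] ** 2 - x[1])  # stand-in for a DNN
violates = lambda y: y > 1.9
print(falsify(net, violates, np.full(2, -2.0), np.full(2, 2.0)))
```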
arXiv Detail & Related papers (2021-04-26T09:16:27Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
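The test statistic involved is standard: for calibrated Gaussian uncertainties, the squared Mahalanobis distance of the prediction error is chi-squared distributed with d degrees of freedom, which is what such a statistical test checks. A small NumPy sketch of the statistic:

```python
import numpy as np

def mahalanobis_sq(error, cov):
    """Squared Mahalanobis distance of an error vector under the
    predicted covariance; for calibrated Gaussian uncertainties this
    is chi-squared distributed with len(error) degrees of freedom."""
    return float(error @ np.linalg.solve(cov, error))

# Example: a 2D prediction error against its predicted covariance.
err = np.array([0.3, -0.1])
cov = np.array([[0.09, 0.0], [0.0, 0.04]])
print(mahalanobis_sq(err, cov))  # 1.25 -> unremarkable for chi2(2)
```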
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% on one benchmark network and from 6% to 40% on another.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty [14.191310794366075]
We develop a data-driven optimization-based method capable of simultaneously certifying the safety of network outputs and localizing them.
We experimentally demonstrate the efficacy and tractability of the method on a deep ReLU network.
arXiv Detail & Related papers (2020-10-02T19:13:35Z)
- Exploiting Verified Neural Networks via Floating Point Numerical Error [15.639601066641099]
Verifiers aspire to answer whether a neural network guarantees certain properties with respect to all inputs in a space.
We show that neglecting floating-point error leads to unsound verification that can be systematically exploited in practice.
arXiv Detail & Related papers (2020-03-06T03:58:26Z)
- Scalable Quantitative Verification For Deep Neural Networks [44.570783946111334]
We propose a test-driven verification framework for deep neural networks (DNNs).
Our technique performs tests until the soundness of a formal probabilistic property can be proven.
Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees.
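"Enough tests" can be made precise with standard concentration bounds; for instance, a two-sided Hoeffding bound gives the sample count needed to estimate a property's probability to within eps at confidence 1 - delta. A sketch (the paper's own statistical analysis may differ in detail):

```python
import math

def hoeffding_samples(eps, delta):
    """Number of i.i.d. samples so that the empirical frequency of a
    0/1 property is within eps of its true probability, except with
    probability at most delta (two-sided Hoeffding bound)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# Example: estimate a property's probability to +/-1% at 99.9% confidence.
print(hoeffding_samples(eps=0.01, delta=0.001))  # 38005
```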
arXiv Detail & Related papers (2020-02-17T09:53:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.