No Soundness in the Real World: On the Challenges of the Verification of Deployed Neural Networks
- URL: http://arxiv.org/abs/2506.01054v1
- Date: Sun, 01 Jun 2025 15:47:37 GMT
- Title: No Soundness in the Real World: On the Challenges of the Verification of Deployed Neural Networks
- Authors: Attila Szász, Balázs Bánhelyi, Márk Jelasity
- Abstract summary: We argue that theoretical soundness does not imply practical soundness. We create adversarial networks that detect and exploit features of the deployment environment. We demonstrate that all the tested verifiers are vulnerable to our new deployment attacks.
- Score: 1.3108652488669736
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The ultimate goal of verification is to guarantee the safety of deployed neural networks. Here, we claim that all the state-of-the-art verifiers we are aware of fail to reach this goal. Our key insight is that theoretical soundness (bounding the full-precision output while computing with floating point) does not imply practical soundness (bounding the floating point output in a potentially stochastic environment). We prove this observation for the approaches that are currently used to achieve provable theoretical soundness, such as interval analysis and its variants. We also argue that achieving practical soundness is significantly harder computationally. We support our claims empirically as well by evaluating several well-known verification methods. To mislead the verifiers, we create adversarial networks that detect and exploit features of the deployment environment, such as the order and precision of floating point operations. We demonstrate that all the tested verifiers are vulnerable to our new deployment-specific attacks, which proves that they are not practically sound.
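As a minimal, self-contained illustration of the gap the abstract points to (this is not code from the paper): the same affine computation evaluated with a different summation order or precision produces different floating-point results, so a bound proved for one evaluation scheme need not hold for the scheme the deployment environment actually uses.

```python
import numpy as np

# Sketch: the same dot product evaluated with different summation orders and
# precisions yields different floating-point values. A bound established for
# one evaluation scheme need not hold for another.

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)
x = rng.standard_normal(10_000).astype(np.float32)

# Left-to-right accumulation in float32 (one possible deployment behaviour).
acc = np.float32(0.0)
for wi, xi in zip(w, x):
    acc += wi * xi

# Library-style vectorised evaluation (a different accumulation order).
blas = np.dot(w, x)

# Reference computed in float64 ("full precision" in the abstract's terminology).
ref = np.dot(w.astype(np.float64), x.astype(np.float64))

print(f"sequential float32: {acc:.8f}")
print(f"np.dot float32:     {blas:.8f}")
print(f"float64 reference:  {ref:.8f}")
# The three values typically differ in the low-order bits; an adversarially
# constructed network can amplify such differences across a decision boundary.
```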
Related papers
- A Formally Verified Robustness Certifier for Neural Networks (Extended Version) [0.0]
Neural networks are susceptible to minor perturbations in input that cause them to misclassify. Globally-robust neural networks employ a function to certify that the classification of an input cannot be altered by such a perturbation. We describe the program, its specifications, and the important design decisions taken for its implementation and verification.
arXiv Detail & Related papers (2025-05-11T12:05:14Z) - Certifying Global Robustness for Deep Neural Networks [3.8556106468003613]
A globally robust deep neural network resists perturbations on all meaningful inputs.
Current robustness certification methods emphasize local robustness and struggle to scale and generalize.
This paper presents a systematic and efficient method to evaluate and verify global robustness for deep neural networks.
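For reference, one common formalization of the local versus global distinction (assumed here for illustration; the paper's precise definitions, norm, radius, and set of meaningful inputs may differ) is:

```latex
% Local robustness of a classifier f at a single input x, for radius \epsilon:
\[
\forall x' \;\big(\, \|x' - x\|_p \le \epsilon \;\Rightarrow\; \arg\max_i f_i(x') = \arg\max_i f_i(x) \,\big)
\]
% Global robustness: local robustness holds at every meaningful input x in \mathcal{X}:
\[
\forall x \in \mathcal{X} \;\; \forall x' \;\big(\, \|x' - x\|_p \le \epsilon \;\Rightarrow\; \arg\max_i f_i(x') = \arg\max_i f_i(x) \,\big)
\]
```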
arXiv Detail & Related papers (2024-05-31T00:46:04Z) - Certified Human Trajectory Prediction [66.1736456453465]
We propose a certification approach tailored for trajectory prediction that provides guaranteed robustness. To mitigate the performance drop inherent in certification, we propose a diffusion-based trajectory denoiser and integrate it into our method. We demonstrate the accuracy and robustness of the certified predictors and highlight their advantages over non-certified ones.
arXiv Detail & Related papers (2024-03-20T17:41:35Z) - Robust and efficient verification of graph states in blind measurement-based quantum computation [52.70359447203418]
Blind quantum computation (BQC) is a secure quantum computation method that protects the privacy of clients.
It is crucial to verify whether the resource graph states are accurately prepared in the adversarial scenario.
Here, we propose a robust and efficient protocol for verifying arbitrary graph states with any prime local dimension.
arXiv Detail & Related papers (2023-05-18T06:24:45Z) - Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness over state-of-the-art training methods.
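A minimal sketch of such a proxy, assuming an arbitrary classifier `model` that maps a batch of inputs to logits (the function name, noise level, and sample count below are illustrative assumptions, not the paper's code):

```python
import torch

def accuracy_under_gaussian_noise(model, x, label, sigma=0.25, n_samples=100):
    """Estimate the fraction of Gaussian perturbations of x that keep the
    predicted class equal to `label` (an easy-to-compute robustness proxy).

    Illustrative sketch only: `model` is any callable mapping a batch of
    inputs to logits; names and defaults are assumptions.
    """
    with torch.no_grad():
        noise = torch.randn((n_samples,) + x.shape) * sigma      # i.i.d. Gaussian perturbations
        preds = model(x.unsqueeze(0) + noise).argmax(dim=1)       # predicted class per noisy copy
    return (preds == label).float().mean().item()                 # fraction still classified correctly
```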
arXiv Detail & Related papers (2022-12-18T03:57:12Z) - Generalizability of Adversarial Robustness Under Distribution Shifts [57.767152566761304]
We take a first step towards investigating the interplay between empirical and certified adversarial robustness on the one hand and domain generalization on the other.
We train robust models on multiple domains and evaluate their accuracy and robustness on an unseen domain.
We extend our study to cover a real-world medical application, in which adversarial augmentation significantly boosts the generalization of robustness with minimal effect on clean data accuracy.
arXiv Detail & Related papers (2022-09-29T18:25:48Z) - Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z) - CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z) - On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z) - Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty [14.191310794366075]
We develop a data-driven optimization-based method capable of simultaneously certifying the safety of network outputs and localizing them.
We experimentally demonstrate the efficacy and tractability of the method on a deep ReLU network.
arXiv Detail & Related papers (2020-10-02T19:13:35Z) - Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs [2.1320960069210484]
Neural networks are commonly used in safety-critical real-world applications, yet they are susceptible to adversarial examples.
Proving either that no such adversarial examples exist, or providing a concrete instance, is therefore crucial to ensure safe applications.
We provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks.
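As background, a generic interval-analysis sketch for max-pooling (this is the textbook sound bound, not Debona's tighter relaxations): if every input element lies in a known interval, the pooled output of each window lies between the maximum of the lower bounds and the maximum of the upper bounds.

```python
import numpy as np

def maxpool_interval_bounds(lower, upper, k=2):
    """Sound interval bounds for non-overlapping k-by-k max pooling.

    Generic interval-analysis sketch (not the tighter bounds proved in Debona):
    if input element i lies in [lower_i, upper_i], then each window's maximum
    lies in [max_i lower_i, max_i upper_i]. `lower` and `upper` are 2-D arrays
    whose shapes are divisible by k.
    """
    h, w = lower.shape
    lo = lower.reshape(h // k, k, w // k, k).max(axis=(1, 3))  # max of lower bounds per window
    up = upper.reshape(h // k, k, w // k, k).max(axis=(1, 3))  # max of upper bounds per window
    return lo, up

# Tiny usage example: a 4x4 feature map with +/-0.1 input uncertainty.
x = np.arange(16, dtype=np.float64).reshape(4, 4)
lo, up = maxpool_interval_bounds(x - 0.1, x + 0.1)
print(lo)  # lower bound of each 2x2 window's maximum
print(up)  # upper bound of each 2x2 window's maximum
```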
arXiv Detail & Related papers (2020-06-16T10:00:33Z) - Exploiting Verified Neural Networks via Floating Point Numerical Error [15.639601066641099]
Verifiers aspire to answer whether a neural network guarantees certain properties with respect to all inputs in a space.
We show that neglecting floating point error leads to unsound verification that can be systematically exploited in practice.
arXiv Detail & Related papers (2020-03-06T03:58:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.