VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees
- URL: http://arxiv.org/abs/2312.09748v2
- Date: Mon, 10 Jun 2024 09:35:57 GMT
- Title: VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees
- Authors: Anahita Baninajjar, Ahmed Rezine, Amir Aminifar
- Abstract summary: We propose a novel framework to generate Verification-Friendly Neural Networks (VNNs).
We present a post-training optimization framework to achieve a balance between preserving prediction performance and verification-friendliness.
- Score: 3.208888890455612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning techniques often lack formal correctness guarantees, as evidenced by the widespread adversarial examples that plague most deep-learning applications. This lack of formal guarantees has resulted in several research efforts aimed at verifying Deep Neural Networks (DNNs), with a particular focus on safety-critical applications. However, formal verification techniques still face major scalability and precision challenges. The over-approximation introduced during formal verification to tackle the scalability challenge often results in inconclusive analysis. To address this challenge, we propose a novel framework to generate Verification-Friendly Neural Networks (VNNs). We present a post-training optimization framework that balances preserving prediction performance with verification-friendliness. The resulting VNNs are comparable to the original DNNs in prediction performance while remaining amenable to formal verification techniques. This enables us to establish robustness for more VNNs than their DNN counterparts, in a time-efficient manner.
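The paper's key idea is a post-training optimization that trades a little accuracy for much tighter verifier behavior. As a rough intuition, zeroing out small-magnitude weights shrinks the error terms that abstraction-based verifiers propagate. The sketch below is a minimal magnitude-pruning stand-in for that idea, not the paper's actual optimization; the function names (`prune_for_verification`, `accuracy`), the quantile schedule, and the accuracy tolerance are all illustrative assumptions.

```python
import numpy as np

def accuracy(weights, biases, X, y):
    """Forward pass of a ReLU MLP, then top-1 accuracy."""
    h = X
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W.T + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)
    return float(np.mean(np.argmax(h, axis=1) == y))

def prune_for_verification(weights, biases, X_val, y_val, tol=0.01):
    """Zero out small-magnitude weights layer by layer, keeping a pruning
    step only if held-out accuracy stays within `tol` of the original
    model (a crude stand-in for the paper's optimization)."""
    base = accuracy(weights, biases, X_val, y_val)
    pruned = [W.copy() for W in weights]
    for i in range(len(pruned)):
        for thresh in np.quantile(np.abs(pruned[i]), [0.2, 0.4, 0.6, 0.8]):
            candidate = np.where(np.abs(pruned[i]) < thresh, 0.0, pruned[i])
            trial = pruned[:i] + [candidate] + pruned[i + 1:]
            if base - accuracy(trial, biases, X_val, y_val) <= tol:
                pruned[i] = candidate      # accept the sparser layer
            else:
                break                      # this threshold cuts too much
    return pruned
```

Sparser weight matrices propagate fewer nonzero error terms through interval- or zonotope-based analyses, which is the informal reason a pruned network is more often conclusively verified.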
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations; a toy enumeration of the counting question follows this entry.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
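To make the counting question concrete, the toy sketch below exhaustively counts misclassified points of a tiny ReLU network over a discretized input grid. Brute-force enumeration only illustrates what is being counted; the paper's approach computes exact counts over continuous input regions with symbolic methods. All shapes and names here are made up for the example.

```python
import itertools
import numpy as np

def count_violations(W1, b1, W2, b2, grid, safe_label):
    """Exhaustively count grid points misclassified by a tiny ReLU net.
    This only makes the #DNN-Verification counting problem concrete;
    real tools reason symbolically over continuous regions."""
    violations = 0
    for x in itertools.product(grid, repeat=W1.shape[1]):
        h = np.maximum(np.asarray(x) @ W1.T + b1, 0.0)
        if np.argmax(h @ W2.T + b2) != safe_label:
            violations += 1
    return violations

# Toy usage: 2 inputs discretized to 11 values each (121 configurations).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
print(count_violations(W1, b1, W2, b2, np.linspace(0, 1, 11), safe_label=0))
```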
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs; a minimal interval-propagation sketch follows this entry.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
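QA-IBP builds on interval bound propagation (IBP), which pushes a box through the network layer by layer. The snippet below shows plain IBP for affine and ReLU layers only; the quantization-aware part of QA-IBP is omitted, and the two-layer usage example is an arbitrary toy.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate an elementwise interval [l, u] through x -> W @ x + b."""
    mid, rad = (u + l) / 2.0, (u - l) / 2.0
    new_mid = W @ mid + b
    new_rad = np.abs(W) @ rad          # radius grows with |W|
    return new_mid - new_rad, new_mid + new_rad

def ibp_relu(l, u):
    """ReLU is monotone, so the interval maps through directly."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Bounds on a 2-layer net's output under an L-inf perturbation of radius eps.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
x, eps = np.ones(3), 0.1
l, u = ibp_affine(x - eps, x + eps, W1, b1)
l, u = ibp_relu(l, u)
l, u = ibp_affine(l, u, W2, b2)
print("output bounds:", l, u)
```

If the lower bound of the correct class exceeds the upper bounds of all other classes, robustness is certified for the whole box; QA-IBP trains so that this check passes even after quantization.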
- CARE: Certifiably Robust Learning with Reasoning via Variational Inference [26.210129662748862]
We propose CARE, a certifiably robust learning-with-reasoning pipeline.
CARE achieves significantly higher certified robustness compared with the state-of-the-art baselines.
We additionally conduct ablation studies to demonstrate the empirical robustness of CARE and the effectiveness of different knowledge-integration strategies.
arXiv Detail & Related papers (2022-09-12T07:15:52Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs); a small interval fixed-point sketch follows this entry.
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
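An implicit layer defines its output as the solution of an equation such as z = relu(W z + U x + b) rather than a feedforward computation. One simple way to bound such a layer over an input box, assuming the iteration contracts (e.g., a small enough norm of W), is to iterate the interval map until it stabilizes. The sketch below is a naive Picard-style iteration, not the mixed-monotonicity analysis the paper studies; the names and the contraction assumption are mine.

```python
import numpy as np

def inn_interval_bounds(W, U, b, xl, xu, iters=100):
    """Interval enclosure for the implicit layer z = relu(W @ z + U @ x + b)
    over x in [xl, xu], by iterating the interval map (a naive Picard-style
    sketch; soundness of the limit needs the map to contract, e.g. ||W|| < 1)."""
    xm, xr = (xu + xl) / 2.0, (xu - xl) / 2.0
    cl = U @ xm + b - np.abs(U) @ xr   # interval for the constant part U x + b
    cu = U @ xm + b + np.abs(U) @ xr
    zl = zu = np.zeros(W.shape[0])
    for _ in range(iters):
        zm, zr = (zu + zl) / 2.0, (zu - zl) / 2.0
        yl, yu = W @ zm - np.abs(W) @ zr + cl, W @ zm + np.abs(W) @ zr + cu
        nl, nu = np.maximum(yl, 0.0), np.maximum(yu, 0.0)
        if np.allclose(nl, zl) and np.allclose(nu, zu):
            break                      # interval has (numerically) stabilized
        zl, zu = nl, nu
    return zl, zu

# Toy usage with a contracting W (scaled down so the iteration settles).
rng = np.random.default_rng(3)
W = 0.1 * rng.normal(size=(4, 4))
U, b = rng.normal(size=(4, 2)), np.zeros(4)
print(inn_interval_bounds(W, U, b, xl=np.zeros(2), xu=0.1 * np.ones(2)))
```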
- Scalable Verification of GNN-based Job Schedulers [16.7289491091472]
Graph Neural Networks (GNNs) have been applied to scheduling jobs over clusters, achieving better performance than hand-crafted heuristics.
We develop GNN-Verify, the first general framework for verifying both single-step and multi-step properties of these schedulers.
arXiv Detail & Related papers (2022-03-07T06:13:04Z)
- PAC Confidence Predictions for Deep Neural Network Classifiers [28.61937254015157]
A key challenge for deploying deep neural networks (DNNs) in safety-critical settings is the need for rigorous ways to quantify their uncertainty.
We propose an algorithm for constructing predicted classification confidences for DNNs that comes with provable correctness guarantees; a generic PAC-style bound is sketched after this entry.
arXiv Detail & Related papers (2020-11-02T04:09:17Z)
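"PAC" here means a guarantee of the form "with probability at least 1 - delta over the held-out data, the bound holds." The snippet below is a generic finite-sample bound of that flavor via Hoeffding's inequality, not the paper's specific construction (which calibrates confidence predictions); the names and numbers are illustrative.

```python
import numpy as np

def pac_error_bound(correct, delta=0.05):
    """Hoeffding upper bound on a classifier's true error rate from n
    held-out correctness indicators, valid with probability >= 1 - delta.
    A generic PAC-style bound, not the paper's exact construction."""
    n = len(correct)
    emp_err = 1.0 - np.mean(correct)
    return emp_err + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

# Usage: 1000 held-out predictions, 950 of them correct.
correct = np.array([1] * 950 + [0] * 50)
print(f"error <= {pac_error_bound(correct):.3f} w.p. >= 0.95")
```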
- Efficient Exact Verification of Binarized Neural Networks [15.639601066641099]
Binarized Neural Networks (BNNs) provide comparable robustness to full-precision networks while allowing exact and significantly more efficient verification.
We present a new system, EEV, for efficient and exact verification of BNNs; a brute-force toy version of the exactness question follows this entry.
We demonstrate the effectiveness of EEV by presenting the first exact verification results for L-inf-bounded adversarial robustness of nontrivial convolutional BNNs.
arXiv Detail & Related papers (2020-05-07T16:34:30Z)
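"Exact" verification returns a definite yes/no with no over-approximation. The toy sketch below makes that concrete by exhaustively checking a tiny binarized network against all inputs within Hamming distance k; EEV answers the same exact question by encoding it for a SAT solver instead of enumerating. Network shapes and names are invented for the example.

```python
import itertools
import numpy as np

def bnn_forward(x, layers):
    """Binarized net: hidden layers compute sign(W @ x) with W, x in
    {-1, +1}; the last layer returns raw scores so argmax gives a class."""
    for W in layers[:-1]:
        x = np.where(W @ x >= 0, 1, -1)
    return layers[-1] @ x

def exactly_robust(x, layers, k):
    """Exhaustively check robustness to flipping up to k input bits.
    Enumeration is exponential in k; EEV instead encodes the same exact
    question as SAT and lets a solver search the space."""
    ref = np.argmax(bnn_forward(x, layers))
    for flips in range(1, k + 1):
        for idx in itertools.combinations(range(len(x)), flips):
            z = x.copy()
            z[list(idx)] *= -1
            if np.argmax(bnn_forward(z, layers)) != ref:
                return False
    return True

rng = np.random.default_rng(2)
layers = [rng.choice([-1, 1], size=(6, 8)), rng.choice([-1, 1], size=(2, 6))]
x = rng.choice([-1, 1], size=8)
print(exactly_robust(x, layers, k=1))
```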
- Scalable Quantitative Verification For Deep Neural Networks [44.570783946111334]
We propose a test-driven verification framework for deep neural networks (DNNs).
Our technique performs tests until the soundness of a formal probabilistic property can be proven; a simple sampling-based sketch follows this entry.
Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees.
arXiv Detail & Related papers (2020-02-17T09:53:50Z)
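A minimal version of the test-until-proven idea: sample inputs, count property violations, and use a concentration bound to certify that the violation probability is below a threshold theta with confidence 1 - delta. This fixed-sample Hoeffding variant is a simplification of the framework, which relies on sharper sequential hypothesis testing; `sample_violation`, theta, and delta are placeholders.

```python
import numpy as np

def certify_violation_rate(sample_violation, theta=0.1, delta=1e-3):
    """Sample until we can assert P(violation) < theta with confidence
    1 - delta (fixed-sample Hoeffding variant; the paper's framework uses
    sharper sequential testing). `sample_violation()` returns 1 on failure."""
    # Choose n so the Hoeffding radius sqrt(ln(1/delta) / (2n)) is theta/2.
    n = int(np.ceil(np.log(1.0 / delta) / (2.0 * (theta / 2.0) ** 2)))
    emp = sum(sample_violation() for _ in range(n)) / n
    radius = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return emp + radius < theta   # True => property certified (up to delta)

# Toy usage: a "network" whose property never fails on sampled inputs.
print(certify_violation_rate(lambda: 0, theta=0.1, delta=1e-3))
```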
This list is automatically generated from the titles and abstracts of the papers on this site.