Certifying Global Robustness for Deep Neural Networks
- URL: http://arxiv.org/abs/2405.20556v1
- Date: Fri, 31 May 2024 00:46:04 GMT
- Title: Certifying Global Robustness for Deep Neural Networks
- Authors: You Li, Guannan Zhao, Shuyu Kong, Yunqi He, Hai Zhou
- Abstract summary: A globally robust deep neural network resists perturbations on all meaningful inputs.
Current robustness certification methods emphasize local robustness, struggling to scale and generalize.
This paper presents a systematic and efficient method to evaluate and verify global robustness for deep neural networks.
- Score: 3.8556106468003613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A globally robust deep neural network resists perturbations on all meaningful inputs. Current robustness certification methods emphasize local robustness, struggling to scale and generalize. This paper presents a systematic and efficient method to evaluate and verify global robustness for deep neural networks, leveraging the PAC verification framework for solid guarantees on verification results. We utilize probabilistic programs to characterize meaningful input regions, setting a realistic standard for global robustness. Additionally, we introduce the cumulative robustness curve as a criterion in evaluating global robustness. We design a statistical method that combines multi-level splitting and regression analysis for the estimation, significantly reducing the execution time. Experimental results demonstrate the efficiency and effectiveness of our verification method and its capability to find rare and diversified counterexamples for adversarial training.
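As a rough illustration of what the abstract is estimating, the sketch below computes a cumulative robustness curve by plain Monte Carlo sampling: draw inputs from an assumed input model, probe each one for the largest perturbation it appears to tolerate, and report the fraction of inputs that remain robust at each radius. The toy network, the Gaussian input model, and the random-probe radius search are all assumptions made for this example; the paper instead characterizes meaningful input regions with probabilistic programs and estimates the curve with multi-level splitting and regression analysis.

```python
# Illustrative sketch only: a plain Monte Carlo estimate of a cumulative
# robustness curve. The paper models meaningful inputs with probabilistic
# programs and estimates the curve with multi-level splitting plus regression;
# the toy network, Gaussian input model, and random-probe radius search below
# are assumptions made just for this example.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU classifier on 2-D inputs (weights are arbitrary).
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)

def predict(x):
    hidden = np.maximum(W1 @ x + b1, 0.0)
    return int(np.argmax(W2 @ hidden + b2))

def probe_radius(x, radii, n_probes=100):
    """Crudely estimate the largest perturbation radius that keeps the label.

    This is a random probe, not a sound certificate: it can overestimate the
    true robust radius because it checks only finitely many perturbations.
    """
    label = predict(x)
    robust_r = 0.0
    for r in radii:
        deltas = rng.uniform(-r, r, size=(n_probes, x.size))
        if all(predict(x + d) == label for d in deltas):
            robust_r = r
        else:
            break
    return robust_r

# Stand-in for the "meaningful input region": a Gaussian input model.
inputs = rng.standard_normal((200, 2))
radii = np.linspace(0.05, 1.0, 20)
estimates = np.array([probe_radius(x, radii) for x in inputs])

# Cumulative robustness curve: the fraction of sampled inputs whose estimated
# robust radius is at least eps, for each eps on the grid.
for eps in radii:
    print(f"eps={eps:.2f}  robust fraction ~ {float(np.mean(estimates >= eps)):.2f}")
```

A multi-level splitting estimator would replace the plain sampling loop with staged conditional sampling, which is what makes rare counterexamples reachable at a practical sample budget.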
Related papers
- Distributionally Robust Statistical Verification with Imprecise Neural Networks [4.094049541486327]
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification.
arXiv Detail & Related papers (2023-08-28T18:06:24Z) - Boosting Adversarial Robustness using Feature Level Stochastic Smoothing [46.86097477465267]
Adversarial defenses have led to a significant improvement in the robustness of Deep Neural Networks.
In this work, we propose a generic method for introducing stochasticity in the network predictions.
We also utilize this smoothing to reject low-confidence predictions.
arXiv Detail & Related papers (2023-06-10T15:11:24Z) - Using Z3 for Formal Modeling and Verification of FNN Global Robustness [15.331024247043999]
We propose a complete specification and implementation of DeepGlobal utilizing the SMT solver Z3 for more explicit definition.
To evaluate the effectiveness of our implementation and improvements, we conduct extensive experiments on a set of benchmark datasets.
arXiv Detail & Related papers (2023-04-20T15:40:22Z) - GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models [60.48306899271866]
We present a new framework, called GREAT Score, for global robustness evaluation of adversarial perturbation using generative models.
We show high correlation and significantly reduced cost of GREAT Score when compared to the attack-based model ranking on RobustBench.
GREAT Score can be used for remote auditing of privacy-sensitive black-box models.
arXiv Detail & Related papers (2023-04-19T14:58:27Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs (a minimal interval bound propagation sketch appears after this list).
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Generalizability of Adversarial Robustness Under Distribution Shifts [57.767152566761304]
We take a first step towards investigating the interplay between empirical and certified adversarial robustness on the one hand and domain generalization on the other.
We train robust models on multiple domains and evaluate their accuracy and robustness on an unseen domain.
We extend our study to cover a real-world medical application, in which adversarial augmentation significantly boosts the generalization of robustness with minimal effect on clean data accuracy.
arXiv Detail & Related papers (2022-09-29T18:25:48Z) - Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding [8.173681464694651]
We formulate the global robustness certification for neural networks with ReLU activation functions as a mixed-integer linear programming (MILP) problem.
Our approach includes a novel interleaving twin-network encoding scheme, where two copies of the neural network are encoded side-by-side.
A case study of closed-loop control safety verification is conducted, and demonstrates the importance and practicality of our approach.
arXiv Detail & Related papers (2022-03-26T19:23:37Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Globally-Robust Neural Networks [21.614262520734595]
We formalize a notion of global robustness, which captures the operational properties of on-line local robustness certification.
We show that widely-used architectures can be easily adapted to this objective by incorporating efficient global Lipschitz bounds into the network (a minimal Lipschitz-bound sketch appears after this list).
arXiv Detail & Related papers (2021-02-16T21:10:52Z) - Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty [14.191310794366075]
We develop a data-driven optimization-based method capable of simultaneously certifying the safety of network outputs and localizing them.
We experimentally demonstrate the efficacy and tractability of the method on a deep ReLU network.
arXiv Detail & Related papers (2020-10-02T19:13:35Z)
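The quantization-aware interval bound propagation entry above points to the sketch below: a minimal, non-quantized interval bound propagation pass through an affine layer followed by ReLU, ending in the usual logit-margin certification check. QA-IBP itself additionally propagates bounds through quantized weights and activations; the toy network and input box here are assumptions made only for illustration.

```python
# Minimal sketch of plain interval bound propagation (IBP) through an affine
# layer followed by ReLU. QA-IBP in the entry above additionally models the
# quantization of weights and activations; the toy weights, the input box,
# and the certification check here are illustrative assumptions.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an elementwise box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    # ReLU is monotone, so the box just gets clipped at zero.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

x = rng.standard_normal(3)
eps = 0.1                        # L-infinity perturbation budget
lo, hi = x - eps, x + eps        # input box around x

lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)

# If the lower bound of the predicted logit exceeds the upper bound of every
# other logit over the whole box, the prediction cannot flip inside the box.
pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2))
print("certified:", bool(lo[pred] > np.delete(hi, pred).max()))
```

Because the bounds come from a single forward pass, they are cheap enough to use inside a training loop, which is what IBP-style certified training relies on.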
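The Globally-Robust Neural Networks entry likewise points here: below is a minimal sketch of a global Lipschitz-bound check, where the product of layer spectral norms bounds how far any logit can move under a small L2 perturbation. The toy weights, epsilon, and margin test are illustrative assumptions, not the paper's exact construction; the paper incorporates efficient Lipschitz bounds directly into the network itself.

```python
# Minimal sketch of a global Lipschitz-bound robustness check in the spirit of
# the Globally-Robust Neural Networks entry. The product of layer spectral
# norms is a valid but loose global Lipschitz bound for a ReLU network; the
# toy weights, epsilon, and margin test are assumptions for illustration, not
# the paper's exact construction.
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((3, 16)), rng.standard_normal(3)

# ReLU is 1-Lipschitz, so multiplying the layers' spectral norms upper-bounds
# the global L2 Lipschitz constant of the whole logit map.
lipschitz = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

def certified(x, eps):
    """Each logit can move by at most lipschitz * eps under an L2 perturbation
    of size eps, so a margin above 2 * lipschitz * eps cannot be overturned."""
    logits = W2 @ np.maximum(W1 @ x + b1, 0.0) + b2
    top = int(np.argmax(logits))
    runner_up = np.delete(logits, top).max()
    return bool(logits[top] - runner_up > 2.0 * lipschitz * eps)

x = rng.standard_normal(4)
print("certified at eps=0.05:", certified(x, 0.05))
```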