A Tool for Neural Network Global Robustness Certification and Training
- URL: http://arxiv.org/abs/2208.07289v1
- Date: Mon, 15 Aug 2022 15:58:16 GMT
- Title: A Tool for Neural Network Global Robustness Certification and Training
- Authors: Zhilu Wang, Yixuan Wang, Feisi Fu, Ruochen Jiao, Chao Huang, Wenchao Li, Qi Zhu
- Abstract summary: A certified globally robust network can ensure its robustness on any possible network input.
The state-of-the-art global robustness certification algorithm can only certify networks with at most several thousand neurons.
We propose the GPU-supported global robustness certification framework GROCET, which is more efficient than the previous optimization-based certification approach.
- Score: 12.349979558107496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing interest in leveraging machine learning
technology in safety-critical systems, the robustness of neural networks
under external disturbance has received increasing attention. Global
robustness is a robustness property defined over the entire input domain,
and a certified globally robust network is guaranteed to be robust on any
possible network input. However, the
state-of-the-art global robustness certification algorithm can only certify
networks with at most several thousand neurons. In this paper, we propose the
GPU-supported global robustness certification framework GROCET, which is more
efficient than the previous optimization-based certification approach.
Moreover, GROCET provides differentiable global robustness, which is leveraged
in the training of globally robust neural networks.
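Since the abstract does not spell out GROCET's interface, the sketch below only illustrates the training pattern it describes: a differentiable global robustness term added to the task loss. Everything here is assumed for illustration; in particular, `robustness_surrogate` is a hypothetical stand-in (a product of layer spectral norms, i.e. a crude global Lipschitz proxy), not GROCET's actual measure.

```python
# Minimal sketch: regularizing training with a differentiable robustness
# term, the pattern that GROCET's differentiable global robustness enables.
# `robustness_surrogate` is a placeholder proxy, NOT GROCET's measure.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))

def robustness_surrogate(net: nn.Sequential) -> torch.Tensor:
    # Differentiable proxy: product of spectral norms of the linear
    # layers, an upper bound on the network's global Lipschitz constant.
    bound = torch.ones(())
    for layer in net:
        if isinstance(layer, nn.Linear):
            bound = bound * torch.linalg.matrix_norm(layer.weight, ord=2)
    return bound

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-3  # trades task accuracy against the (proxy) robustness term

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(x), y) + lam * robustness_surrogate(model)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for a real dataset:
x, y = torch.randn(32, 28 * 28), torch.randint(0, 10, (32,))
print(train_step(x, y))
```

Because the robustness term is differentiable, it can be minimized jointly with the task loss by ordinary gradient descent, which is the property the paper exploits for training globally robust networks.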
Related papers
- Certifying Global Robustness for Deep Neural Networks [3.8556106468003613]
A globally robust deep neural network resists perturbations on all meaningful inputs.
Current robustness certification methods emphasize local robustness and struggle to scale and generalize.
This paper presents a systematic and efficient method to evaluate and verify global robustness for deep neural networks.
arXiv Detail & Related papers (2024-05-31T00:46:04Z)
- The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning [73.5095051707364]
We consider the classical distribution-agnostic framework and algorithms minimising empirical risks.
We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks is extremely challenging.
arXiv Detail & Related papers (2023-09-13T16:33:27Z)
- A Graph Transformer-Driven Approach for Network Robustness Learning [27.837847091520842]
This paper proposes a novel, versatile, and unified robustness learning approach via graph transformer (NRL-GT).
NRL-GT accomplishes the task of controllability robustness learning and connectivity robustness learning from multiple aspects.
It is also able to deal with complex networks of different sizes with low learning error and high efficiency.
arXiv Detail & Related papers (2023-06-12T07:34:21Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
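Interval bound propagation itself is a standard construction: each layer maps an elementwise input box to an output box. The sketch below shows plain (non-quantized) IBP through one linear layer and a ReLU in NumPy; QA-IBP's quantization-aware refinements are not reproduced here.

```python
# Standard interval bound propagation (IBP) through linear + ReLU.
# QA-IBP additionally models quantization effects, omitted here.
import numpy as np

def linear_bounds(lo, hi, W, b):
    # Propagate the box [lo, hi] through x -> W @ x + b: positive
    # weights pull from the matching bound, negative from the opposite.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_bounds(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(3, 4)), rng.normal(size=3), rng.normal(size=4)
eps = 0.1  # l-infinity perturbation radius around x
lo, hi = relu_bounds(*linear_bounds(x - eps, x + eps, W, b))
print(lo, hi)  # box guaranteed to contain the output for any
               # perturbed input within the eps-ball around x
```

Certified training of this kind minimizes a loss on the worst-case logits implied by such bounds, so the certificate holds by construction.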
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
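As a concrete picture of the operation studied, here is a minimal global magnitude-pruning sketch; the summary above does not specify the paper's actual pruning method or schedule, so this is purely illustrative.

```python
# Illustrative global magnitude pruning: zero out the fraction
# `sparsity` of linear-layer weights with the smallest magnitude.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float) -> None:
    weights = [m.weight for m in model.modules() if isinstance(m, nn.Linear)]
    all_w = torch.cat([w.detach().abs().flatten() for w in weights])
    k = int(sparsity * all_w.numel())
    if k == 0:
        return
    threshold = all_w.kthvalue(k).values  # k-th smallest magnitude
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())

model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 10))
magnitude_prune(model, sparsity=0.8)
nonzero = sum((m.weight != 0).sum().item()
              for m in model.modules() if isinstance(m, nn.Linear))
print(f"nonzero weights remaining: {nonzero}")
```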
- Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding [8.173681464694651]
We formulate the global robustness certification for neural networks with ReLU activation functions as a mixed-integer linear programming (MILP) problem.
Our approach includes a novel interleaving twin-network encoding scheme, where two copies of the neural network are encoded side-by-side.
A case study of closed-loop control safety verification is conducted, demonstrating the importance and practicality of our approach.
arXiv Detail & Related papers (2022-03-26T19:23:37Z)
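For concreteness, one standard formalization of the global robustness property that such an encoding certifies (the notation here is assumed, not quoted from the paper) is:

```latex
% Global robustness of f over the whole input domain X:
% every pair of inputs within l-infinity distance \epsilon
% produces outputs within distance \delta.
\forall x, x' \in X:\quad
  \lVert x - x' \rVert_\infty \le \epsilon
  \;\Longrightarrow\;
  \lVert f(x) - f(x') \rVert_\infty \le \delta
```

The two copies in the interleaving twin-network encoding correspond to the evaluations of f(x) and f(x'), so the MILP can reason about both executions jointly.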
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- Globally-Robust Neural Networks [21.614262520734595]
We formalize a notion of global robustness, which captures the operational properties of on-line local robustness certification.
We show that widely-used architectures can be easily adapted to this objective by incorporating efficient global Lipschitz bounds into the network.
arXiv Detail & Related papers (2021-02-16T21:10:52Z)
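A global Lipschitz bound turns into an on-line local certificate via the prediction margin, which is the operational property this line of work builds on. The sketch below shows the standard margin argument (illustrative, not the paper's exact certification rule): if f is L-Lipschitz in the l2 norm, any logit gap f_i - f_j changes by at most 2 * L * ||x - x'||, so a prediction with margin m cannot flip within radius m / (2L).

```python
# Illustrative Lipschitz-margin certificate: the top-1 prediction is
# provably stable within radius margin / (2 * L) when the network is
# L-Lipschitz (l2). A standard bound, not the paper's exact rule.
import torch

def certified_radius(logits: torch.Tensor, lipschitz_bound: float) -> float:
    top2 = torch.topk(logits, k=2).values
    margin = (top2[0] - top2[1]).item()  # gap between top two logits
    return margin / (2.0 * lipschitz_bound)

logits = torch.tensor([4.2, 1.1, 0.3])  # example network output at some x
L = 3.5  # global Lipschitz bound, e.g. a product of layer spectral norms
print(certified_radius(logits, L))  # l2 radius with unchanged prediction
```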
- Decentralized Control with Graph Neural Networks [147.84766857793247]
We propose a novel framework using graph neural networks (GNNs) to learn decentralized controllers.
GNNs are well-suited for the task since they are naturally distributed architectures and exhibit good scalability and transferability properties.
The problems of flocking and multi-agent path planning are explored to illustrate the potential of GNNs in learning decentralized controllers.
arXiv Detail & Related papers (2020-12-29T18:59:14Z)
- Global Robustness Verification Networks [33.52550848953545]
We develop a global robustness verification framework with three components.
One component is a new network architecture, Sliding Door Network (SDN), which enables feasible rule-based back-propagation.
We demonstrate the effectiveness of our approach on both synthetic and real datasets.
arXiv Detail & Related papers (2020-06-08T08:09:20Z)
- Generative Adversarial Imitation Learning with Neural Networks: Global Optimality and Convergence Rate [122.73276299136568]
Generative adversarial imitation learning (GAIL) demonstrates tremendous success in practice, especially when combined with neural networks.
Despite its empirical success, it remains unclear whether GAIL with neural networks converges to the globally optimal solution.
arXiv Detail & Related papers (2020-03-08T03:39:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.