Global Robustness Verification Networks
- URL: http://arxiv.org/abs/2006.04403v1
- Date: Mon, 8 Jun 2020 08:09:20 GMT
- Title: Global Robustness Verification Networks
- Authors: Weidi Sun, Yuteng Lu, Xiyue Zhang, Zhanxing Zhu and Meng Sun
- Abstract summary: We develop a global robustness verification framework with three components.
A new network architecture, the Sliding Door Network (SDN), enables feasible rule-based "back-propagation".
We demonstrate the effectiveness of our approach on both synthetic and real datasets.
- Score: 33.52550848953545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The wide deployment of deep neural networks, though achieving great success
in many domains, has severe safety and reliability concerns. Existing
adversarial attack generation and automatic verification techniques cannot
formally verify whether a network is globally robust, i.e., whether
adversarial examples are absent from the entire input space. To address this
problem, we develop
a global robustness verification framework with three components: 1) a novel
rule-based "back-propagation" that finds, by logic reasoning, which input
region is responsible for the class assignment; 2) a new network
architecture, the Sliding Door Network (SDN), enabling feasible rule-based
"back-propagation"; 3) a
region-based global robustness verification (RGRV) approach. Moreover, we
demonstrate the effectiveness of our approach on both synthetic and real
datasets.
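For concreteness, one formalization of global robustness from the broader literature (using an abstain class \bot for decision-boundary regions; the paper's own region-based definition may differ in detail) reads:

    % A classifier f is globally (epsilon, l_p)-robust if inputs within
    % epsilon of a confidently classified point never receive a different
    % (non-abstain) label; \bot denotes the abstain/boundary region.
    \forall x, x' \in X:\quad
        \|x - x'\|_p \le \epsilon \;\land\; f(x) \ne \bot
        \;\Longrightarrow\; f(x') \in \{\, f(x), \bot \,\}

Without some such carve-out for boundary regions, no non-constant classifier on a connected input space could satisfy the property, which is why global robustness is usually stated region by region.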
Related papers
- Certifying Global Robustness for Deep Neural Networks [3.8556106468003613]
A globally robust deep neural network resists perturbations on all meaningful inputs.
Current robustness certification methods emphasize local robustness and struggle to scale and generalize.
This paper presents a systematic and efficient method to evaluate and verify global robustness for deep neural networks.
arXiv Detail & Related papers (2024-05-31T00:46:04Z)
- Efficient Network Representation for GNN-based Intrusion Detection [2.321323878201932]
The last decades have seen a growth in the number of cyber-attacks with severe economic and privacy damages.
We propose a novel network representation as a graph of flows that aims to provide relevant topological information for the intrusion detection task.
We present a Graph Neural Network (GNN) based framework responsible for exploiting the proposed graph structure.
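As a rough illustration of what a flow-based graph representation can look like, here is a minimal sketch in which nodes are flows and edges connect flows sharing a host; the flow fields (src, dst, feature vector) are assumed for illustration and the paper's actual construction differs in its details:

    # Minimal sketch: a graph whose nodes are network flows and whose edges
    # connect flows sharing an endpoint, so a GNN can exploit topology.
    import networkx as nx

    flows = [
        ("10.0.0.1", "10.0.0.2", [1500, 12]),   # (src, dst, [bytes, packets])
        ("10.0.0.2", "10.0.0.3", [80, 2]),
        ("10.0.0.1", "10.0.0.3", [64000, 430]),
    ]

    g = nx.Graph()
    for i, (src, dst, feats) in enumerate(flows):
        g.add_node(i, features=feats)           # one node per flow
    for i, (s1, d1, _) in enumerate(flows):
        for j, (s2, d2, _) in enumerate(flows):
            if i < j and {s1, d1} & {s2, d2}:   # flows share a host
                g.add_edge(i, j)

    print(g.number_of_nodes(), g.number_of_edges())  # 3 nodes, 3 edges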
arXiv Detail & Related papers (2023-09-11T16:10:12Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
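A toy sketch of the 1-hop restriction (generic message passing over an adjacency list; GAgN's actual agent design and inference rules are richer than this hypothetical aggregation):

    # Each node only ever sees messages from its direct neighbours, so a
    # maliciously crafted signal cannot reach it globally in one round.
    def one_hop_round(adj, state):
        """adj: {node: [neighbours]}, state: {node: float}."""
        new_state = {}
        for node, neighbours in adj.items():
            msgs = [state[n] for n in neighbours]        # 1-hop view only
            new_state[node] = (state[node] + sum(msgs)) / (1 + len(msgs))
        return new_state

    adj = {0: [1], 1: [0, 2], 2: [1]}
    state = {0: 1.0, 1: 0.0, 2: 0.0}
    print(one_hop_round(adj, state))  # node 2 is still untouched by node 0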
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Using Z3 for Formal Modeling and Verification of FNN Global Robustness [15.331024247043999]
We propose a complete specification and implementation of DeepGlobal, using the SMT solver Z3 to make its definitions more explicit.
To evaluate the effectiveness of our implementation and improvements, we conduct extensive experiments on a set of benchmark datasets.
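To make the idea concrete, here is a minimal z3py sketch that encodes a two-neuron ReLU network and asks whether any input in a region violates a simple output property; the weights and the property are invented for illustration, and DeepGlobal's actual encoding of SDNs is far more involved:

    # Encode y = ReLU(2x) - ReLU(-3x + 1) and ask Z3 whether some input in
    # [0, 1] can drive the output negative. Weights are illustrative only.
    from z3 import Real, Solver, If, And, sat

    x = Real("x")
    h1 = If(2 * x > 0, 2 * x, 0)              # ReLU(2x)
    h2 = If(-3 * x + 1 > 0, -3 * x + 1, 0)    # ReLU(-3x + 1)
    y = h1 - h2

    s = Solver()
    s.add(And(x >= 0, x <= 1))                # input region
    s.add(y < 0)                              # negated property: output < 0
    if s.check() == sat:
        print("counterexample:", s.model()[x])
    else:
        print("property holds on the region")

Here Z3 reports a counterexample, since any x below 0.2 makes the output 5x - 1 negative; an unsat result would certify the property over the whole region.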
arXiv Detail & Related papers (2023-04-20T15:40:22Z)
- Generalizability of Adversarial Robustness Under Distribution Shifts [57.767152566761304]
We take a first step towards investigating the interplay between empirical and certified adversarial robustness on the one hand and domain generalization on the other.
We train robust models on multiple domains and evaluate their accuracy and robustness on an unseen domain.
We extend our study to cover a real-world medical application, in which adversarial augmentation significantly boosts the generalization of robustness with minimal effect on clean data accuracy.
arXiv Detail & Related papers (2022-09-29T18:25:48Z)
- A Tool for Neural Network Global Robustness Certification and Training [12.349979558107496]
A certified globally robust network is guaranteed to be robust on any possible input.
The state-of-the-art global robustness certification algorithm can only certify networks with at most several thousand neurons.
We propose the GPU-supported global robustness certification framework GROCET, which is more efficient than the previous optimization-based certification approach.
arXiv Detail & Related papers (2022-08-15T15:58:16Z)
- Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding [8.173681464694651]
We formulate the global robustness certification for neural networks with ReLU activation functions as a mixed-integer linear programming (MILP) problem.
Our approach includes a novel interleaving twin-network encoding scheme, where two copies of the neural network are encoded side-by-side.
A case study of closed-loop control safety verification is conducted, and demonstrates the importance and practicality of our approach.
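The building block of such MILP encodings is the standard big-M formulation of a single ReLU neuron y = max(0, x) with a binary activation indicator, shown here generically; the paper's contribution, the interleaved twin-network scheme, relates two side-by-side copies of these constraints rather than changing the per-neuron encoding:

    % Standard big-M MILP encoding of y = ReLU(x), given pre-activation
    % bounds L <= x <= U (with L <= 0 <= U) and indicator a in {0, 1}:
    y \ge x, \qquad y \ge 0, \qquad
    y \le x - L\,(1 - a), \qquad y \le U a, \qquad a \in \{0, 1\}

When a = 1 the constraints force y = x (active neuron); when a = 0 they force y = 0 (inactive). The binary indicators are what make the problem mixed-integer rather than a plain linear program.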
arXiv Detail & Related papers (2022-03-26T19:23:37Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency [90.71745178767203]
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- Full network nonlocality [68.8204255655161]
We introduce the concept of full network nonlocality, which describes correlations that necessitate all links in a network to distribute nonlocal resources.
We show that the most well-known network Bell test does not witness full network nonlocality.
More generally, we point out that established methods for analysing local and theory-independent correlations in networks can be combined in order to deduce sufficient conditions for full network nonlocality.
arXiv Detail & Related papers (2021-05-19T18:00:02Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
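For reference, a generic PGD-style loop for crafting the input-space perturbations that adversarial training consumes; this is a standard supervised sketch, whereas the paper's mechanism is self-supervised and uses a different (pretext) loss:

    # Generic PGD attack in the input space (PyTorch sketch).
    import torch
    import torch.nn.functional as F

    def pgd_perturb(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()       # ascend the loss
                x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
                x_adv = x_adv.clamp(0, 1)                 # valid pixel range
        return x_adv.detach()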
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.