Using Z3 for Formal Modeling and Verification of FNN Global Robustness
- URL: http://arxiv.org/abs/2304.10558v2
- Date: Mon, 24 Apr 2023 05:53:03 GMT
- Title: Using Z3 for Formal Modeling and Verification of FNN Global Robustness
- Authors: Yihao Zhang, Zeming Wei, Xiyue Zhang, Meng Sun
- Abstract summary: We propose a complete specification and implementation of DeepGlobal utilizing the SMT solver Z3 for more explicit definition.
To evaluate the effectiveness of our implementation and improvements, we conduct extensive experiments on a set of benchmark datasets.
- Score: 15.331024247043999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Feedforward Neural Networks (FNNs) have achieved remarkable success in
various tasks, they are vulnerable to adversarial examples. Several techniques
have been developed to verify the adversarial robustness of FNNs, but most of
them focus on robustness verification against the local perturbation
neighborhood of a single data point. There is still a large research gap in
global robustness analysis. The global-robustness verifiable framework
DeepGlobal has been proposed to identify *all* possible Adversarial
Dangerous Regions (ADRs) of FNNs, not limited to data samples in a test set. In
this paper, we propose a complete specification and implementation of
DeepGlobal utilizing the SMT solver Z3 for more explicit definition, and
propose several improvements to DeepGlobal for more efficient verification. To
evaluate the effectiveness of our implementation and improvements, we conduct
extensive experiments on a set of benchmark datasets. Visualization of our
experiment results shows the validity and effectiveness of the approach.
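To make the setting concrete, the sketch below is a minimal, hypothetical illustration of the general idea rather than the paper's actual DeepGlobal encoding: a toy 2-2-2 ReLU network is expressed symbolically through Z3's Python API, and the solver is asked whether any two inputs within an assumed eps-ball receive different labels. All weights, the radius eps, and the helper names (relu, forward) are made up for illustration.

```python
# A minimal sketch, NOT the authors' DeepGlobal encoding: it only illustrates
# how an FNN and a robustness query can be expressed as Z3 constraints.
# Weights, the radius `eps`, and all helper names are hypothetical.
# Requires: pip install z3-solver
from z3 import Reals, If, And, Solver, sat

def relu(t):
    # Encode ReLU as a Z3 if-then-else term.
    return If(t > 0, t, 0)

# Toy 2-2-2 network parameters (hypothetical; a real run would load trained weights).
W1, b1 = [[1.0, -2.0], [0.5, 1.5]], [0.1, -0.3]
W2, b2 = [[1.0, -1.0], [-0.5, 2.0]], [0.0, 0.2]

def forward(x):
    # Symbolic forward pass: one hidden ReLU layer, then a linear output layer.
    h = [relu(sum(W1[i][j] * x[j] for j in range(2)) + b1[i]) for i in range(2)]
    return [sum(W2[i][j] * h[j] for j in range(2)) + b2[i] for i in range(2)]

x  = Reals('x0 x1')    # an arbitrary input, not tied to any test-set sample
xp = Reals('xp0 xp1')  # a neighbour of x within an eps-ball
y, yp = forward(x), forward(xp)
eps = 0.05             # assumed perturbation radius

s = Solver()
# x' stays within an L_inf ball of radius eps around x ...
s.add(And(*[And(a - b <= eps, b - a <= eps) for a, b in zip(x, xp)]))
# ... yet the predicted label flips from class 0 to class 1.
s.add(y[0] > y[1], yp[1] > yp[0])

if s.check() == sat:
    print('Witness of a candidate adversarial region:', s.model())
else:
    print('No label flip within eps for this toy network.')
```

Note that both inputs are free symbolic variables rather than fixed test points, which is what distinguishes a global query of this kind from the usual local robustness check around a single sample; the actual DeepGlobal procedure additionally characterizes and enumerates the Adversarial Dangerous Regions themselves.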
Related papers
- Certifying Global Robustness for Deep Neural Networks [3.8556106468003613]
A globally robust deep neural network resists perturbations on all meaningful inputs.
Current robustness certification methods emphasize local robustness, struggling to scale and generalize.
This paper presents a systematic and efficient method to evaluate and verify global robustness for deep neural networks.
arXiv Detail & Related papers (2024-05-31T00:46:04Z) - A Survey of Graph Neural Networks in Real world: Imbalance, Noise,
Privacy and OOD Challenges [75.37448213291668]
This paper systematically reviews existing Graph Neural Networks (GNNs).
We first highlight the four key challenges faced by existing GNNs, paving the way for our exploration of real-world GNN models.
arXiv Detail & Related papers (2024-03-07T13:10:37Z) - Locate and Verify: A Two-Stream Network for Improved Deepfake Detection [33.50963446256726]
Current deepfake detection methods typically lack generalizability.
We propose an innovative two-stream network that effectively enlarges the potential regions from which the model extracts evidence.
We also propose a Semi-supervised Patch Similarity Learning strategy to estimate patch-level forged location annotations.
arXiv Detail & Related papers (2023-09-20T08:25:19Z) - Enumerating Safe Regions in Deep Neural Networks with Provable
Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z) - Exploring the Physical World Adversarial Robustness of Vehicle Detection [13.588120545886229]
Adversarial attacks can compromise the robustness of real-world detection models.
We propose an innovative instant-level data generation pipeline using the CARLA simulator.
Our findings highlight diverse model performances under adversarial conditions.
arXiv Detail & Related papers (2023-08-07T11:09:12Z) - Generalizability of Adversarial Robustness Under Distribution Shifts [57.767152566761304]
We take a first step towards investigating the interplay between empirical and certified adversarial robustness on the one hand and domain generalization on the other.
We train robust models on multiple domains and evaluate their accuracy and robustness on an unseen domain.
We extend our study to cover a real-world medical application, in which adversarial augmentation significantly boosts the generalization of robustness with minimal effect on clean data accuracy.
arXiv Detail & Related papers (2022-09-29T18:25:48Z) - Unsupervised Domain Adaptation for Monocular 3D Object Detection via
Self-Training [57.25828870799331]
We propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D.
We develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain.
STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset.
arXiv Detail & Related papers (2022-04-25T12:23:07Z) - On the Robustness and Generalization of Deep Learning Driven Full
Waveform Inversion [2.5382095320488665]
Full Waveform Inversion (FWI) is commonly framed as an image-to-image translation task.
Despite being trained with synthetic data, the deep learning-driven FWI is expected to perform well when evaluated with sufficient real-world data.
We study such properties by asking: how robust are these deep neural networks and how do they generalize?
arXiv Detail & Related papers (2021-11-28T19:27:59Z) - A Survey on Assessing the Generalization Envelope of Deep Neural
Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art performance on numerous applications.
It is difficult to tell beforehand whether a DNN receiving an input will deliver the correct output, since its decision criteria are usually nontransparent.
This survey connects the three fields within the larger framework of investigating the generalization performance of machine learning methods and in particular DNNs.
arXiv Detail & Related papers (2020-08-21T09:12:52Z) - Global Robustness Verification Networks [33.52550848953545]
We develop a global robustness verification framework with three components.
A new network architecture, the Sliding Door Network (SDN), enables feasible rule-based back-propagation.
We demonstrate the effectiveness of our approach on both synthetic and real datasets.
arXiv Detail & Related papers (2020-06-08T08:09:20Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)