Verifying Quantized Neural Networks using SMT-Based Model Checking
- URL: http://arxiv.org/abs/2106.05997v1
- Date: Thu, 10 Jun 2021 18:27:45 GMT
- Title: Verifying Quantized Neural Networks using SMT-Based Model Checking
- Authors: Luiz Sena, Xidan Song, Erickson Alves, Iury Bessa, Edoardo Manino,
Lucas Cordeiro
- Abstract summary: We develop and evaluate a symbolic verification framework using incremental model checking (IMC) and satisfiability modulo theories (SMT) to check for vulnerabilities in ANNs.
We can provide guarantees on the safe behavior of ANNs implemented both in floating-point and fixed-point arithmetic.
For small- to medium-sized ANNs, our approach completes most of its verification runs in minutes.
- Score: 2.38142799291692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Neural Networks (ANNs) are being deployed on an increasing number
of safety-critical applications, including autonomous cars and medical
diagnosis. However, concerns about their reliability have been raised due to
their black-box nature and apparent fragility to adversarial attacks. Here, we
develop and evaluate a symbolic verification framework using incremental model
checking (IMC) and satisfiability modulo theories (SMT) to check for
vulnerabilities in ANNs. More specifically, we propose several ANN-related
optimizations for IMC, including invariant inference via interval analysis and
the discretization of non-linear activation functions. With this, we can
provide guarantees on the safe behavior of ANNs implemented both in
floating-point and fixed-point (quantized) arithmetic. In this regard, our
verification approach was able to verify and produce adversarial examples for
52 test cases spanning image classification and general machine learning
applications. For small- to medium-sized ANNs, our approach completes most of
its verification runs in minutes. Moreover, in contrast to most
state-of-the-art methods, our approach is not restricted to specific choices of
activation functions or non-quantized representations.
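To make the abstract's two main ingredients concrete, below is a minimal sketch, assuming Python with the z3-solver bindings, of how invariant inference via interval analysis can feed an SMT encoding of a network together with a safety property. The toy 2-2-1 ReLU network, its weights, the input domain [0,1]^2, and the property y <= 2 are hypothetical illustrations, not taken from the paper; the authors' incremental model-checking framework additionally covers bit-precise fixed-point (quantized) semantics and discretized non-linear activations, which this float-level sketch omits.

```python
# Minimal illustrative sketch (NOT the authors' tool): check a toy
# 2-2-1 ReLU network using interval-analysis invariants + an SMT query.
# Requires the z3-solver package; all weights and bounds are hypothetical.
from z3 import Real, Solver, If, And, sat

# Hypothetical fixed weights of a tiny fully-connected ReLU network.
W1 = [[1.0, -2.0], [0.5, 1.5]]
b1 = [0.0, -1.0]
W2 = [[1.0, -1.0]]
b2 = [0.5]

def affine_relu_interval(lo, hi, W, b):
    """Invariant inference: propagate input intervals through affine+ReLU."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(max(l, 0.0))   # ReLU clips the lower bound at 0
        out_hi.append(max(h, 0.0))
    return out_lo, out_hi

# Hidden-neuron invariants for the input domain [0,1]^2 (here: h_i in [0,1]).
h_lo, h_hi = affine_relu_interval([0.0, 0.0], [1.0, 1.0], W1, b1)

# SMT encoding: each ReLU becomes an If-term; the inferred interval
# invariants are added as redundant constraints that prune the search.
x = [Real(f"x{i}") for i in range(2)]
h = [Real(f"h{i}") for i in range(2)]
y = Real("y")
s = Solver()
s.add(And(*[And(xi >= 0, xi <= 1) for xi in x]))
for i in range(2):
    pre = b1[i] + sum(W1[i][j] * x[j] for j in range(2))
    s.add(h[i] == If(pre > 0, pre, 0))
    s.add(h[i] >= h_lo[i], h[i] <= h_hi[i])   # interval invariants
s.add(y == b2[0] + sum(W2[0][j] * h[j] for j in range(2)))

# Safety property "y <= 2 on the whole input domain": assert its negation
# and ask for a model; any model is a concrete adversarial example.
s.add(y > 2)
if s.check() == sat:
    print("counterexample (adversarial input):", s.model())
else:
    print("property holds on [0,1]^2")
```

On this toy instance the inferred invariants already bound the output to [-0.5, 1.5], so the check comes back unsat and the property holds; replacing the negated property with s.add(y > 1) makes the solver return sat with a model such as x = (1, 0), where y = 1.5.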
Related papers
- Feature Attenuation of Defective Representation Can Resolve Incomplete Masking on Anomaly Detection [1.0358639819750703]
In unsupervised anomaly detection (UAD) research, it is necessary to develop a computationally efficient and scalable solution.
We revisit the reconstruction-by-inpainting approach and rethink how to improve it by analyzing its strengths and weaknesses.
We propose Feature Attenuation of Defective Representation (FADeR), which employs only two layers to attenuate the feature information of anomaly reconstruction.
arXiv Detail & Related papers (2024-07-05T15:44:53Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks [9.85360493553261]
Deep neural networks (DNNs) have been increasingly deployed in practice, but the lack of robustness hinders their applications in safety-critical domains.
We present a novel abstraction-refinement approach for scalable and exact DNN verification.
arXiv Detail & Related papers (2022-07-02T07:04:20Z)
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Selective and Features based Adversarial Example Detection [12.443388374869745]
Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations crafted to generate Adversarial Examples (AEs).
We propose a novel unsupervised detection mechanism that uses selective prediction, processing of model layers' outputs, and knowledge transfer concepts in a multi-task learning setting.
Experimental results show that the proposed approach achieves results comparable to state-of-the-art methods against the tested attacks in the white-box scenario, and better results in the black-box and gray-box scenarios.
arXiv Detail & Related papers (2021-03-09T11:06:15Z)
- Incremental Verification of Fixed-Point Implementations of Neural Networks [0.19573380763700707]
We develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference.
Our approach was able to verify and produce examples for 85.8% of 21 test cases considering different input images, and 100% of the properties related to covering methods.
arXiv Detail & Related papers (2020-12-21T10:03:44Z)
- A Simple Framework to Quantify Different Types of Uncertainty in Deep Neural Networks for Image Classification [0.0]
Quantifying uncertainty in a model's predictions is important as it enables the safety of an AI system to be increased.
This is crucial for applications where the cost of an error is high, such as in autonomous vehicle control, medical image analysis, financial estimations or legal fields.
We propose a complete framework to capture and quantify three known types of uncertainty in Deep Neural Networks for the task of image classification.
arXiv Detail & Related papers (2020-11-17T15:36:42Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)