Fast Falsification of Neural Networks using Property Directed Testing
- URL: http://arxiv.org/abs/2104.12418v1
- Date: Mon, 26 Apr 2021 09:16:27 GMT
- Title: Fast Falsification of Neural Networks using Property Directed Testing
- Authors: Moumita Das, Rajarshi Ray, Swarup Kumar Mohalik, Ansuman Banerjee
- Abstract summary: We propose a falsification algorithm for neural networks that directs the search for a counterexample using the safety property specification.
Our algorithm uses a derivative-free sampling-based optimization method.
We show that our falsification procedure detects all the unsafe instances that other verification tools also report as unsafe.
- Score: 0.1529342790344802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are now extensively used in perception, prediction and
control of autonomous systems. Their deployment in safety-critical systems
brings forth the need for verification techniques for such networks. As an
alternative to exhaustive and costly verification algorithms, lightweight
falsification algorithms have been heavily used to search for an input to the
system that produces an unsafe output, i.e., a counterexample to the safety of
the system. In this work, we propose a falsification algorithm for neural
networks that directs the search for a counterexample, guided by a safety
property specification. Our algorithm uses a derivative-free sampling-based
optimization method. We evaluate our algorithm on 45 trained neural network
benchmarks of the ACAS Xu system against 10 safety properties. We show that our
falsification procedure detects all the unsafe instances that other
verification tools also report as unsafe. Moreover, in terms of performance,
our falsification procedure identifies most of the unsafe instances faster
than the state-of-the-art verification tools for feed-forward neural networks,
such as NNENUM and Neurify, in many instances by orders of magnitude.
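Below is a minimal sketch of how such a property-directed, derivative-free sampling-based search might look. The `network` function, the input box, and the `threshold` property are illustrative placeholders (not the paper's ACAS Xu benchmarks), and the cross-entropy-style sampler merely stands in for the paper's own optimizer.

```python
# Minimal sketch: property-directed falsification with a derivative-free,
# sampling-based optimizer. Network, input box, and property are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def network(x):
    # Placeholder 2-layer ReLU network standing in for a trained ACAS Xu-style
    # network (5 inputs, 5 outputs); the weights here are arbitrary.
    W1, b1 = np.full((5, 16), 0.1), np.zeros(16)
    W2, b2 = np.full((16, 5), 0.1), np.zeros(5)
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

# Hypothetical safety property: for every input in the box [lo, hi], output 0
# must stay below `threshold`. An input with violation_score <= 0 is a
# counterexample, so the search is directed toward minimizing this score.
lo = np.full(5, -1.0)
hi = np.full(5, 1.0)
threshold = 0.5

def violation_score(x):
    return threshold - network(x)[0]  # <= 0 means the property is violated

def falsify(n_iters=200, pop=100, elite=10):
    # Cross-entropy-style, derivative-free sampler: draw a population, keep
    # the samples closest to violating the property, refit mean/std, repeat.
    mean, std = (lo + hi) / 2.0, (hi - lo) / 2.0
    for _ in range(n_iters):
        xs = np.clip(rng.normal(mean, std, size=(pop, 5)), lo, hi)
        scores = np.array([violation_score(x) for x in xs])
        best = np.argsort(scores)[:elite]
        if scores[best[0]] <= 0.0:
            return xs[best[0]]        # counterexample found
        mean = xs[best].mean(axis=0)
        std = xs[best].std(axis=0) + 1e-6
    return None                       # search budget exhausted

counterexample = falsify()
print("counterexample:", counterexample)
if counterexample is not None:
    print("unsafe output:", network(counterexample))
```

The violation score plays the role of the property-directed objective: the sampler concentrates its population where the output is closest to crossing the unsafe threshold and returns the first sample that actually crosses it.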
Related papers
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Safety Verification of Neural Network Control Systems Using Guaranteed
Neural Network Model Reduction [3.0839245814393728]
A concept of model reduction precision is proposed to describe the guaranteed distance between the outputs of a neural network and its reduced-size version.
A reachability-based algorithm is proposed to accurately compute the model reduction precision.
An algorithm to compute the reachable set of the original system is developed, supporting much more computationally efficient safety verification.
arXiv Detail & Related papers (2023-01-17T03:25:57Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs (a minimal sketch of plain interval bound propagation appears after this list).
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs w.r.t. a safety specification.
Our method searches for a new, safe NN representation by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the safety specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z) - Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open-source for future work to use for comparison.
arXiv Detail & Related papers (2021-07-01T08:58:16Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to protect the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Towards Repairing Neural Networks Correctly [6.600380575920419]
We propose a runtime verification method to ensure the correctness of neural networks.
Experimental results show that our approach effectively generates neural networks which are guaranteed to satisfy the properties.
arXiv Detail & Related papers (2020-12-03T12:31:07Z) - Enabling certification of verification-agnostic networks via
memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly
Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Exploiting Verified Neural Networks via Floating Point Numerical Error [15.639601066641099]
Verifiers aspire to answer whether a neural network guarantees certain properties with respect to all inputs in a space.
We show that neglecting floating-point error leads to unsound verification that can be systematically exploited in practice.
arXiv Detail & Related papers (2020-03-06T03:58:26Z)
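As referenced in the QA-IBP entry above, here is a minimal illustration of plain (floating-point) interval bound propagation through a small ReLU network. The random weights, the eps ball, and the certification check below are illustrative assumptions and do not reproduce the quantization-aware training method of that paper.

```python
# Minimal sketch: plain interval bound propagation (IBP) through a ReLU network.
import numpy as np

def ibp_affine(l, u, W, b):
    # Propagate the box [l, u] through the affine map x -> x @ W + b using
    # the standard center/radius form of interval arithmetic.
    mu, r = (l + u) / 2.0, (u - l) / 2.0
    mu_out = mu @ W + b
    r_out = r @ np.abs(W)
    return mu_out - r_out, mu_out + r_out

def ibp_relu(l, u):
    # ReLU is monotone, so the bounds simply pass through it.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def certified_robust(x, eps, layers, true_class):
    # Sound (conservative) check: every input in the L-inf eps-ball around x
    # provably keeps the lower bound of the true class's logit above the
    # upper bounds of all other logits.
    l, u = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        l, u = ibp_affine(l, u, W, b)
        if i < len(layers) - 1:
            l, u = ibp_relu(l, u)
    others = np.delete(u, true_class)
    return bool(l[true_class] > others.max())

# Illustrative random 4-8-3 ReLU network and a tiny eps ball.
rng = np.random.default_rng(1)
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 3)), np.zeros(3))]
x = rng.normal(size=4)
logits = np.maximum(x @ layers[0][0] + layers[0][1], 0.0) @ layers[1][0] + layers[1][1]
print(certified_robust(x, eps=0.01, layers=layers, true_class=int(np.argmax(logits))))
```

A quantization-aware variant would additionally have to account for the rounding and clamping that quantized arithmetic introduces; this floating-point sketch ignores those effects.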
This list is automatically generated from the titles and abstracts of the papers on this site.