Incremental Satisfiability Modulo Theory for Verification of Deep Neural
Networks
- URL: http://arxiv.org/abs/2302.06455v1
- Date: Fri, 10 Feb 2023 04:31:28 GMT
- Title: Incremental Satisfiability Modulo Theory for Verification of Deep Neural
Networks
- Authors: Pengfei Yang, Zhiming Chi, Zongxin Liu, Mengyu Zhao, Cheng-Chao Huang,
Shaowei Cai, and Lijun Zhang
- Abstract summary: We present an incremental satisfiability modulo theory (SMT) algorithm based on the Reluplex framework.
We implement our algorithm as an incremental solver called DeepInc, and experimental results show that DeepInc is more efficient in most cases.
- Score: 22.015676101940077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constraint solving is a fundamental approach to the verification of
deep neural networks (DNNs). In the domain of AI safety, a DNN's structure and
parameters may be modified to repair it or to attack it. For such situations, we
propose the incremental DNN verification problem, which asks whether a safety
property still holds after the DNN is modified. To solve the problem, we
present an incremental satisfiability modulo theory (SMT) algorithm based on
the Reluplex framework. We simulate the most important features of the
configurations that infer the verification results of the search branches in
the old solving procedure (with respect to the original network), and
heuristically check whether these proofs remain valid for the modified DNN. We
implement our algorithm as an incremental solver called DeepInc, and
experimental results show that DeepInc is more efficient in most cases. For
cases where the property holds both before and after the modification, the
speedup can reach several orders of magnitude, showing that DeepInc excels at
incrementally searching for counterexamples. Moreover, based
on the framework, we propose the multi-objective DNN repair problem and give an
algorithm based on our incremental SMT solving algorithm. Our repair method
preserves more potential safety properties on the repaired DNNs than
state-of-the-art methods.
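To make the incremental verification question concrete, here is a hedged Python sketch of the problem statement only: does a property that held on the original network still hold after the weights are modified? The toy network, the property, and the sampling-based checker are illustrative inventions; the paper's DeepInc solver instead reuses proof branches from a Reluplex-style search, which is not reproduced here.

```python
# Hypothetical sketch of the incremental DNN verification problem:
# given a property that held on the original network, ask whether it
# still holds after the weights are modified. This naive checker only
# samples the input region; DeepInc (per the abstract) instead reuses
# the original Reluplex search branches, which is not shown here.

def relu(x):
    return [max(v, 0.0) for v in x]

def forward(weights, biases, x):
    """Evaluate a small fully-connected ReLU network on input x."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]
        if i < len(weights) - 1:          # ReLU on hidden layers only
            x = relu(x)
    return x

def holds_on_samples(weights, biases, inputs, prop):
    """Check the safety property on a finite set of sample inputs."""
    return all(prop(forward(weights, biases, x)) for x in inputs)

# Toy 1-hidden-layer network: y = ReLU(2x) - 1 on a scalar input.
W1, b1 = [[2.0]], [0.0]
W2, b2 = [[1.0]], [-1.0]
net = ([W1, W2], [b1, b2])

samples = [[x / 10.0] for x in range(11)]    # inputs in [0, 1]
prop = lambda y: y[0] <= 1.0                 # safety property: output <= 1

original_ok = holds_on_samples(*net, samples, prop)

# "Modify" the network by scaling its output layer, then re-check:
# this is the incremental verification question.
net_mod = ([W1, [[2.0]]], [b1, [-1.0]])      # now y = 4x - 1
modified_ok = holds_on_samples(*net_mod, samples, prop)
print(original_ok, modified_ok)              # property survives? before/after
```

The modification above breaks the property (at x = 1 the modified output is 3 > 1), which is exactly the situation where an incremental solver wants to find a counterexample quickly rather than re-verify from scratch.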
Related papers
- A DPLL(T) Framework for Verifying Deep Neural Networks [9.422860826278788]
Like human-written software, Deep Neural Networks (DNNs) can have bugs and can be attacked.
We introduce NeuralSAT, a new verification approach that adapts the widely-used DPLL(T) algorithm used in modern SMT solvers.
arXiv Detail & Related papers (2023-07-17T18:49:46Z)
- Incremental Verification of Neural Networks [3.9661031279983896]
We propose a new, general framework for incremental and complete DNN verification based on the design of novel theory, data structure, and algorithms.
Our contributions implemented in a tool named IVAN yield an overall geometric mean speedup of 2.4x for verifying challenging MNIST and CIFAR10 classifiers and a geometric mean speedup of 3.8x for the ACAS-XU classifiers over the state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-04T15:28:22Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
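For intuition about the counting problem above, the following hedged Python sketch counts violations of a toy property over a finite input grid. The paper's approach returns the exact count over a continuous input region; this discretized enumeration, and every name in it, is illustrative only.

```python
# Illustrative only: the #DNN-Verification problem counts the inputs
# that violate a safety property. The paper computes exact counts over
# continuous regions; this sketch merely enumerates a finite grid.

def net(x):
    # Toy piecewise-linear "network": ReLU(3x - 1)
    return max(3.0 * x - 1.0, 0.0)

def count_violations(inputs, prop):
    """Count grid points whose network output violates the property."""
    return sum(1 for x in inputs if not prop(net(x)))

grid = [i / 100.0 for i in range(101)]       # 101 points in [0, 1]
unsafe = count_violations(grid, lambda y: y <= 1.0)
print(unsafe)                                # points where ReLU(3x-1) > 1
```

Here the property fails exactly when x > 2/3, so the grid count approximates the measure of the unsafe region; an exact counter must instead reason symbolically over the network's linear pieces.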
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs w.r.t. a safety specification.
Our method is able to search for a new, safe NN representation, by modifying only a few of its weight values.
We perform extensive experiments that demonstrate the capability of our proposed framework to yield NNs that are safe w.r.t. the given safety specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
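As background for the comparison above, interval bound propagation can be sketched for a single affine + ReLU layer. This is a generic, hedged illustration of standard IBP, not the paper's implicit-network analysis; the weights and input bounds are made up.

```python
# Minimal interval bound propagation (IBP) through one affine + ReLU
# layer: given elementwise input bounds [lo, hi], compute sound output
# bounds. Generic sketch, not the paper's implicit-network method.

def affine_bounds(W, b, lo, hi):
    """Propagate interval bounds through y = W x + b.

    Positive weights carry the lower bound to the lower bound;
    negative weights swap the roles of lo and hi.
    """
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        l = bi + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bi + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, 1.0]
lo, hi = [0.0, 0.0], [1.0, 1.0]              # input box [0,1] x [0,1]

lo1, hi1 = affine_bounds(W, b, lo, hi)       # pre-activation bounds
lo2, hi2 = relu_bounds(lo1, hi1)             # post-ReLU bounds
print(lo2, hi2)
```

Composing this layer-by-layer yields a sound but possibly loose outer box on the network's outputs, which is the baseline the paper's reachability analysis is measured against.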
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Provable Repair of Deep Neural Networks [8.55884254206878]
Deep Neural Networks (DNNs) have grown in popularity over the past decade and are now being used in safety-critical domains such as aircraft collision avoidance.
This paper tackles the problem of correcting a DNN once unsafe behavior is found.
We introduce the provable repair problem, which is the problem of repairing a network N to construct a new network N' that satisfies a given specification.
arXiv Detail & Related papers (2021-04-09T15:03:53Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-adversarial functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.