PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier
- URL: http://arxiv.org/abs/2006.10864v2
- Date: Sun, 18 Apr 2021 19:02:41 GMT
- Title: PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier
- Authors: Haitham Khedr, James Ferlez, and Yasser Shoukry
- Abstract summary: We introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs.
We use a convex solver not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Networks (NNs) have increasingly apparent safety implications
commensurate with their proliferation in real-world applications: both
unanticipated and adversarial misclassifications can result in fatal
outcomes. As a consequence, techniques of formal verification have been
recognized as crucial to the design and deployment of safe NNs. In this paper,
we introduce a new approach to formally verify the most commonly considered
safety specifications for ReLU NNs -- i.e. polytopic specifications on the
input and output of the network. Like some other approaches, ours uses a
relaxed convex program to mitigate the combinatorial complexity of the problem.
However, unique in our approach is the way we use a convex solver not only as a
linear feasibility checker, but also as a means of penalizing the amount of
relaxation allowed in solutions. In particular, we encode each ReLU by means of
the usual linear constraints, and combine this with a convex objective function
that penalizes the discrepancy between the output of each neuron and its
relaxation. This convex function is further structured to force the largest
relaxations to appear closest to the input layer; this provides the further
benefit that the most problematic neurons are conditioned as early as possible,
when conditioning layer by layer. This paradigm can be leveraged to create a
verification algorithm that is not only faster in general than competing
approaches, but is also able to verify considerably more safety properties; we
evaluated PEREGRiNN on a standard MNIST robustness verification suite to
substantiate these claims.
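The encoding described in the abstract can be made concrete as a small linear program. The following is a minimal sketch, not the authors' implementation: it assumes `cvxpy` and `numpy`, pre-activation bounds `l`, `u` with l < 0 < u for every hidden neuron (e.g., obtained from interval arithmetic), an unsafe output halfspace `c @ y <= d`, and an illustrative depth-based weighting; the function name and signature are hypothetical.

```python
import cvxpy as cp
import numpy as np

def relaxed_check(Ws, bs, l, u, x_lo, x_hi, c, d):
    """LP relaxation of a ReLU network with a penalized-slack objective (sketch)."""
    x = cp.Variable(Ws[0].shape[1])
    cons = [x >= x_lo, x <= x_hi]
    penalty = 0
    pre, depth = x, len(Ws) - 1              # last (W, b) pair is the affine output layer
    for k in range(depth):
        z = Ws[k] @ pre + bs[k]              # pre-activation, affine in the input
        y = cp.Variable(Ws[k].shape[0])      # relaxed ReLU output
        slope = u[k] / (u[k] - l[k])         # assumes every neuron has l < 0 < u
        cons += [y >= 0, y >= z,
                 y <= cp.multiply(slope, z - l[k])]   # "triangle" upper face
        # y - max(0, z) is the relaxation slack; minimizing y is a convex
        # surrogate for it. Later layers pay more, so any unavoidable slack
        # migrates toward the input layer.
        penalty += (k + 1) * cp.sum(y)
        pre = y
    cons.append(c @ (Ws[-1] @ pre + bs[-1]) <= d)     # unsafe output halfspace
    prob = cp.Problem(cp.Minimize(penalty), cons)
    prob.solve()
    return prob.status                       # "infeasible" => property verified
```

If the program is infeasible, no input in the box reaches the unsafe set even under relaxation, so the property holds; if it is feasible, the weighted penalty concentrates slack near the input layer, suggesting which neurons to condition first, in the spirit of the layer-by-layer conditioning described above.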
Related papers
- Correctness Verification of Neural Networks Approximating Differential Equations
Neural Networks (NNs) approximate the solution of Partial Differential Equations (PDEs).
NNs can become integral parts of simulation software tools, which can accelerate the simulation of complex dynamic systems by more than 100 times.
This work addresses the verification of these functions by defining the NN derivative as a finite difference approximation.
For the first time, we tackle the problem of bounding an NN function without a priori knowledge of the output domain.
arXiv Detail & Related papers (2024-02-12T12:55:35Z)
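A one-line illustration of the finite-difference construction mentioned in the entry above; the stand-in network `nn` and the step size `h` are assumptions for the sketch, not the paper's setup.

```python
import numpy as np

def nn(x):
    return np.tanh(2.0 * x) + 0.1 * x    # stand-in for the verified network

def fd_derivative(f, x, h=1e-4):
    # central difference: approximates f'(x) with O(h^2) error for smooth f
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(fd_derivative(nn, 0.5))
```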
- Tight Certified Robustness via Min-Max Representations of ReLU Neural Networks
The reliable deployment of neural networks in control systems requires rigorous robustness guarantees.
In this paper, we obtain tight robustness certificates over convex representations of ReLU neural networks.
arXiv Detail & Related papers (2023-10-07T21:07:45Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Semantic Strengthening of Neuro-Symbolic Learning
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Automated Repair of Neural Networks
We introduce a framework for repairing unsafe NNs w.r.t. a given safety specification.
Our method is able to search for a new, safe NN representation by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the given safety specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
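As an illustration of repair by touching only a few weights, the following hypothetical sketch applies a minimal-norm correction to the k most influential last-layer weights so that a single counterexample satisfies an assumed specification y[0] <= margin; this is the flavor of sparse weight repair, not the paper's algorithm.

```python
import numpy as np

def repair_few_weights(W2, b2, h, margin=0.0, k=3):
    """Nudge the k most influential last-layer weights so that the
    counterexample with hidden activations h satisfies y[0] <= margin."""
    y0 = W2[0] @ h + b2[0]
    if y0 <= margin:
        return W2                          # already safe on this input
    idx = np.argsort(-np.abs(h))[:k]       # d y0 / d W2[0, j] = h[j]
    # minimal-norm correction restricted to the k chosen weights
    # (assumes at least one selected activation is nonzero)
    correction = (y0 - margin) * h[idx] / np.sum(h[idx] ** 2)
    W2 = W2.copy()
    W2[0, idx] -= correction
    return W2
```

A real repair procedure must of course re-check the whole specification afterwards, since fixing one counterexample can introduce new violations elsewhere.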
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
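For contrast, the interval bound propagation primitive that the entry above compares against looks as follows in its simplest explicit, feedforward form; handling an implicit layer additionally requires iterating these bounds to a fixed point, which is the paper's subject rather than what this sketch shows. Names are illustrative.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    # exact interval image of y = W x + b over the box [lo, hi]
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```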
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Global Optimization of Objective Functions Represented by ReLU Networks
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs
Neural networks are commonly used in safety-critical real-world applications.
Proving either that no adversarial examples exist, or providing a concrete instance, is therefore crucial to ensure safe applications.
We provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks.
arXiv Detail & Related papers (2020-06-16T10:00:33Z)
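Elementwise interval bounds for max-pooling are straightforward, since the max of independent intervals is again an interval; a minimal 1-D sketch follows (the tight bounds proved in the entry above additionally account for dependencies between inputs, which this does not).

```python
import numpy as np

def maxpool_interval(lo, hi, k):
    """Interval bounds for a 1-D max-pool with window and stride k:
    [max of the lower bounds, max of the upper bounds] per window."""
    n = len(lo) // k
    return (lo[:n * k].reshape(n, k).max(axis=1),
            hi[:n * k].reshape(n, k).max(axis=1))
```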