Provable Repair of Deep Neural Networks
- URL: http://arxiv.org/abs/2104.04413v1
- Date: Fri, 9 Apr 2021 15:03:53 GMT
- Title: Provable Repair of Deep Neural Networks
- Authors: Matthew Sotoudeh and Aditya V. Thakur
- Abstract summary: Deep Neural Networks (DNNs) have grown in popularity over the past decade and are now being used in safety-critical domains such as aircraft collision avoidance.
This paper tackles the problem of correcting a DNN once unsafe behavior is found.
We introduce the provable repair problem, which is the problem of repairing a network N to construct a new network N' that satisfies a given specification.
- Score: 8.55884254206878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have grown in popularity over the past decade and
are now being used in safety-critical domains such as aircraft collision
avoidance. This has motivated a large number of techniques for finding unsafe
behavior in DNNs. In contrast, this paper tackles the problem of correcting a
DNN once unsafe behavior is found. We introduce the provable repair problem,
which is the problem of repairing a network N to construct a new network N'
that satisfies a given specification. If the safety specification is over a
finite set of points, our Provable Point Repair algorithm can find a provably
minimal repair satisfying the specification, regardless of the activation
functions used. For safety specifications addressing convex polytopes
containing infinitely many points, our Provable Polytope Repair algorithm can
find a provably minimal repair satisfying the specification for DNNs using
piecewise-linear activation functions. The key insight behind both of these
algorithms is the introduction of a Decoupled DNN architecture, which allows us
to reduce provable repair to a linear programming problem. Our experimental
results demonstrate the efficiency and effectiveness of our Provable Repair
algorithms on a variety of challenging tasks.
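The key reduction described in the abstract, that for a fixed input point the hidden activations are constants, so the network output is linear in one layer's parameters and a minimal repair becomes a linear program, can be sketched as follows. This is a simplified single-layer illustration with made-up weights, points, and margins; it does not model the paper's Decoupled DNN construction.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny 2-4-1 ReLU network (all values are illustrative stand-ins).
W1 = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 1.0], [0.5, 0.5]])
b1 = np.array([0.0, 0.1, 0.0, -0.2])
W2 = np.array([0.5, -0.4, 0.3, 0.2])
b2 = -0.1

def hidden(x):
    # For a fixed input point the hidden activations are constants,
    # so the output is linear in the last layer's weights.
    return np.maximum(0.0, W1 @ x + b1)

# Point repair specification: each point must get the given output sign.
points = [np.array([1.0, 0.5]), np.array([-0.8, 1.2])]
labels = [1.0, -1.0]
margin = 0.1

# LP variables z = [dw (4), db, t]; minimize t = max absolute weight change.
n = 4
c = np.zeros(n + 2)
c[-1] = 1.0
A_ub, b_ub = [], []
for j in range(n + 1):                      # |z_j| <= t
    for sign in (1.0, -1.0):
        row = np.zeros(n + 2)
        row[j], row[-1] = sign, -1.0
        A_ub.append(row)
        b_ub.append(0.0)
for x, y in zip(points, labels):            # y * ((W2+dw)@h + b2+db) >= margin
    h = hidden(x)
    row = np.zeros(n + 2)
    row[:n], row[n] = -y * h, -y
    A_ub.append(row)
    b_ub.append(y * (W2 @ h + b2) - margin)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * (n + 1) + [(0.0, None)], method="highs")
dw, db = res.x[:n], res.x[n]
repaired = [float((W2 + dw) @ hidden(x) + b2 + db) for x in points]
```

Because every constraint is linear in the last-layer parameters, the LP optimum is a provably minimal (in the max-norm) repair of that layer for the finite point set, mirroring the minimality guarantee the paper proves for its full algorithm.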
Related papers
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
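The statistical tolerance-limit idea behind epsilon-ProVe can be illustrated with the classic nonparametric (Wilks-style) bound: if enough inputs are sampled, the sample maximum upper-bounds a chosen fraction of the output distribution with a chosen confidence. This is a generic sketch, not the paper's exact procedure; the network and input domain below are made up.

```python
import math
import numpy as np

def tolerance_sample_size(eps, delta):
    # Smallest n with (1 - eps)^n <= delta: the maximum of n i.i.d. samples
    # then upper-bounds at least a (1 - eps) fraction of the output
    # distribution with confidence 1 - delta.
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

rng = np.random.default_rng(1)

def net(x):
    # Stand-in "network output" for illustration only.
    return np.tanh(x @ np.array([0.7, -0.3]))

eps, delta = 0.05, 0.01
n = tolerance_sample_size(eps, delta)
xs = rng.uniform(-1.0, 1.0, size=(n, 2))   # sample the property input domain
upper = float(np.max(net(xs)))             # statistical upper tolerance limit
```

An output-safety check against a threshold then compares `upper` to the threshold, at the cost of the controllable underestimation the summary mentions.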
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Architecture-Preserving Provable Repair of Deep Neural Networks [2.4687962186994663]
Deep neural networks (DNNs) are becoming increasingly important components of software, and are considered the state-of-the-art solution for a number of problems.
This paper addresses the problem of architecture-preserving V-polytope provable repair of DNNs.
arXiv Detail & Related papers (2023-04-07T06:36:41Z)
- Incremental Satisfiability Modulo Theory for Verification of Deep Neural Networks [22.015676101940077]
We present an incremental satisfiability modulo theory (SMT) algorithm based on the Reluplex framework.
We implement our algorithm as an incremental solver called DeepInc, and experimental results show that DeepInc is more efficient in most cases.
arXiv Detail & Related papers (2023-02-10T04:31:28Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
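A one-step backprojection for a linear system can be sketched with a pair of LPs: given interval bounds on the controller output (which in the NN setting would come from a bound-propagation tool), bound each coordinate of the set of states that can reach a target box in one step. This is a generic illustration with made-up dynamics, not the paper's method.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete-time linear system x+ = A x + B u (hypothetical numbers).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
target_lo = np.array([-0.1, -0.1])   # target set: an axis-aligned box
target_hi = np.array([0.1, 0.1])
u_lo, u_hi = -1.0, 1.0               # assumed bounds on the controller output

def bp_box():
    # Bounding box of {x : exists u in [u_lo, u_hi] with A x + B u in target},
    # found by minimizing/maximizing each coordinate of x over z = [x, u].
    A_ub = np.vstack([np.hstack([A, B]), -np.hstack([A, B])])
    b_ub = np.concatenate([target_hi, -target_lo])
    bounds = [(None, None), (None, None), (u_lo, u_hi)]
    lo, hi = [], []
    for i in range(2):
        c = np.zeros(3)
        c[i] = 1.0
        lo.append(linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds,
                          method="highs").fun)
        hi.append(-linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds,
                           method="highs").fun)
    return np.array(lo), np.array(hi)

bp_lo, bp_hi = bp_box()
```

Iterating this step backwards in time yields the multi-step BP over-approximations used for safety verification; looser controller-output bounds simply widen the box.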
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs with respect to a safety specification.
Our method is able to search for a new, safe NN representation, by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs with respect to the given specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
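The gradient-norm detector idea can be illustrated on a hand-written logistic model: score an input by the norm of the loss gradient taken at the model's own predicted label. GraN itself operates on full DNNs, so every value and name below is an illustrative stand-in, not the paper's implementation.

```python
import numpy as np

# Toy linear classifier (weights are made up for illustration).
w = np.array([0.6, -0.8, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gran_score(x):
    p = sigmoid(w @ x)
    y_hat = float(p >= 0.5)              # the model's own predicted label
    grad = (p - y_hat) * x               # d/dw of cross-entropy at label y_hat
    return float(np.linalg.norm(grad))   # larger norm -> more suspicious input

confident = 5.0 * w                            # far from the decision boundary
borderline = 5.0 * np.array([0.8, 0.6, 0.0])   # same norm, on the boundary
```

For equally sized inputs the score is near zero when the model is confident and large near the decision boundary, which is the signal the detector thresholds to flag adversarial or misclassified examples.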
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.