Automated Repair of Neural Networks
- URL: http://arxiv.org/abs/2207.08157v1
- Date: Sun, 17 Jul 2022 12:42:24 GMT
- Title: Automated Repair of Neural Networks
- Authors: Dor Cohen, Ofer Strichman
- Abstract summary: We introduce a framework for repairing unsafe NNs w.r.t. a safety specification.
Our method searches for a new, safe NN representation by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the Adversarial Robustness property.
- Score: 0.26651200086513094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last decade, Neural Networks (NNs) have been widely used in numerous
applications, including safety-critical ones such as autonomous systems. Despite
their widespread adoption, it is well known that NNs are susceptible to
Adversarial Attacks. Hence, it is highly important to provide guarantees that
such systems work correctly. To remedy these issues, we introduce a framework
for repairing unsafe NNs with respect to a safety specification by utilizing
satisfiability modulo theories (SMT) solvers. Our method searches for a new,
safe NN representation by modifying only a few of its weight values. In
addition, our technique attempts to maximize the similarity to the original
network with regard to its decision boundaries. We perform extensive
experiments which demonstrate the capability of our proposed framework to
yield safe NNs w.r.t. the Adversarial Robustness property, with only a mild
loss of accuracy (in terms of similarity). Moreover, we compare our method
with a naive baseline to demonstrate its effectiveness empirically. To
conclude, we provide an algorithm to automatically repair NNs given safety
properties, and suggest a few heuristics to improve its computational
performance. Currently, this approach can produce correct, small-sized NNs
(i.e., with up to a few hundred parameters) composed of the piecewise-linear
ReLU activation function. Nevertheless, our framework is general in the sense
that it can synthesize NNs with respect to any specification expressed in a
decidable fragment of first-order logic.
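To make the repair idea concrete, the following is a minimal sketch of the kind of SMT encoding the abstract describes, written with the z3-solver Python package. The toy two-neuron ReLU network, the grid-sampled safety property, and the choice of a single free weight are illustrative assumptions, not the paper's actual encoding.

```python
# A minimal sketch of SMT-based weight repair (assumes `pip install z3-solver`).
# The toy network, property, and free-weight choice are illustrative only.
from z3 import If, Optimize, Real, sat

# Original 1-input, 2-hidden-neuron, 1-output ReLU network (toy values).
w1_orig, w2_orig = 1.0, -1.0   # input -> hidden weights
v1, v2 = 2.0, 1.5              # hidden -> output weights

w1 = Real("w1")                # repair only w1; all other weights stay fixed

def net(x):
    h1 = If(w1 * x > 0, w1 * x, 0)   # symbolic ReLU over the free weight
    h2 = max(w2_orig * x, 0.0)       # fixed weight: evaluated concretely
    return v1 * h1 + v2 * h2

opt = Optimize()
# Safety property, sketched on a grid: output <= 1 for all x in [0, 1].
for i in range(11):
    opt.add(net(i / 10.0) <= 1)

# Stay close to the original network: minimize |w1 - w1_orig|.
d = Real("d")
opt.add(d >= w1 - w1_orig, d >= w1_orig - w1)
opt.minimize(d)

if opt.check() == sat:
    print("repaired w1 =", opt.model()[w1])   # expected: 1/2
```

A full encoding would quantify over the input domain symbolically (or refine with counterexamples) rather than sampling a grid; the sketch only shows how the ReLU semantics, the safety constraint, and the minimal-change objective combine into one optimizing SMT query.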
Related papers
- Provably Safe Neural Network Controllers via Differential Dynamic Logic [2.416907802598482]
We present the first general approach that allows reusing control theory results for the verification of neural network control systems (NNCS).
Based on provably safe control envelopes in differential dynamic logic (dL), we derive specifications for the NN, which are then proven via NN verification.
We show that a proof of the NN adhering to the specification is mirrored by a dL proof of the infinite-time safety of the NNCS.
arXiv Detail & Related papers (2024-02-16T16:15:25Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
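The bound propagation such tools build on can be illustrated with plain interval arithmetic. The sketch below is a generic interval bound propagation pass in NumPy, assumed here for illustration; actual verification tools use much tighter, optimized propagators.

```python
import numpy as np

def interval_bound_propagation(layers, lo, up):
    """Propagate elementwise input bounds [lo, up] through a ReLU network.

    layers: list of (W, b) pairs; ReLU is applied after every layer except
    the last. A standard interval-arithmetic over-approximation.
    """
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = W_pos @ lo + W_neg @ up + b
        new_up = W_pos @ up + W_neg @ lo + b
        lo, up = new_lo, new_up
        if i < len(layers) - 1:            # ReLU on hidden layers only
            lo, up = np.maximum(lo, 0), np.maximum(up, 0)
    return lo, up

# Toy 2-2-1 network; the bounds certify the output over the input box.
layers = [(np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)),
          (np.array([[1.0, -2.0]]), np.zeros(1))]
lo, up = interval_bound_propagation(layers, np.array([0.0, 0.0]),
                                    np.array([1.0, 1.0]))
print("output bounds:", lo, up)   # if up < threshold, the property holds
```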
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
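The statistical ingredient mentioned above, tolerance limits, can be sketched with a distribution-free order-statistic bound. The stand-in network, input region, and parameters below are assumptions for illustration, not epsilon-ProVe itself.

```python
import numpy as np

# Sketch: a distribution-free (Wilks) tolerance limit on a network's
# output over an input region. The net and region are illustrative.
rng = np.random.default_rng(0)

def net(x):                        # stand-in for the trained DNN
    return np.tanh(x @ np.array([1.0, -2.0]))

n, R = 1000, 0.99                  # sample count and target coverage
xs = rng.uniform(0.0, 1.0, size=(n, 2))   # input region [0, 1]^2
ymax = net(xs).max()

# Using the sample maximum as a one-sided tolerance bound: with
# probability >= 1 - R**n, at least a fraction R of the region maps
# below ymax.
confidence = 1 - R ** n
print(f"output <= {ymax:.4f} for >= {R:.0%} of inputs (conf {confidence:.4f})")
```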
- Scaling Model Checking for DNN Analysis via State-Space Reduction and Input Segmentation (Extended Version) [12.272381003294026]
Existing frameworks provide robustness and/or safety guarantees for the trained NNs.
We propose FANNet, the first model-checking-based framework for analyzing a broader range of NN properties.
This work develops state-space reduction and input segmentation approaches, to improve the scalability and timing efficiency of formal NN analysis.
arXiv Detail & Related papers (2023-06-29T22:18:07Z)
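Input segmentation of this kind can be sketched as recursive splitting: certify each segment with a cheap bound and subdivide where the bound is inconclusive. The 1-D network stand-in, its Lipschitz constant, and the threshold below are illustrative assumptions.

```python
import numpy as np

def net(x):
    return np.sin(3 * x) + 0.5 * x        # stand-in for the NN

LIP = 3.5                                  # |net'(x)| <= 3 + 0.5 = 3.5

def safe_on(lo, hi, thresh=1.6, depth=0, max_depth=20):
    """Certify net(x) <= thresh on [lo, hi] by segmentation."""
    mid = (lo + hi) / 2
    # Over the segment, net(x) <= net(mid) + LIP * (hi - lo) / 2.
    if net(mid) + LIP * (hi - lo) / 2 <= thresh:
        return True                        # segment certified safe
    if net(mid) > thresh or depth == max_depth:
        return False                       # concrete violation (or gave up)
    return (safe_on(lo, mid, thresh, depth + 1, max_depth) and
            safe_on(mid, hi, thresh, depth + 1, max_depth))

print("property holds on [0, 2]:", safe_on(0.0, 2.0))
```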
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
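For a finite (here, integer-discretized) input domain, exact counting can be sketched by enumerating solver models and blocking each one. The toy network, domain, and property below are assumptions; the paper's exact-count procedure is more sophisticated than naive enumeration.

```python
from z3 import And, If, Int, Solver, sat

# Count the inputs on an integer grid for which a toy ReLU expression
# violates "output <= 4", by model enumeration with blocking clauses.
x = Int("x")

def relu(t):
    return If(t > 0, t, 0)

out = 2 * relu(x) - relu(-x)       # toy stand-in for the DNN

s = Solver()
s.add(And(x >= -5, x <= 5))        # bounded input domain
s.add(out > 4)                     # negation of the safety property

count = 0
while s.check() == sat:
    m = s.model()
    count += 1
    s.add(x != m[x])               # block this violating input
print("violations:", count)        # expected: 3 (x in {3, 4, 5})
```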
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
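One backward-reachability step can be sketched for a 1-D linear system, using interval bounds on the controller NN's output in place of a real bound propagator. The dynamics, the stand-in controller bound, and the avoid set below are illustrative assumptions.

```python
import numpy as np

# One backprojection (BP) step for x' = a*x + b*u with a NN controller.
a, b = 1.1, 0.5

def nn_bounds(x_lo, x_hi):
    # Stand-in for bound propagation through the control NN: here
    # u = clip(-x, -1, 1), which is non-increasing in x, so the output
    # interval over [x_lo, x_hi] follows by monotonicity.
    return np.clip(-x_hi, -1, 1), np.clip(-x_lo, -1, 1)

def backproject(t_lo, t_hi, x_lo, x_hi):
    """Over-approximate the states in [x_lo, x_hi] that may reach the
    target interval [t_lo, t_hi] in one step."""
    u_lo, u_hi = nn_bounds(x_lo, x_hi)
    # a*x + b*u in [t_lo, t_hi] for some u in [u_lo, u_hi], with a > 0:
    bp_lo = (t_lo - b * u_hi) / a
    bp_hi = (t_hi - b * u_lo) / a
    return max(bp_lo, x_lo), min(bp_hi, x_hi)

# States in [-2, 2] that could land in the avoid set [1.8, 2.0]:
print(backproject(1.8, 2.0, -2.0, 2.0))
```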
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves near-optimal resilience gains on multiple graphs while balancing utility, outperforming existing approaches by a large margin.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Provable Repair of Deep Neural Networks [8.55884254206878]
Deep Neural Networks (DNNs) have grown in popularity over the past decade and are now being used in safety-critical domains such as aircraft collision avoidance.
This paper tackles the problem of correcting a DNN once unsafe behavior is found.
We introduce the provable repair problem, which is the problem of repairing a network N to construct a new network N' that satisfies a given specification.
arXiv Detail & Related papers (2021-04-09T15:03:53Z)
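When only a single layer is edited, that layer's output is linear in its weights, so repair over finitely many points reduces to a linear program. Below is a minimal sketch with SciPy; the toy activations, constraints, and L1 objective are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: single-layer repair as an LP over the final-layer weights.
w = np.array([2.0, 1.5])                  # original final-layer weights
H = np.array([[1.0, 0.2],                 # fixed hidden activations of the
              [0.8, 0.6]])                # points whose behavior we repair
c = np.array([1.0, 1.0])                  # require (w + dw) @ h_i <= c_i

# Variables z = [dw_1, dw_2, t_1, t_2]; minimize sum(t) with t >= |dw|.
obj = np.array([0.0, 0.0, 1.0, 1.0])
A_ub = np.vstack([
    np.hstack([H, np.zeros_like(H)]),     # H @ dw <= c - H @ w
    np.hstack([np.eye(2), -np.eye(2)]),   #  dw - t <= 0
    np.hstack([-np.eye(2), -np.eye(2)]),  # -dw - t <= 0
])
b_ub = np.concatenate([c - H @ w, np.zeros(4)])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
if res.success:
    print("weight delta:", res.x[:2], "repaired:", w + res.x[:2])
```

An exact LP solution both certifies the repaired behavior on the given points and keeps the weight change minimal, which is what makes the single-layer restriction attractive.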
- Provably Correct Training of Neural Network Controllers Using Reachability Analysis [3.04585143845864]
We consider the problem of training neural network (NN) controllers for cyber-physical systems that are guaranteed to satisfy safety and liveness properties.
Our approach is to combine model-based design methodologies for dynamical systems with data-driven approaches to achieve this target.
arXiv Detail & Related papers (2021-02-22T07:08:11Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.