Constrained Feedforward Neural Network Training via Reachability
Analysis
- URL: http://arxiv.org/abs/2107.07696v1
- Date: Fri, 16 Jul 2021 04:03:01 GMT
- Title: Constrained Feedforward Neural Network Training via Reachability
Analysis
- Authors: Long Kiu Chung, Adam Dai, Derek Knowles, Shreyas Kousik, Grace X. Gao
- Abstract summary: It remains an open challenge to train a neural network to obey safety constraints.
This work proposes a constrained method to simultaneously train and verify a feedforward neural network with rectified linear unit (ReLU) nonlinearities.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks have recently become popular for a wide variety of uses, but
have seen limited application in safety-critical domains such as robotics near
and around humans. This is because it remains an open challenge to train a
neural network to obey safety constraints. Most existing safety-related methods
only seek to verify that already-trained networks obey constraints, requiring
alternating training and verification. Instead, this work proposes a
constrained method to simultaneously train and verify a feedforward neural
network with rectified linear unit (ReLU) nonlinearities. Constraints are
enforced by computing the network's output-space reachable set and ensuring
that it does not intersect with unsafe sets; training is achieved by
formulating a novel collision-check loss function between the reachable set and
unsafe portions of the output space. The reachable and unsafe sets are
represented by constrained zonotopes, a convex polytope representation that
enables differentiable collision checking. The proposed method is demonstrated
successfully on a network with one nonlinearity layer and approximately 50
parameters.
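As an illustration of the training scheme described in the abstract, the sketch below propagates a set of inputs through a one-hidden-layer ReLU network as a zonotope and penalizes overlap between the resulting reachable set and an unsafe region. It is a minimal sketch under simplifying assumptions: `Zonotope`, `relu_overapprox`, and `overlap_loss` are illustrative names, the ReLU layer is handled with a standard single-slope zonotope relaxation, and the collision penalty is a simple box-overlap volume, whereas the paper uses constrained zonotopes and an exact, differentiable collision check.

```python
# Illustrative sketch only: plain zonotopes plus a box-overlap penalty stand in
# for the paper's constrained-zonotope reachability and LP-based collision loss.
import torch

class Zonotope:
    """Set {c + G @ a : a in [-1, 1]^k}, stored as a center c and generator matrix G."""
    def __init__(self, center, generators):
        self.c = center      # shape (n,)
        self.G = generators  # shape (n, k)

    def linear(self, W, b):
        # Affine maps act exactly on zonotopes.
        return Zonotope(W @ self.c + b, W @ self.G)

    def bounds(self):
        r = self.G.abs().sum(dim=1)  # per-dimension radius
        return self.c - r, self.c + r

def relu_overapprox(z):
    """Sound zonotope enclosure of ReLU(z) via the standard single-slope relaxation."""
    lo, hi = z.bounds()
    slope = torch.where(hi > 0, hi / (hi - lo).clamp(min=1e-9), torch.zeros_like(hi))
    slope = torch.where(lo >= 0, torch.ones_like(slope), slope)   # neuron always active
    slope = torch.where(hi <= 0, torch.zeros_like(slope), slope)  # neuron always inactive
    gap = (slope * (-lo)).clamp(min=0.0) / 2                      # relaxation error term
    c = slope * z.c + gap
    G = torch.cat([slope.unsqueeze(1) * z.G, torch.diag(gap)], dim=1)
    return Zonotope(c, G)

def overlap_loss(z, unsafe_lo, unsafe_hi):
    """Differentiable penalty: overlap volume between the reachable set's bounding
    box and an axis-aligned unsafe box (zero once the two are separated)."""
    lo, hi = z.bounds()
    overlap = torch.minimum(hi, unsafe_hi) - torch.maximum(lo, unsafe_lo)
    return torch.relu(overlap).prod()

# A tiny network in the spirit of the paper's demo: one ReLU layer, a few dozen parameters.
torch.manual_seed(0)
W1 = torch.randn(8, 2, requires_grad=True); b1 = torch.zeros(8, requires_grad=True)
W2 = torch.randn(2, 8, requires_grad=True); b2 = torch.zeros(2, requires_grad=True)

input_set = Zonotope(torch.zeros(2), 0.5 * torch.eye(2))  # inputs in [-0.5, 0.5]^2
unsafe_lo = torch.tensor([1.0, 1.0]); unsafe_hi = torch.tensor([2.0, 2.0])

opt = torch.optim.Adam([W1, b1, W2, b2], lr=1e-2)
for step in range(200):
    reach = relu_overapprox(input_set.linear(W1, b1)).linear(W2, b2)
    loss = overlap_loss(reach, unsafe_lo, unsafe_hi)  # a task loss would be added here
    opt.zero_grad(); loss.backward(); opt.step()
```

The property this shares with the paper's method is that the loss is a function of the entire reachable set rather than of sampled outputs, so gradient descent pushes every possible output away from the unsafe region instead of only the outputs seen during training.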
Related papers
- Safe Reach Set Computation via Neural Barrier Certificates [46.1784503246807]
We present a novel technique for online safety verification of autonomous systems.
Our approach uses barrier certificates given by parameterized neural networks that depend on a given initial set, unsafe sets, and time horizon.
Such networks are trained efficiently offline using system simulations sampled from regions of the state space.
arXiv Detail & Related papers (2024-04-29T15:49:37Z)
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
Convergence guarantees and generalizability of the unrolled networks remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction [8.29487992932196]
We present an approach to train neural networks that can enforce a wide variety of domain constraints and guarantee that every possible prediction satisfies them.
arXiv Detail & Related papers (2023-03-02T10:40:50Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems [31.3812947670948]
We present OVERT: a sound algorithm for safety verification of neural network control policies.
The central concept of OVERT is to abstract nonlinear functions with a set of optimally tight piecewise linear bounds.
OVERT compares favorably to existing methods both in computation time and in tightness of the reachable set; a toy sketch of piecewise-linear bounding in this spirit appears after this list.
arXiv Detail & Related papers (2021-08-03T00:41:27Z)
- Artificial Neural Networks generated by Low Discrepancy Sequences [59.51653996175648]
We generate artificial neural networks as random walks on a dense network graph.
Such networks can be trained sparse from scratch, avoiding the expensive procedure of training a dense network and compressing it afterwards.
We demonstrate that the artificial neural networks generated by low discrepancy sequences can achieve an accuracy within reach of their dense counterparts at a much lower computational complexity.
arXiv Detail & Related papers (2021-03-05T08:45:43Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming [19.51345816555571]
We propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback.
We show that we can compute these approximate reachable sets using semidefinite programming.
We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system.
arXiv Detail & Related papers (2020-04-16T18:48:25Z)
- Reachability Analysis for Feed-Forward Neural Networks using Face Lattices [10.838397735788245]
We propose a parallelizable technique to compute the exact reachable set of a neural network for a given input set.
Our approach can also construct the complete input set that maps to a given output set, so that any input leading to a safety violation can be traced back.
arXiv Detail & Related papers (2020-03-02T22:23:57Z)
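To make the OVERT entry above concrete, here is a minimal sketch of its central idea: enclosing a scalar nonlinearity between piecewise-linear upper and lower bounds. The uniform grid, the sampled gap estimate, and the `pad` safety margin are assumptions made for illustration; OVERT itself computes optimally tight bounds and composes them through the system dynamics.

```python
# Illustrative piecewise-linear bounding of a scalar nonlinearity on an interval,
# in the spirit of OVERT (uniform grid + sampled gaps; not the optimally tight version).
import numpy as np

def pwl_bounds(f, lo, hi, segments=4, samples_per_seg=200, pad=1e-4):
    """Return breakpoints xs and values of upper/lower piecewise-linear envelopes of f."""
    xs = np.linspace(lo, hi, segments + 1)
    upper, lower = f(xs).copy(), f(xs).copy()
    for i in range(segments):
        t = np.linspace(xs[i], xs[i + 1], samples_per_seg)
        chord = np.interp(t, xs[i:i + 2], f(xs[i:i + 2]))  # line through the endpoints
        gap_up = np.max(f(t) - chord) + pad                # how far f rises above the chord
        gap_dn = np.max(chord - f(t)) + pad                # how far f dips below the chord
        # Lift/lower both endpoints so the segment soundly encloses f on [xs[i], xs[i+1]].
        upper[i:i + 2] = np.maximum(upper[i:i + 2], f(xs[i:i + 2]) + gap_up)
        lower[i:i + 2] = np.minimum(lower[i:i + 2], f(xs[i:i + 2]) - gap_dn)
    return xs, upper, lower

# Example: enclose sin(x) on [0, pi] and check soundness on a fine grid.
xs, ub, lb = pwl_bounds(np.sin, 0.0, np.pi)
t = np.linspace(0.0, np.pi, 1000)
assert np.all(np.interp(t, xs, ub) >= np.sin(t))  # upper envelope never dips below sin
assert np.all(np.interp(t, xs, lb) <= np.sin(t))  # lower envelope never rises above sin
```

For verification, each nonlinear term of the dynamics would be replaced by such an envelope and the resulting piecewise-linear relaxation analyzed; the sketch shows only the one-dimensional enclosure step.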