Provably-Safe Neural Network Training Using Hybrid Zonotope Reachability Analysis
- URL: http://arxiv.org/abs/2501.13023v2
- Date: Tue, 01 Apr 2025 01:01:48 GMT
- Title: Provably-Safe Neural Network Training Using Hybrid Zonotope Reachability Analysis
- Authors: Long Kiu Chung, Shreyas Kousik
- Abstract summary: It is difficult to enforce constraints on neural networks in safety-critical control applications. We propose a method that encourages the exact image of a non-convex input set for a neural network with rectified linear unit (ReLU) nonlinearities to avoid a non-convex unsafe region. We demonstrate the practicality of our method by training a forward-invariant neural network controller for a non-convex input set to a safety-critical system.
- Score: 0.46040036610482665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Even though neural networks are being increasingly deployed in safety-critical control applications, it remains difficult to enforce constraints on their output, meaning that it is hard to guarantee safety in such settings. While many existing methods seek to verify a neural network's satisfaction of safety constraints, few address how to correct an unsafe network. The handful of works that extract a training signal from verification cannot handle non-convex sets, and are either conservative or slow. To begin addressing these challenges, this work proposes a neural network training method that can encourage the exact image of a non-convex input set for a neural network with rectified linear unit (ReLU) nonlinearities to avoid a non-convex unsafe region. This is accomplished by reachability analysis with scaled hybrid zonotopes, a modification of the existing hybrid zonotope set representation that enables parameterized scaling of non-convex polytopic sets with a differentiable collision check via mixed-integer linear programs (MILPs). The proposed method was shown to be effective and fast for networks with up to 240 neurons, with the computational complexity dominated by inverse operations on matrices that scale linearly in size with the number of neurons and complexity of input and unsafe sets. We demonstrate the practicality of our method by training a forward-invariant neural network controller for a non-convex input set to an affine system, as well as generating safe reach-avoid plans for a black-box dynamical system.
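The paper's scaled hybrid zonotopes represent non-convex sets and stay exact through ReLUs. As a much simpler illustration of the underlying set-propagation idea, the sketch below pushes a plain (convex) zonotope exactly through one affine layer and then falls back to its interval hull at the ReLU, which loses exactness. All weights and set parameters are illustrative, not from the paper.

```python
# Simplified set propagation through one ReLU layer. The paper uses
# scaled hybrid zonotopes (exact through ReLUs); this sketch uses a
# plain zonotope {c + G xi : ||xi||_inf <= 1} and over-approximates
# at the ReLU via the interval hull.

def affine_image(center, gens, W, b):
    """Exact image of a zonotope under y = W x + b (linear maps are exact)."""
    matvec = lambda v: [sum(w * x for w, x in zip(row, v)) for row in W]
    c = [ci + bi for ci, bi in zip(matvec(center), b)]
    G = [matvec(g) for g in gens]  # each generator maps through W
    return c, G

def interval_hull(center, gens):
    """Tightest axis-aligned box containing the zonotope."""
    rad = [sum(abs(g[i]) for g in gens) for i in range(len(center))]
    return ([c - r for c, r in zip(center, rad)],
            [c + r for c, r in zip(center, rad)])

# Unit-box input set around the origin; toy 2x2 layer (illustrative weights).
c, G = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
W, b = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]
c, G = affine_image(c, G, W, b)
lo, hi = interval_hull(c, G)
# ReLU is monotone, so it maps box bounds elementwise (over-approximation):
lo = [max(0.0, v) for v in lo]
hi = [max(0.0, v) for v in hi]
print(lo, hi)  # a box guaranteed to contain the layer's true image
```

A safety check would then test this output set for intersection with an unsafe region; the paper performs the analogous check without the interval relaxation, via a differentiable MILP-based collision check.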
Related papers
- Efficient Reachability Analysis for Convolutional Neural Networks Using Hybrid Zonotopes [4.32258850473064]
Existing set propagation-based reachability analysis methods for feedforward neural networks often struggle to achieve both scalability and accuracy.
This work presents a novel set-based approach for computing the reachable sets of convolutional neural networks.
arXiv Detail & Related papers (2025-03-13T19:45:26Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
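The #DNN-Verification problem asks for an exact count of property-violating inputs. The toy sketch below is not the paper's counting algorithm; it only makes the problem statement concrete by exhaustively counting violations of an invented two-neuron ReLU network over a discretized input grid.

```python
# Toy version of the #DNN-Verification counting problem: exhaustively
# count the inputs of a tiny, invented ReLU network that violate a
# hypothetical safety property. Real instances count over continuous
# input sets and need exact methods; grid enumeration is only a sketch.

def relu_net(x1, x2):
    h1 = max(0.0, x1 - x2)      # illustrative fixed weights
    h2 = max(0.0, x1 + x2)
    return h1 + 2.0 * h2

# Hypothetical property: output should stay at or below ~3 on [-1, 1]^2.
grid = [i / 10.0 for i in range(-10, 11)]   # 21 x 21 discretized inputs
violations = sum(1 for a in grid for b in grid if relu_net(a, b) > 2.95)
print(violations)  # exact count over this finite grid
```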
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
They can remain brittle with respect to input adversarial perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z)
- OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems [31.3812947670948]
We present OVERT: a sound algorithm for safety verification of neural network control policies.
The central concept of OVERT is to abstract nonlinear functions with a set of optimally tight piecewise linear bounds.
OVERT compares favorably to existing methods both in runtime and in tightness of the computed reachable sets.
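OVERT's key idea is sandwiching a nonlinear function between piecewise-linear bounds. The sketch below is a simplified stand-in for OVERT's optimally tight construction: on [0, pi], where sin is concave, chords between sample points give a sound lower bound and tangent lines give a sound upper bound.

```python
import math

# Simplified stand-in for OVERT-style piecewise-linear abstraction of a
# nonlinear function. On [0, pi], sin is concave, so chords between
# sample points under-approximate it and tangent lines over-approximate
# it. (OVERT itself computes optimally tight bounds; this does not.)

def chord_lower(f, xs):
    """Piecewise-linear lower bound from chords between sample points."""
    segs = []
    for a, b in zip(xs, xs[1:]):
        m = (f(b) - f(a)) / (b - a)
        segs.append((a, b, m, f(a) - m * a))     # y = m*x + c per segment
    def lb(x):
        for a, b, m, c in segs:
            if a <= x <= b:
                return m * x + c
        raise ValueError("x outside the bounded domain")
    return lb

def tangent_upper(f, df, xs):
    """Piecewise-linear upper bound: pointwise min over tangent lines."""
    return lambda x: min(f(p) + df(p) * (x - p) for p in xs)

xs = [0.0, math.pi / 2, math.pi]                 # coarse abstraction points
lb = chord_lower(math.sin, xs)
ub = tangent_upper(math.sin, math.cos, xs)
x = 1.0
print(lb(x), math.sin(x), ub(x))                 # lb <= sin <= ub on [0, pi]
```

A verifier can then reason over the linear bound segments instead of the original nonlinearity, trading tightness for tractability.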
arXiv Detail & Related papers (2021-08-03T00:41:27Z)
- Constrained Feedforward Neural Network Training via Reachability Analysis [0.0]
It remains an open challenge to train a neural network to obey safety constraints.
This work proposes a constrained method to simultaneously train and verify a feedforward neural network with rectified linear unit (ReLU) nonlinearities.
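The "simultaneously train and verify" idea can be made concrete with a heavily simplified one-dimensional sketch: over-approximate the output range with interval arithmetic, then penalize any intrusion past a safety threshold. The scalar "network", the hinge penalty, and the update rule below are illustrative stand-ins, not the paper's ReLU-network procedure.

```python
# One-dimensional sketch of extracting a training signal from set-based
# verification. The "network" is y = w*x + b; a hinge penalty on the
# over-approximated output range drives the gain toward a provably safe
# value. All numbers are illustrative.

def output_bounds(w, b, lo, hi):
    """Exact interval image of y = w*x + b for x in [lo, hi]."""
    y0, y1 = w * lo + b, w * hi + b
    return min(y0, y1), max(y0, y1)

def safety_loss(w, b, lo, hi, threshold=1.0):
    """Hinge penalty: positive iff the output interval can exceed threshold."""
    _, y_hi = output_bounds(w, b, lo, hi)
    return max(0.0, y_hi - threshold)

w, b = 2.0, 0.0                      # initially unsafe gain
for _ in range(100):
    if safety_loss(w, b, -1.0, 1.0) == 0.0:
        break                        # verified safe: output box <= threshold
    w -= 0.05                        # crude descent step on the penalty
print(w, safety_loss(w, b, -1.0, 1.0))
```

When the loop exits via the break, the zero penalty is itself a verification certificate: the interval over-approximation, and hence the true output set, respects the constraint.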
arXiv Detail & Related papers (2021-07-16T04:03:01Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier [1.1011268090482575]
We introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs.
We use a convex solver not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions.
arXiv Detail & Related papers (2020-06-18T21:33:07Z)
- Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming [19.51345816555571]
We propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback.
We show that we can compute these approximate reachable sets using semidefinite programming.
We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system.
arXiv Detail & Related papers (2020-04-16T18:48:25Z)
- Reachability Analysis for Feed-Forward Neural Networks using Face Lattices [10.838397735788245]
We propose a parallelizable technique to compute the exact reachable set of a neural network for a given input set.
Our approach is capable of constructing the complete input set given an output set, so that any input that leads to safety violation can be tracked.
arXiv Detail & Related papers (2020-03-02T22:23:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.