Robustness Analysis of Neural Networks via Efficient Partitioning with
Applications in Control Systems
- URL: http://arxiv.org/abs/2010.00540v2
- Date: Mon, 7 Dec 2020 17:41:30 GMT
- Title: Robustness Analysis of Neural Networks via Efficient Partitioning with
Applications in Control Systems
- Authors: Michael Everett, Golnaz Habibi, Jonathan P. How
- Abstract summary: Neural networks (NNs) are now routinely implemented on systems that must operate in uncertain environments.
This paper unifies propagation and partition approaches to provide a family of robustness analysis algorithms.
New partitioning techniques are aware of their current bound estimates and desired boundary shape.
- Score: 45.35808135708894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks (NNs) are now routinely implemented on systems that must
operate in uncertain environments, but the tools for formally analyzing how
this uncertainty propagates to NN outputs are not yet commonplace. Computing
tight bounds on NN output sets (given an input set) provides a measure of
confidence associated with the NN decisions and is essential to deploy NNs on
safety-critical systems. Recent works approximate the propagation of sets
through nonlinear activations or partition the uncertainty set to provide a
guaranteed outer bound on the set of possible NN outputs. However, the bound
looseness causes excessive conservatism and/or the computation is too slow for
online analysis. This paper unifies propagation and partition approaches to
provide a family of robustness analysis algorithms that give tighter bounds
than existing works for the same amount of computation time (or reduced
computational effort for a desired accuracy level). Moreover, we provide new
partitioning techniques that are aware of their current bound estimates and
desired boundary shape (e.g., lower bounds, weighted $\ell_\infty$-ball, convex
hull), leading to further improvements in the computation-tightness tradeoff.
The paper demonstrates the tighter bounds and reduced conservatism of the
proposed robustness analysis framework with examples from model-free RL and
forward kinematics learning.
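The paper's core idea of unifying set propagation with input-set partitioning can be illustrated with a minimal interval bound propagation (IBP) sketch. This is a simplified stand-in, not the paper's algorithm: the two-layer network, the uniform gridding strategy, and all function names below are illustrative assumptions. Because interval arithmetic is inclusion-monotone, the union of per-cell bounds is never looser than the one-shot bound over the whole input box, which is the mechanism behind the computation-tightness tradeoff described above.

```python
import numpy as np

def ibp_forward(W1, b1, W2, b2, lo, hi):
    """Propagate an axis-aligned input box [lo, hi] through a
    two-layer ReLU network using interval bound propagation."""
    # Affine layer: split weights into positive/negative parts.
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    z_lo = Wp @ lo + Wn @ hi + b1
    z_hi = Wp @ hi + Wn @ lo + b1
    # ReLU is monotone, so it maps interval endpoints directly.
    a_lo, a_hi = np.maximum(z_lo, 0), np.maximum(z_hi, 0)
    Wp, Wn = np.maximum(W2, 0), np.minimum(W2, 0)
    return Wp @ a_lo + Wn @ a_hi + b2, Wp @ a_hi + Wn @ a_lo + b2

def partitioned_ibp(W1, b1, W2, b2, lo, hi, splits=4):
    """Uniformly partition the input box along each dimension and
    take the union (elementwise min/max) of the per-cell bounds."""
    edges = [np.linspace(l, h, splits + 1) for l, h in zip(lo, hi)]
    out_lo, out_hi = None, None
    for idx in np.ndindex(*(splits,) * len(lo)):
        cell_lo = np.array([edges[d][i] for d, i in enumerate(idx)])
        cell_hi = np.array([edges[d][i + 1] for d, i in enumerate(idx)])
        c_lo, c_hi = ibp_forward(W1, b1, W2, b2, cell_lo, cell_hi)
        out_lo = c_lo if out_lo is None else np.minimum(out_lo, c_lo)
        out_hi = c_hi if out_hi is None else np.maximum(out_hi, c_hi)
    return out_lo, out_hi

# Random toy network and input set (illustrative only).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

coarse = ibp_forward(W1, b1, W2, b2, lo, hi)
fine = partitioned_ibp(W1, b1, W2, b2, lo, hi)
print("one-shot IBP bound :", coarse)
print("partitioned bound  :", fine)
```

The paper's contribution goes further: rather than gridding uniformly as above, its partitioners refine adaptively, guided by the current bound estimate and the desired boundary shape.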
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Safety Verification for Neural Networks Based on Set-boundary Analysis [5.487915758677295]
Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicles.
We propose a set-boundary reachability method to investigate the safety verification problem of NNs from a topological perspective.
arXiv Detail & Related papers (2022-10-09T05:55:37Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Backward Reachability Analysis for Neural Feedback Loops [40.989393438716476]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present an algorithm to iteratively find backprojection (BP) set estimates over a given time horizon and demonstrate the ability to reduce conservativeness by up to 88% with low additional computational cost.
arXiv Detail & Related papers (2022-04-14T01:13:14Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs)
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Neural network training under semidefinite constraints [0.0]
This paper is concerned with the training of neural networks (NNs) under semidefinite constraints.
Semidefinite constraints can be used to verify interesting properties for NNs.
In experiments, we demonstrate the superior efficiency of our training method over previous approaches.
arXiv Detail & Related papers (2022-01-03T13:10:49Z)
- Reachability Analysis of Neural Feedback Loops [34.94930611635459]
This work focuses on estimating the forward reachable set of neural feedback loops (closed-loop systems with NN controllers).
Recent work provides bounds on these reachable sets, but the computationally tractable approaches yield overly conservative bounds.
This work bridges the gap by formulating a convex optimization problem for the reachability analysis of closed-loop systems with NN controllers.
arXiv Detail & Related papers (2021-08-09T16:11:57Z)
- Efficient Reachability Analysis of Closed-Loop Systems with Neural Network Controllers [39.27951763459939]
This work focuses on estimating the forward reachable set of closed-loop systems with NN controllers.
Recent work provides bounds on these reachable sets, yet the computationally efficient approaches provide overly conservative bounds.
This work bridges the gap by formulating a convex optimization problem for reachability analysis for closed-loop systems with NN controllers.
arXiv Detail & Related papers (2021-01-05T22:30:39Z)
- Chance-Constrained Control with Lexicographic Deep Reinforcement Learning [77.34726150561087]
This paper proposes a lexicographic Deep Reinforcement Learning (DeepRL)-based approach to chance-constrained Markov Decision Processes.
A lexicographic version of the well-known DeepRL algorithm DQN is also proposed and validated via simulations.
arXiv Detail & Related papers (2020-10-19T13:09:14Z)
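Several of the related papers above estimate forward reachable sets of neural feedback loops and note that tractable approaches yield overly conservative bounds. A one-step interval sketch makes the source of that conservatism concrete: bounding the control input independently of the state ignores their correlation. The double-integrator dynamics, the random controller weights, and all function names here are illustrative assumptions, not any paper's method.

```python
import numpy as np

def relu_ibp(Ws, bs, lo, hi):
    """Interval bound propagation through a feedforward ReLU network."""
    for i, (W, b) in enumerate(zip(Ws, bs)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(Ws) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

def reach_step(A, B, Ws, bs, x_lo, x_hi):
    """Over-approximate the one-step reachable box of x+ = A x + B u,
    u = NN(x). Bounding u separately from x discards their coupling,
    which is what makes this kind of bound conservative."""
    u_lo, u_hi = relu_ibp(Ws, bs, x_lo, x_hi)
    Ap, An = np.maximum(A, 0), np.minimum(A, 0)
    Bp, Bn = np.maximum(B, 0), np.minimum(B, 0)
    return (Ap @ x_lo + An @ x_hi + Bp @ u_lo + Bn @ u_hi,
            Ap @ x_hi + An @ x_lo + Bp @ u_hi + Bn @ u_lo)

# Double-integrator dynamics with a small random ReLU controller.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
rng = np.random.default_rng(1)
Ws = [rng.standard_normal((4, 2)) * 0.5, rng.standard_normal((1, 4)) * 0.5]
bs = [np.zeros(4), np.zeros(1)]

x_lo, x_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
for t in range(3):
    x_lo, x_hi = reach_step(A, B, Ws, bs, x_lo, x_hi)
    print(f"t={t + 1}: box [{x_lo}, {x_hi}]")
```

The convex-optimization formulations cited above tighten exactly this step by encoding the dependence of u on x, rather than treating the two intervals independently.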
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.