Generating Probabilistic Safety Guarantees for Neural Network
Controllers
- URL: http://arxiv.org/abs/2103.01203v1
- Date: Mon, 1 Mar 2021 18:48:21 GMT
- Title: Generating Probabilistic Safety Guarantees for Neural Network
Controllers
- Authors: Sydney M. Katz, Kyle D. Julian, Christopher A. Strong, Mykel J.
Kochenderfer
- Abstract summary: We use a dynamics model to determine the output properties that must hold for a neural network controller to operate safely.
We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy.
We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks.
- Score: 30.34898838361206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks serve as effective controllers in a variety of complex
settings due to their ability to represent expressive policies. The complex
nature of neural networks, however, makes their output difficult to verify and
predict, which limits their use in safety-critical applications. While
simulations provide insight into the performance of neural network controllers,
they are not enough to guarantee that the controller will perform safely in all
scenarios. To address this problem, recent work has focused on formal methods
to verify properties of neural network outputs. For neural network controllers,
we can use a dynamics model to determine the output properties that must hold
for the controller to operate safely. In this work, we develop a method to use
the results from neural network verification tools to provide probabilistic
safety guarantees on a neural network controller. We develop an adaptive
verification approach to efficiently generate an overapproximation of the
neural network policy. Next, we modify the traditional formulation of Markov
decision process (MDP) model checking to provide guarantees on the
overapproximated policy given a stochastic dynamics model. Finally, we
incorporate techniques in state abstraction to reduce overapproximation error
during the model checking process. We show that our method is able to generate
meaningful probabilistic safety guarantees for aircraft collision avoidance
neural networks that are loosely inspired by Airborne Collision Avoidance
System X (ACAS X), a family of collision avoidance systems that formulates the
problem as a partially observable Markov decision process (POMDP).
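As a rough illustration of how the pieces of this pipeline fit together, the following Python sketch discretizes a one-dimensional abstract state space into cells, stands in for the verifier query that would overapproximate the network's advisories on each cell, and then runs the worst-case (maximal) reachability recursion of MDP model checking to bound the probability of entering an unsafe cell. Every name, the dynamics, and the unsafe set are hypothetical placeholders, not the collision-avoidance systems analyzed in the paper.
```python
import numpy as np

# Minimal sketch under illustrative assumptions: the discretization,
# advisory set, dynamics, and unsafe set are made up, and the call that
# would query a neural network verification tool is stubbed out.

N_CELLS = 50          # cells in a 1-D abstraction of the state space
ACTIONS = (0, 1, 2)   # e.g., {clear-of-conflict, climb, descend}
HORIZON = 20          # number of decision steps to check

def allowed_actions(cell):
    """Stand-in for a verification query.

    In the approach above, a neural network verifier is asked which
    advisories the network can output anywhere inside `cell`; returning
    every action is the loosest possible overapproximation of the policy.
    """
    return ACTIONS

def transition(cell, action):
    """Toy stochastic dynamics abstraction: list of (next_cell, prob)."""
    up = min(cell + 1, N_CELLS - 1)
    down = max(cell - 1, 0)
    target = {0: cell, 1: up, 2: down}[action]
    return [(target, 0.8), (cell, 0.2)]

def unsafe(cell):
    """Hypothetical unsafe region (e.g., an NMAC cell)."""
    return cell == 0

# Finite-horizon maximal reachability: bound the probability of ever
# entering the unsafe set, maximizing over every action the
# overapproximated policy might select in each cell.
p_unsafe = np.array([1.0 if unsafe(c) else 0.0 for c in range(N_CELLS)])
for _ in range(HORIZON):
    updated = p_unsafe.copy()
    for c in range(N_CELLS):
        if unsafe(c):
            continue  # unsafe cells are absorbing with probability 1
        updated[c] = max(
            sum(p * p_unsafe[nxt] for nxt, p in transition(c, a))
            for a in allowed_actions(c)
        )
    p_unsafe = updated

print("upper bound on P(unsafe) from each starting cell:")
print(p_unsafe.round(3))
```
Tightening `allowed_actions` with actual verifier results, or refining the cells where the overapproximation is loose (the adaptive verification and state abstraction steps described in the abstract), would shrink the resulting bound.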
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
However, Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems remain challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its ability to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Probabilistic Reach-Avoid for Bayesian Neural Networks [71.67052234622781]
We show that an optimal synthesis algorithm can provide more than a four-fold increase in the number of certifiable states.
It also provides more than a three-fold increase in the average guaranteed reach-avoid probability.
arXiv Detail & Related papers (2023-10-03T10:52:21Z)
- Distributionally Robust Statistical Verification with Imprecise Neural Networks [4.094049541486327]
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification.
arXiv Detail & Related papers (2023-08-28T18:06:24Z)
- Safety Verification of Neural Network Control Systems Using Guaranteed Neural Network Model Reduction [3.0839245814393728]
A concept of model reduction precision is proposed to describe the guaranteed distance between the outputs of a neural network and its reduced-size version.
A reachability-based algorithm is proposed to accurately compute the model reduction precision.
An algorithm to compute the reachable set of the original system is also developed, enabling a much more computationally efficient safety verification process.
arXiv Detail & Related papers (2023-01-17T03:25:57Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- RoMA: a Method for Neural Network Robustness Measurement and Assessment [0.0]
We present a new statistical method called Robustness Measurement and Assessment (RoMA).
RoMA determines the probability that a random input perturbation might cause misclassification.
One interesting insight obtained through this work is that, in a classification network, different output labels can exhibit very different robustness levels.
arXiv Detail & Related papers (2021-10-21T12:01:54Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Towards Repairing Neural Networks Correctly [6.600380575920419]
We propose a runtime verification method to ensure the correctness of neural networks.
Experimental results show that our approach effectively generates neural networks that are guaranteed to satisfy the specified properties.
arXiv Detail & Related papers (2020-12-03T12:31:07Z)
- Safety Verification of Neural Network Controlled Systems [0.0]
We propose a system-level approach for verifying the safety of neural network controlled systems.
We assume a generic model for the controller that can capture both simple and complex behaviours.
We perform a reachability analysis that soundly approximates the reachable states of the overall system.
arXiv Detail & Related papers (2020-11-10T15:26:38Z)
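The entry above ("Safety Verification of Neural Network Controlled Systems") hinges on soundly over-approximating the reachable states of the closed loop. A minimal interval-arithmetic sketch of that idea is given below; the linear plant, the tiny ReLU controller, the horizon, and the unsafe set are all invented for illustration and do not come from that paper.
```python
import numpy as np

# Toy closed-loop system: a double-integrator-style plant driven by a tiny
# ReLU controller. All weights, dynamics, and the unsafe set are
# illustrative assumptions.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])            # plant update x' = A x + B u
B = np.array([[0.005],
              [0.1]])
W1 = np.array([[0.5, -0.3],
               [-0.2, 0.4]])
b1 = np.array([0.1, -0.1])
W2 = np.array([[0.8, -0.6]])
b2 = np.array([0.0])

def interval_affine(lo, hi, W, b):
    """Sound interval bounds of W x + b when x lies in the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def controller_bounds(lo, hi):
    """Interval bounds on the ReLU controller's output over a state box."""
    l1, u1 = interval_affine(lo, hi, W1, b1)
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)   # ReLU
    return interval_affine(l1, u1, W2, b2)

# Propagate an axis-aligned box of states through the closed loop and check
# each over-approximated reachable box against an unsafe half-space.
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
for k in range(10):
    u_lo, u_hi = controller_bounds(lo, hi)
    x_lo, x_hi = interval_affine(lo, hi, A, np.zeros(2))
    a_lo, a_hi = interval_affine(u_lo, u_hi, B, np.zeros(2))
    lo, hi = x_lo + a_lo, x_hi + a_hi
    if hi[0] >= 2.0:
        print(f"step {k + 1}: cannot rule out the unsafe set (position >= 2.0)")
        break
else:
    print("reachable boxes stay clear of the unsafe set for 10 steps")
```
Practical tools use much tighter abstractions than plain interval propagation, which loses precision quickly over long horizons; the sketch only shows the overall shape of the computation.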
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.