Safety Verification of Neural Network Controlled Systems
- URL: http://arxiv.org/abs/2011.05174v1
- Date: Tue, 10 Nov 2020 15:26:38 GMT
- Title: Safety Verification of Neural Network Controlled Systems
- Authors: Arthur Clavière, Eric Asselin, Christophe Garion (ISAE-SUPAERO),
Claire Pagetti (ANITI)
- Abstract summary: We propose a system-level approach for verifying the safety of neural network controlled systems.
We assume a generic model for the controller that can capture both simple and complex behaviours.
We perform a reachability analysis that soundly approximates the reachable states of the overall system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a system-level approach for verifying the safety of
neural network controlled systems, combining a continuous-time physical system
with a discrete-time neural network-based controller. We assume a generic model
for the controller that can capture both simple and complex behaviours
involving neural networks. Based on this model, we perform a reachability
analysis that soundly approximates the reachable states of the overall system,
allowing us to achieve a formal proof of safety. To this end, we leverage both
validated simulation to approximate the behaviour of the physical system and
abstract interpretation to approximate the behaviour of the controller. We
evaluate the applicability of our approach using a real-world use case.
Moreover, we show that our approach can provide valuable information when the
system cannot be proved totally safe.
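The interplay between validated simulation for the plant and abstract interpretation for the controller can be illustrated with a minimal, interval-based sketch. The code below is not the authors' implementation: it assumes a toy one-dimensional plant x' = -x + u, a tiny ReLU controller with made-up weights, and a crude interval Euler step padded by an assumed truncation-error bound that stands in for a proper validated integrator.

```python
# Minimal sketch (not the paper's implementation): interval reachability for a
# closed loop made of a 1-D plant and a tiny ReLU controller.  The plant model,
# network weights, step sizes and error bound below are illustrative assumptions.
import numpy as np

# -- Abstract interpretation of the controller with the interval domain --------
def relu_layer_bounds(W, b, lo, hi):
    """Propagate an input box [lo, hi] through y = relu(W x + b)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_lo = W_pos @ lo + W_neg @ hi + b
    pre_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)

def controller_bounds(weights, lo, hi):
    """Interval output of a feed-forward ReLU controller (affine last layer)."""
    for W, b in weights[:-1]:
        lo, hi = relu_layer_bounds(W, b, lo, hi)
    W, b = weights[-1]
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

# -- Conservative plant step (crude stand-in for validated simulation) ---------
def plant_step_bounds(x_lo, x_hi, u_lo, u_hi, dt, err=1e-3):
    """One interval Euler step of x' = -x + u, padded by an assumed error bound."""
    nxt_lo = x_lo + dt * (-x_hi + u_lo) - err
    nxt_hi = x_hi + dt * (-x_lo + u_hi) + err
    return nxt_lo, nxt_hi

# -- Closed-loop reachability over a few control cycles ------------------------
weights = [(np.array([[1.0], [-1.0]]), np.zeros(2)),   # hidden ReLU layer
           (np.array([[-0.5, 0.5]]), np.zeros(1))]     # affine output layer
x_lo, x_hi = np.array([0.4]), np.array([0.6])          # initial state box
unsafe_lo = 2.0                                        # unsafe region: x >= 2
for cycle in range(10):                                # one controller query per cycle
    u_lo, u_hi = controller_bounds(weights, x_lo, x_hi)
    for _ in range(5):                                 # plant steps within one cycle
        x_lo, x_hi = plant_step_bounds(x_lo, x_hi, u_lo, u_hi, dt=0.02)
    assert x_hi[0] < unsafe_lo, "cannot prove safety for this cycle"
print("proved safe over the horizon:", x_lo, x_hi)
```

If the assertion fails for some cycle, the over-approximation reaches the unsafe set and the run reports the first cycle at which safety can no longer be proved, which mirrors the kind of partial information mentioned in the abstract when the system cannot be proved totally safe.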
Related papers
- Verification of Neural Network Control Systems in Continuous Time [1.5695847325697108]
We develop the first verification method for continuously-actuated neural network control systems.
We accomplish this by adding a level of abstraction to model the neural network controller.
We demonstrate the approach's efficacy by applying it to a vision-based autonomous airplane taxiing system.
arXiv Detail & Related papers (2024-05-31T19:39:48Z)
- Distributionally Robust Statistical Verification with Imprecise Neural Networks [4.094049541486327]
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification.
arXiv Detail & Related papers (2023-08-28T18:06:24Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States [84.24300005271185]
We propose a control filter that wraps any reference policy and effectively encourages the system to stay in-distribution with respect to offline-collected safe demonstrations.
Our method is effective for two different visuomotor control tasks in simulation environments, including both top-down and egocentric view settings.
arXiv Detail & Related papers (2023-01-27T22:28:19Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy (a minimal CBF-filter sketch appears after this list).
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Controllability of Coarsely Measured Networked Linear Dynamical Systems (Extended Version) [19.303541162361746]
We consider the controllability of large-scale linear networked dynamical systems when complete knowledge of network structure is unavailable.
We provide conditions under which average controllability of the fine-scale system can be well approximated by average controllability of the (synthesized, reduced-order) coarse-scale system.
arXiv Detail & Related papers (2022-06-21T17:50:09Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Formal Verification of Stochastic Systems with ReLU Neural Network Controllers [22.68044012584378]
We address the problem of formal safety verification for cyber-physical systems equipped with ReLU neural network (NN) controllers.
Our goal is to find the set of initial states from where, with a predetermined confidence, the system will not reach an unsafe configuration.
arXiv Detail & Related papers (2021-03-08T23:53:13Z)
- Generating Probabilistic Safety Guarantees for Neural Network Controllers [30.34898838361206]
We use a dynamics model to determine the output properties that must hold for a neural network controller to operate safely.
We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy.
We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks.
arXiv Detail & Related papers (2021-03-01T18:48:21Z)
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires fewer computational resources and can be trained in parallel.
The overall system output, obtained by aggregating the outputs of the individual networks, can be tuned by the user to trade off false positives against false negatives.
arXiv Detail & Related papers (2020-02-11T13:58:16Z)
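As referenced in the entry on recursively feasible safe online learning above, a control barrier function (CBF) safety filter can be sketched in a few lines. The example below is an illustrative assumption, not that paper's model-uncertainty-aware method: it applies a plain CBF condition to a scalar-input control-affine system and projects a reference input onto the resulting half-space constraint in closed form.

```python
# Minimal sketch (assumed example, not the paper's uncertainty-aware method):
# a plain CBF safety filter for a scalar-input control-affine system
# x' = f(x) + g(x) u with safe set {x : h(x) >= 0}.
def cbf_filter(u_ref, Lf_h, Lg_h, h, alpha=1.0):
    """Project u_ref onto {u : Lf_h + Lg_h * u + alpha * h >= 0}.

    Returns the closest admissible input, or raises if the constraint is
    pointwise infeasible (the feasibility issue that motivates the paper's
    event-triggered online data collection).
    """
    a, b = Lg_h, -(Lf_h + alpha * h)     # constraint reads a * u >= b
    if a == 0.0:
        if b > 0.0:
            raise RuntimeError("CBF condition infeasible at this state")
        return u_ref                     # constraint holds for any input
    u_bound = b / a
    if a > 0.0:                          # constraint is u >= u_bound
        return max(u_ref, u_bound)
    return min(u_ref, u_bound)           # a < 0: constraint is u <= u_bound

# Toy closed loop: single integrator x' = u, stay below x_max (h = x_max - x).
x, x_max, dt = 0.0, 1.0, 0.05
for _ in range(100):
    u_ref = 2.0                          # reference policy pushes upward
    u = cbf_filter(u_ref, Lf_h=0.0, Lg_h=-1.0, h=x_max - x)
    x += dt * u
print("final state (stays below x_max):", round(x, 3))
```

In the toy loop the filtered input reduces to min(u_ref, x_max - x), so the state approaches the boundary x_max without crossing it, which is exactly the invariance property a CBF-based safety filter is meant to enforce.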