Verification of Neural Network Control Systems in Continuous Time
- URL: http://arxiv.org/abs/2406.00157v1
- Date: Fri, 31 May 2024 19:39:48 GMT
- Title: Verification of Neural Network Control Systems in Continuous Time
- Authors: Ali ArjomandBigdeli, Andrew Mata, Stanley Bak
- Abstract summary: We develop the first verification method for continuously-actuated neural network control systems.
We accomplish this by adding a level of abstraction to model the neural network controller.
We demonstrate the approach's efficacy by applying it to a vision-based autonomous airplane taxiing system.
- Score: 1.5695847325697108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network controllers are currently being proposed for use in many safety-critical tasks. Most analysis methods for neural network control systems assume a fixed control period. In control theory, higher frequency usually improves performance. However, for current analysis methods, increasing the frequency complicates verification. In the limit, when actuation is performed continuously, no existing neural network control systems verification methods are able to analyze the system. In this work, we develop the first verification method for continuously-actuated neural network control systems. We accomplish this by adding a level of abstraction to model the neural network controller. The abstraction is a piecewise linear model with added noise to account for local linearization error. The soundness of the abstraction can be checked using open-loop neural network verification tools, although we demonstrate bottlenecks in existing tools when handling the required specifications. We demonstrate the approach's efficacy by applying it to a vision-based autonomous airplane taxiing system and compare with a fixed frequency analysis baseline.
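To make the abstraction idea concrete, the sketch below builds a local affine model of a controller over a box of states, with an empirical noise bound covering the local linearization error. This is a minimal illustration, not the paper's method: the function name `affine_abstraction`, the finite-difference Jacobian, and the sampling-based error estimate are all assumptions made here; a *sound* bound would instead be certified with an open-loop neural network verification tool, as the abstract describes.

```python
import numpy as np

def affine_abstraction(controller, center, radius, n_samples=1000, seed=0):
    """Build a local affine model u ~ A x + b plus a noise bound eps such that
    |controller(x) - (A x + b)| <= eps (empirically, via sampling) for x in
    the box center +/- radius. Illustrative only: a sound eps would come from
    an open-loop neural network verifier, not from sampling."""
    center = np.asarray(center, dtype=float)
    n = center.size
    # Jacobian of the controller at the box center via finite differences.
    h = 1e-6
    u0 = np.atleast_1d(controller(center))
    A = np.zeros((u0.size, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        A[:, i] = (np.atleast_1d(controller(center + e)) - u0) / h
    b = u0 - A @ center
    # Empirically estimate the linearization error over the box by sampling.
    rng = np.random.default_rng(seed)
    xs = center + rng.uniform(-radius, radius, size=(n_samples, n))
    errs = [np.max(np.abs(np.atleast_1d(controller(x)) - (A @ x + b)))
            for x in xs]
    eps = float(max(errs))
    return A, b, eps
```

The returned triple `(A, b, eps)` corresponds to a piecewise linear model with added noise: within the box, the controller output lies in `A x + b + [-eps, eps]`.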
Related papers
- Identification For Control Based on Neural Networks: Approximately Linearizable Models [42.15267357325546]
This work presents a control-oriented identification scheme for efficient control design and stability analysis of nonlinear systems.
Neural networks are used to identify a discrete-time nonlinear state-space model to approximate time-domain input-output behavior.
The network is constructed such that the identified model is approximately linearizable by feedback, ensuring that the control law trivially follows from the learning stage.
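The sense in which the control law "trivially follows" can be sketched with a toy model. The scalar, control-affine structure x_{k+1} = f(x_k) + g(x_k)·u_k below is an assumption made here for illustration, not the paper's actual network parametrization: with that structure, the linearizing feedback u = (v − f(x))/g(x) drives the next state exactly to the reference v.

```python
import numpy as np

# Toy control-affine discrete-time model x_{k+1} = f(x_k) + g(x_k) * u_k.
# f and g are stand-ins for learned networks (g is kept bounded away from
# zero so the feedback is well defined). Illustrative assumption only.

def f(x):
    return np.tanh(x)          # stand-in for a learned drift term

def g(x):
    return 1.0 + 0.1 * np.cos(x)  # stand-in for a learned input gain

def linearizing_feedback(x, v):
    # Cancels the nonlinearity: substituting u into the model gives x_{k+1} = v.
    return (v - f(x)) / g(x)

def step(x, u):
    return f(x) + g(x) * u
```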
arXiv Detail & Related papers (2024-09-24T08:31:22Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Observer-Feedback-Feedforward Controller Structures in Reinforcement
Learning [0.0]
The paper proposes the use of structured neural networks for reinforcement-learning-based nonlinear adaptive control.
The focus is on partially observable systems, with separate neural networks for the state and feedforward observer and the state feedback and feedforward controller.
arXiv Detail & Related papers (2023-04-20T12:59:21Z) - A Neurosymbolic Approach to the Verification of Temporal Logic
Properties of Learning enabled Control Systems [0.0]
We present a model for the verification of Neural Network (NN) controllers for general Signal Temporal Logic (STL) specifications.
We also propose a new approach for neural network controllers with general activation functions.
arXiv Detail & Related papers (2023-03-07T04:08:33Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
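The core interval bound propagation step behind IBP-style certification can be sketched for one dense layer as follows. This is a generic illustration of interval arithmetic through an affine map and a ReLU, not the QA-IBP algorithm itself, which additionally models the quantization operators.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> W x + b.
    Standard interval arithmetic: positive entries of W pick up the matching
    endpoint, negative entries pick up the opposite one."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```

Composing these layer by layer yields an outer bound on the network's output set for any input box, which is what the certification then checks against the robustness specification.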
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Simple initialization and parametrization of sinusoidal networks via
their kernel bandwidth [92.25666446274188]
Sinusoidal neural networks, i.e. networks with sinusoidal activation functions, have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
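The role of the bandwidth-controlling scale can be sketched with a single sinusoidal layer. The layer form and the parameter name `omega` are assumptions made here for illustration; the paper's contribution is the kernel-level analysis relating such a frequency scale to the low-pass bandwidth, not this particular layer.

```python
import numpy as np

def sinusoidal_layer(W, b, omega):
    """First layer of a sinusoidal network: x -> sin(omega * (W x + b)).
    Larger omega admits higher-frequency components in the fitted function;
    the paper relates this scale to the bandwidth of the induced
    (approximately low-pass) neural tangent kernel."""
    return lambda x: np.sin(omega * (W @ x + b))
```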
arXiv Detail & Related papers (2022-11-26T07:41:48Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Online Detection of Vibration Anomalies Using Balanced Spiking Neural
Networks [2.9439848714137447]
We propose a neuromorphic approach to perform vibration analysis using spiking neural networks.
We present a spike-based end-to-end pipeline able to detect system anomalies from vibration data.
We show that the proposed method matches or exceeds state-of-the-art performance on two publicly available data sets.
arXiv Detail & Related papers (2021-06-01T18:00:02Z) - Discrete-time Contraction-based Control of Nonlinear Systems with
Parametric Uncertainties using Neural Networks [6.804154699470765]
This work develops an approach to discrete-time contraction analysis and control using neural networks.
The methodology involves training a neural network to learn a contraction metric and feedback gain.
The resulting contraction-based controller embeds the trained neural network and is capable of achieving efficient tracking of time-varying references.
arXiv Detail & Related papers (2021-05-12T05:07:34Z) - Performance Bounds for Neural Network Estimators: Applications in Fault
Detection [2.388501293246858]
We exploit recent results in quantifying the robustness of neural networks to construct and tune a model-based anomaly detector.
In tuning, we specifically provide upper bounds on the rate of false alarms expected under normal operation.
arXiv Detail & Related papers (2021-03-22T19:23:08Z) - A Novel Anomaly Detection Algorithm for Hybrid Production Systems based
on Deep Learning and Timed Automata [73.38551379469533]
DAD:DeepAnomalyDetection is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
arXiv Detail & Related papers (2020-10-29T08:27:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.