Verification of Neural-Network Control Systems by Integrating Taylor
Models and Zonotopes
- URL: http://arxiv.org/abs/2112.09197v1
- Date: Thu, 16 Dec 2021 20:46:39 GMT
- Title: Verification of Neural-Network Control Systems by Integrating Taylor
Models and Zonotopes
- Authors: Christian Schilling, Marcelo Forets, Sebastian Guadalupe
- Abstract summary: We study the verification problem for closed-loop dynamical systems with neural-network controllers (NNCS).
We present an algorithm to chain approaches based on Taylor models and zonotopes, yielding a precise reachability algorithm for NNCS.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We study the verification problem for closed-loop dynamical systems with
neural-network controllers (NNCS). This problem is commonly reduced to
computing the set of reachable states. When considering dynamical systems and
neural networks in isolation, there exist precise approaches for that task
based on set representations respectively called Taylor models and zonotopes.
However, the combination of these approaches to NNCS is non-trivial because,
when converting between the set representations, dependency information gets
lost in each control cycle and the accumulated approximation error quickly
renders the result useless. We present an algorithm to chain approaches based
on Taylor models and zonotopes, yielding a precise reachability algorithm for
NNCS. Because the algorithm only acts at the interface of the isolated
approaches, it is applicable to general dynamical systems and neural networks
and can benefit from future advances in these areas. Our implementation
delivers state-of-the-art performance and is the first to successfully analyze
all benchmark problems of an annual reachability competition for NNCS.
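As a minimal illustration of the zonotope side of this interface (a sketch based on the standard definition of a zonotope, not the authors' implementation), a zonotope is a center plus a matrix of generators, and affine neural-network layers map zonotopes to zonotopes exactly, so no dependency information is lost there; only nonlinear activations and the conversion to and from Taylor models require over-approximation:

```python
import numpy as np

class Zonotope:
    """The set {c + G @ xi : xi in [-1, 1]^m} for center c and generator matrix G."""

    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)
        self.G = np.asarray(generators, dtype=float)

    def affine(self, W, b):
        # An affine layer x -> W x + b maps a zonotope to a zonotope exactly:
        # the center becomes W c + b and the generators become W G.
        return Zonotope(W @ self.c + b, W @ self.G)

    def interval_hull(self):
        # Tightest enclosing box: c +/- the row-wise sum of |generators|.
        r = np.sum(np.abs(self.G), axis=1)
        return self.c - r, self.c + r

# Propagate the box [-0.1, 0.1]^2 through one (hypothetical) affine layer.
Z = Zonotope([0.0, 0.0], 0.1 * np.eye(2))
W = np.array([[1.0, 2.0], [0.5, -1.0]])
b = np.array([1.0, 0.0])
lo, hi = Z.affine(W, b).interval_hull()   # lo = [0.7, -0.15], hi = [1.3, 0.15]
```

The paper's contribution concerns the interface itself: converting between Taylor models and zonotopes in each control cycle so that dependency information like the correlations encoded in `G` survives instead of being flattened into a box.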
Related papers
- Interacting Particle Systems on Networks: joint inference of the network
and the interaction kernel [8.535430501710712]
We jointly infer the weight matrix of the network and the interaction kernel that determines the rules of the interactions between agents.
We use two algorithms, one of which is a new operator-regression algorithm based on alternating least squares.
Both algorithms are scalable, with conditions guaranteeing identifiability and well-posedness.
arXiv Detail & Related papers (2024-02-13T12:29:38Z)
- Model-Based Control with Sparse Neural Dynamics [23.961218902837807]
We propose a new framework for integrated model learning and predictive control.
We show that our framework can deliver better closed-loop performance than existing state-of-the-art methods.
arXiv Detail & Related papers (2023-12-20T06:25:02Z)
- SICNN: Soft Interference Cancellation Inspired Neural Network Equalizers [1.6451639748812472]
We propose a novel neural network (NN)-based approach, referred to as SICNN.
SICNN is designed by deep unfolding a model-based iterative soft interference cancellation (SIC) method.
We compare the bit error ratio performance of the proposed NN-based equalizers with state-of-the-art model-based and NN-based approaches.
arXiv Detail & Related papers (2023-08-24T06:40:54Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Interval Reachability of Nonlinear Dynamical Systems with Neural Network Controllers [5.543220407902113]
This paper proposes a computationally efficient framework, based on interval analysis, for rigorous verification of nonlinear continuous-time dynamical systems with neural network controllers.
Inspired by mixed monotone theory, we embed the closed-loop dynamics into a larger system using an inclusion function of the neural network and a decomposition function of the open-loop system.
We show that one can efficiently compute hyper-rectangular over-approximations of the reachable sets using a single trajectory of the embedding system.
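As a toy sketch of the interval idea (a hand-rolled illustration for the assumed scalar system x' = -x + u, not the paper's mixed-monotone embedding), interval arithmetic serves as a natural inclusion function of the dynamics, and iterating it with an Euler step yields hyper-rectangular over-approximations of the reachable states (ignoring the Euler discretization error here for simplicity):

```python
def euler_step_interval(lo, hi, u_lo, u_hi, dt):
    """One Euler step of x' = -x + u with interval-valued state and input.

    Interval arithmetic is a natural inclusion function of the right-hand side:
    f([lo, hi], [u_lo, u_hi]) is contained in [-hi + u_lo, -lo + u_hi].
    """
    f_lo = -hi + u_lo
    f_hi = -lo + u_hi
    return lo + dt * f_lo, hi + dt * f_hi

# Hyper-rectangular reach set for x(0) in [0.9, 1.1], control u in [-0.05, 0.05].
lo, hi = 0.9, 1.1
for _ in range(10):
    lo, hi = euler_step_interval(lo, hi, -0.05, 0.05, dt=0.1)
# The box contracts toward the origin with the stable dynamics,
# while the interval width grows with the accumulated approximation error.
```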
arXiv Detail & Related papers (2023-01-19T06:46:36Z)
- Backpropagation on Dynamical Networks [0.0]
We propose a network inference method based on the backpropagation through time (BPTT) algorithm commonly used to train recurrent neural networks.
An approximation of local node dynamics is first constructed using a neural network.
Free-run prediction performance with the resulting local models and weights was found to be comparable to that of the true system.
arXiv Detail & Related papers (2022-07-07T05:22:44Z)
- Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the former, we propose a novel algorithm called SAROS that takes into account both kinds of feedback for learning over the sequence of interactions.
The proposed idea of taking the neighbouring lines into account shows statistically significant improvements over the initial approach to fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach for the regularization of neural networks based on the local Rademacher complexity, called LocalDrop.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) is developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
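The min-max formulation can be illustrated with simultaneous gradient descent-ascent on a toy convex-concave objective (a hypothetical scalar example; in the paper both players are neural networks and the objective comes from the linear operator equation of the SEM):

```python
def gradient_descent_ascent(step=0.1, iters=500):
    """Solve min_x max_y f(x, y) = x**2 - y**2 + x*y by simultaneous updates.

    The objective is convex in x and concave in y, with the unique
    saddle point at (0, 0).
    """
    x, y = 1.0, 1.0
    for _ in range(iters):
        gx = 2.0 * x + y        # df/dx
        gy = -2.0 * y + x       # df/dy
        x -= step * gx          # descent step for the minimizing player
        y += step * gy          # ascent step for the maximizing player
    return x, y

x, y = gradient_descent_ascent()   # both iterates converge toward the saddle at 0
```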
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale problems where the model is a deep neural network.
Our algorithm requires a much smaller number of communication rounds in theory.
Our experiments on several datasets show the effectiveness of our algorithm and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.