A Neurosymbolic Approach to the Verification of Temporal Logic
Properties of Learning-enabled Control Systems
- URL: http://arxiv.org/abs/2303.05394v1
- Date: Tue, 7 Mar 2023 04:08:33 GMT
- Title: A Neurosymbolic Approach to the Verification of Temporal Logic
Properties of Learning-enabled Control Systems
- Authors: Navid Hashemi, Bardh Hoxha, Tomoya Yamaguchi, Danil Prokhorov, Georgios
Fainekos, Jyotirmoy Deshmukh
- Abstract summary: We present a model for the verification of Neural Network (NN) controllers for general STL specifications.
We also propose a new approach for neural network controllers with general activation functions.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Signal Temporal Logic (STL) has become a popular tool for expressing formal
requirements of Cyber-Physical Systems (CPS). Verifying STL properties of
neural network-controlled CPS remains a largely unexplored problem. In this
paper, we present a model for the verification of Neural
Network (NN) controllers for general STL specifications using a custom neural
architecture where we map an STL formula into a feed-forward neural network
with ReLU activation. In the case where both our plant model and the controller
are ReLU-activated neural networks, we reduce the STL verification problem to
reachability in ReLU neural networks. We also propose a new approach for neural
network controllers with general activation functions; this approach is a sound
and complete verification approach based on computing the Lipschitz constant of
the closed-loop control system. We demonstrate the practical efficacy of our
techniques on a number of examples of learning-enabled control systems.
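The reduction from STL verification to ReLU-network reachability rests on a standard identity: min and max, which give the quantitative robustness of STL conjunction and disjunction, can each be written with a single ReLU. A minimal sketch of that encoding follows; the predicates and state values are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def relu(x):
    """Rectified linear unit: relu(x) = max(x, 0)."""
    return np.maximum(x, 0.0)

# ReLU identities that let min/max be expressed as small ReLU layers:
#   max(a, b) = a + relu(b - a)
#   min(a, b) = a - relu(a - b)
def relu_max(a, b):
    return a + relu(b - a)

def relu_min(a, b):
    return a - relu(a - b)

# Hypothetical example: robustness of the conjunction (x > 1) AND (y < 3)
# at a single state. Conjunction maps to the min of subformula robustness.
def robustness_and(x, y):
    rho1 = x - 1.0   # robustness of the predicate x > 1
    rho2 = 3.0 - y   # robustness of the predicate y < 3
    return relu_min(rho1, rho2)
```

Composing such min/max gadgets over a bounded time horizon yields a feed-forward ReLU network computing the STL robustness, so robustness positivity can be checked with off-the-shelf ReLU reachability tools.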
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
Verification of Neural Network Control Systems in Continuous Time [1.5695847325697108]
We develop the first verification method for continuously-actuated neural network control systems.
We accomplish this by adding a level of abstraction to model the neural network controller.
We demonstrate the approach's efficacy by applying it to a vision-based autonomous airplane taxiing system.
arXiv Detail & Related papers (2024-05-31T19:39:48Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Learning Robust and Correct Controllers from Signal Temporal Logic
Specifications Using BarrierNet [5.809331819510702]
We exploit STL quantitative semantics to define a notion of robust satisfaction.
We construct a set of trainable High Order Control Barrier Functions (HOCBFs) enforcing the satisfaction of formulas in a fragment of STL.
We train the HOCBFs together with other neural network parameters to further improve the robustness of the controller.
arXiv Detail & Related papers (2023-04-12T21:12:15Z)
Learning to Precode for Integrated Sensing and Communications Systems [11.689567114100514]
We present an unsupervised learning neural model to design transmit precoders for ISAC systems.
We show that the proposed method outperforms traditional optimization-based methods in the presence of channel estimation errors.
arXiv Detail & Related papers (2023-03-11T11:24:18Z)
Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
Learning in Deep Neural Networks Using a Biologically Inspired Optimizer [5.144809478361604]
We propose GRAPES, a novel biologically inspired optimizer for artificial neural networks (ANNs) and spiking neural networks (SNNs).
GRAPES implements a weight-distribution dependent modulation of the error signal at each node of the neural network.
We show that this biologically inspired mechanism leads to a systematic improvement of the convergence rate of the network, and substantially improves classification accuracy of ANNs and SNNs.
arXiv Detail & Related papers (2021-04-23T13:50:30Z)
Model-Based Safe Policy Search from Signal Temporal Logic Specifications Using Recurrent Neural Networks [1.005130974691351]
We propose a policy search approach to learn controllers from specifications given as Signal Temporal Logic (STL) formulae.
The system model is unknown, and it is learned together with the control policy.
The results show that our approach can satisfy the given specification within very few system runs, and therefore it has the potential to be used for on-line control.
arXiv Detail & Related papers (2021-03-29T20:21:55Z)
Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.