Neural Controller Synthesis for Signal Temporal Logic Specifications
Using Encoder-Decoder Structured Networks
- URL: http://arxiv.org/abs/2212.05200v1
- Date: Sat, 10 Dec 2022 04:44:25 GMT
- Title: Neural Controller Synthesis for Signal Temporal Logic Specifications
Using Encoder-Decoder Structured Networks
- Authors: Wataru Hashimoto, Kazumune Hashimoto, Masako Kishida, and Shigemasa
Takai
- Abstract summary: We propose a control synthesis method for signal temporal logic (STL) specifications with neural networks (NNs).
We consider three NN structures: sequential, tree-structured, and graph-structured NNs.
All the model parameters are trained in an end-to-end manner to maximize the expected robustness, a quantitative semantics of STL formulae.
- Score: 0.7874708385247353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a control synthesis method for signal temporal
logic (STL) specifications with neural networks (NNs). Most previous works
train a controller for only a single, given STL specification. These
approaches, however, require retraining the NN controller whenever a new
specification arises and must be satisfied, which results in large memory
consumption and inefficient training. To tackle this problem, we propose to
construct NN controllers by introducing encoder-decoder structured NNs with an
attention mechanism. The encoder takes an STL formula as input and encodes it
into an appropriate vector, and the decoder outputs control signals that will
meet the given specification. As the encoder, we consider three NN structures:
sequential, tree-structured, and graph-structured NNs. All the model
parameters are trained in an end-to-end manner to maximize the expected
robustness, which is known to be a quantitative semantics of STL formulae. We
compare the control performances attained by the above NN structures through a
numerical experiment on a path planning problem, showing the efficacy of the
proposed approach.
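As a rough illustration of the two ingredients the abstract relies on, the sketch below shows (i) the quantitative (robustness) semantics of a tiny STL fragment and (ii) a sequential encoder-decoder controller trained by gradient ascent on that robustness. This is a minimal sketch, not the authors' implementation: the GRU encoder and decoder, the single-integrator dynamics in rollout, the placeholder formula tokens, the reach predicate, and all sizes are assumptions, and the attention mechanism is omitted.

# Illustrative sketch only (not the paper's implementation): a tiny STL
# robustness evaluator plus a sequential encoder-decoder controller trained
# to maximize robustness. Network sizes, dynamics, and the example formula
# are assumptions; the attention mechanism is omitted for brevity.
import torch
import torch.nn as nn

# Quantitative (robustness) semantics: robustness > 0 means the formula is
# satisfied, and its magnitude measures the margin of satisfaction.
def rob_pred(signal, a, b):
    # Predicate a^T x + b >= 0, evaluated at every time step of the signal.
    return signal @ a + b                        # shape (T,)

def rob_eventually(rho, t1, t2):
    # F_[t1,t2] phi: max of phi's robustness over the time window.
    return rho[t1:t2 + 1].max()

def rob_always(rho, t1, t2):
    # G_[t1,t2] phi: min of phi's robustness over the time window.
    return rho[t1:t2 + 1].min()

class STLController(nn.Module):
    # Sequential (GRU) encoder over formula tokens; GRU decoder emits controls.
    def __init__(self, vocab_size, emb=16, hidden=32, u_dim=2, horizon=20):
        super().__init__()
        self.horizon = horizon
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(u_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, u_dim)

    def forward(self, formula_tokens, batch_size):
        _, h = self.encoder(self.embed(formula_tokens))   # encode the formula
        u_prev = torch.zeros(batch_size, 1, self.head.out_features)
        controls = []
        for _ in range(self.horizon):                     # autoregressive decoding
            out, h = self.decoder(u_prev, h)
            u_prev = self.head(out)
            controls.append(u_prev)
        return torch.cat(controls, dim=1)                 # (batch, T, u_dim)

def rollout(x0, controls, dt=0.1):
    # Assumed single-integrator dynamics: x_{k+1} = x_k + dt * u_k.
    xs, x = [], x0
    for k in range(controls.shape[1]):
        x = x + dt * controls[:, k, :]
        xs.append(x)
    return torch.stack(xs, dim=1)                         # (batch, T, 2)

# One training step: maximize robustness of "eventually (x_1 >= 1)".
model = STLController(vocab_size=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 10, (1, 5))                     # placeholder formula tokens
x0 = torch.zeros(1, 2)
traj = rollout(x0, model(tokens, batch_size=1))
rho = rob_eventually(rob_pred(traj[0], torch.tensor([1.0, 0.0]), -1.0), 0, 19)
loss = -rho                                               # gradient ascent on robustness
opt.zero_grad(); loss.backward(); opt.step()

The paper's tree- and graph-structured encoders would replace the sequential GRU here, and in practice the expected robustness would be estimated by averaging over sampled initial states and formulae rather than a single rollout.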
Related papers
- Structured Deep Neural Network-Based Backstepping Trajectory Tracking Control for Lagrangian Systems [9.61674297336072]
The proposed controller can ensure closed-loop stability for any compatible neural network parameters.
We show that in the presence of model approximation errors and external disturbances, the closed-loop stability and tracking control performance can still be guaranteed.
arXiv Detail & Related papers (2024-03-01T09:09:37Z)
- NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions [2.7086888205833968]
Field-Programmable Gate Array (FPGA) accelerators have proven successful in handling latency- and resource-critical deep neural network (DNN) inference tasks.
We propose relaxing the boundaries of neurons and mapping entire sub-networks to a single LUT.
We validate our proposed method on a known latency-critical task, jet substructure tagging, and on the classical computer vision task, digit classification using MNIST.
arXiv Detail & Related papers (2024-02-29T16:10:21Z)
- Learning Robust and Correct Controllers from Signal Temporal Logic Specifications Using BarrierNet [5.809331819510702]
We exploit STL quantitative semantics to define a notion of robust satisfaction.
We construct a set of trainable High Order Control Barrier Functions (HOCBFs) enforcing the satisfaction of formulas in a fragment of STL (a minimal barrier-function sketch, not from this paper, appears after this list).
We train the HOCBFs together with other neural network parameters to further improve the robustness of the controller.
arXiv Detail & Related papers (2023-04-12T21:12:15Z)
- A Neurosymbolic Approach to the Verification of Temporal Logic Properties of Learning enabled Control Systems [0.0]
We present a model for the verification of Neural Network (NN) controllers for general STL specifications.
We also propose a new approach for neural network controllers with general activation functions.
arXiv Detail & Related papers (2023-03-07T04:08:33Z)
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which largely increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- Signal Detection in MIMO Systems with Hardware Imperfections: Message Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, hindering its practical applications.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Efficient Compilation and Mapping of Fixed Function Combinational Logic onto Digital Signal Processors Targeting Neural Network Inference and Utilizing High-level Synthesis [3.83610794195621]
Recent efforts for improving the performance of neural network (NN) accelerators have given rise to a new trend of logic-based NN inference relying on fixed-function combinational logic.
This paper presents an innovative design and optimization methodology for the compilation and mapping of NNs, utilizing fixed-function combinational logic, onto DSPs on FPGAs.
arXiv Detail & Related papers (2022-07-30T20:11:59Z)
- Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)
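The BarrierNet entry above mentions trainable control barrier functions that enforce STL satisfaction; as referenced there, the following is a minimal sketch of the underlying barrier-function idea only, not the HOCBF/BarrierNet construction itself. The single-integrator dynamics, the circular obstacle, and the fixed class-K gain alpha are assumptions chosen for illustration.

# Minimal sketch of a first-order control barrier function (CBF) safety
# filter. NOT the BarrierNet/HOCBF method; dynamics, obstacle shape, and
# the gain alpha are assumptions made for this example.
import numpy as np

def cbf_filter(x, u_nom, obstacle, radius, alpha=1.0):
    # Keep h(x) = ||x - obstacle||^2 - radius^2 nonnegative along the assumed
    # single-integrator dynamics x_dot = u by minimally modifying u_nom.
    h = float(np.dot(x - obstacle, x - obstacle)) - radius**2
    grad_h = 2.0 * (x - obstacle)                  # dh/dx
    slack = float(grad_h @ u_nom) + alpha * h      # CBF condition: slack >= 0
    if slack >= 0.0:
        return u_nom                               # nominal input is already safe
    # Closed-form solution of: min ||u - u_nom||^2  s.t.  grad_h . u + alpha*h >= 0
    return u_nom - (slack / float(grad_h @ grad_h)) * grad_h

x = np.array([1.0, 0.2])
u_safe = cbf_filter(x, u_nom=np.array([-1.0, 0.0]),
                    obstacle=np.array([0.0, 0.0]), radius=0.5)

In the approach summarized in that entry, the barrier functions are trainable and learned jointly with the other network parameters; this snippet only shows the safety-filtering step with fixed parameters.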
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.