Neural Operators for Boundary Stabilization of Stop-and-go Traffic
- URL: http://arxiv.org/abs/2312.10374v1
- Date: Sat, 16 Dec 2023 08:18:39 GMT
- Title: Neural Operators for Boundary Stabilization of Stop-and-go Traffic
- Authors: Yihuai Zhang, Ruiguo Zhong, Huan Yu
- Abstract summary: This paper introduces a novel approach to PDE boundary control design using neural operators.
We present two distinct neural operator learning schemes aimed at stabilizing the traffic PDE system.
It is proved that the NO-based closed-loop system is practically stable under certain approximation-accuracy conditions on the NO learning.
- Score: 1.90298817989995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a novel approach to PDE boundary control design using
neural operators to alleviate stop-and-go instabilities in congested traffic
flow. Our framework leverages neural operators to design control strategies for
traffic flow systems. The traffic dynamics are described by the Aw-Rascle-Zhang
(ARZ) model, which comprises a set of second-order coupled hyperbolic partial
differential equations (PDEs). The backstepping method is widely used for
boundary control of such PDE systems, but the PDE model-based control design
can be time-consuming and requires deep expertise, since it involves
constructing and solving the backstepping control kernels. To overcome these
challenges, we present two distinct neural operator (NO) learning schemes aimed
at stabilizing the traffic PDE system. The first scheme embeds NO-approximated
gain kernels within a predefined backstepping controller, while the second one
directly learns a boundary control law. Lyapunov analysis is conducted to
evaluate the stability of the NO-approximated gain kernels and control law. It
is proved that the NO-based closed-loop system is practically stable under
certain approximation-accuracy conditions on the NO learning. To validate the
efficacy of the proposed approach, simulations are conducted to compare the
performance of the two neural operator controllers with a PDE backstepping
controller and a Proportional Integral (PI) controller. While the
NO-approximated methods exhibit larger errors than the backstepping
controller, they consistently outperform the PI controller and demonstrate
faster computation speeds across all scenarios. This result suggests that
neural operators can significantly expedite and simplify the process of
obtaining boundary controllers in traffic PDE systems.
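As a concrete illustration of the first scheme, the sketch below pairs a DeepONet-style branch-trunk network, which maps sampled model parameters to a gain kernel K(x), with a backstepping-style boundary law U(t) = ∫ K(x) w(x,t) dx. This is a minimal sketch under assumed ingredients rather than the paper's implementation: the `KernelOperator` architecture, the grid sizes, and the random stand-ins for the ARZ parameters and PDE state are illustrative, and in practice the operator would be trained offline on kernels obtained by solving the backstepping kernel equations.

```python
# Sketch of scheme 1: a neural operator approximates the backstepping gain
# kernel, which is then plugged into a standard boundary-feedback law.
# All names and sizes here are illustrative, not from the paper.
import torch
import torch.nn as nn

class KernelOperator(nn.Module):
    """DeepONet-style operator: (sampled PDE parameters, position x) -> K(x)."""
    def __init__(self, n_param_samples: int, width: int = 64, p: int = 32):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_param_samples, width),
                                    nn.Tanh(), nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(1, width),
                                   nn.Tanh(), nn.Linear(width, p))

    def forward(self, params: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        b = self.branch(params)   # (batch, p) from parameter samples
        t = self.trunk(x)         # (n_grid, p) from kernel coordinates
        return b @ t.T            # (batch, n_grid): kernel values K(x)

def boundary_control(K: torch.Tensor, w: torch.Tensor, dx: float) -> torch.Tensor:
    """Boundary law U(t) = integral of K(x) * w(x, t) dx (trapezoidal rule)."""
    return torch.trapezoid(K * w, dx=dx, dim=-1)

# Toy usage with random stand-ins for trained weights and measured state.
n_grid = 101
x = torch.linspace(0.0, 1.0, n_grid).unsqueeze(-1)   # spatial grid on [0, 1]
params = torch.randn(1, 16)    # stand-in for sampled ARZ model parameters
w = torch.randn(1, n_grid)     # stand-in for the measured PDE state w(x, t)

op = KernelOperator(n_param_samples=16)
K = op(params, x)                                    # NO-approximated kernel
U = boundary_control(K, w, dx=float(x[1] - x[0]))    # scalar boundary input
print(U.shape)
```

The second scheme described above would instead train an operator to emit the boundary input U(t) directly from the measured state, bypassing the explicit kernel. Practical stability here means, loosely, that the closed-loop state converges to a residual set whose size depends on the operator's approximation error, rather than converging to zero.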
Related papers
- Adaptive control of reaction-diffusion PDEs via neural operator-approximated gain kernels [3.3044728148521623]
Neural operator approximations of the gain kernels in PDE backstepping have emerged as a viable method for implementing controllers in real time.
We extend the neural operator methodology from adaptive control of a hyperbolic PDE to adaptive control of a benchmark parabolic PDE.
We prove global stability and regulation of the plant state for a Lyapunov design of parameter adaptation.
arXiv Detail & Related papers (2024-07-01T19:24:36Z)
- Structured Deep Neural Network-Based Backstepping Trajectory Tracking Control for Lagrangian Systems [9.61674297336072]
The proposed controller can ensure closed-loop stability for any compatible neural network parameters.
We show that in the presence of model approximation errors and external disturbances, the closed-loop stability and tracking control performance can still be guaranteed.
arXiv Detail & Related papers (2024-03-01T09:09:37Z)
- Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, contemporary strategies are learning-based, characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
arXiv Detail & Related papers (2024-02-04T15:54:03Z)
- Sub-linear Regret in Adaptive Model Predictive Control [56.705978425244496]
We present STT-MPC (Self-Tuning Tube-based Model Predictive Control), an online MPC algorithm that combines the certainty-equivalence principle with polytopic tubes.
We analyze the regret of the algorithm relative to one that is initially aware of the system dynamics.
arXiv Detail & Related papers (2023-10-07T15:07:10Z)
- Steady-State Error Compensation in Reference Tracking and Disturbance Rejection Problems for Reinforcement Learning-Based Control [0.9023847175654602]
Reinforcement learning (RL) is a promising, emerging topic in automatic control applications.
Initiative action state augmentation (IASA) for actor-critic-based RL controllers is introduced.
This augmentation does not require any expert knowledge, leaving the approach model-free.
arXiv Detail & Related papers (2022-01-31T16:29:19Z)
- Finite-time System Identification and Adaptive Control in Autoregressive Exogenous Systems [79.67879934935661]
We study the problem of system identification and adaptive control of unknown ARX systems.
We provide finite-time learning guarantees for the ARX systems under both open-loop and closed-loop data collection.
arXiv Detail & Related papers (2021-08-26T18:00:00Z)
- Regret-optimal Estimation and Control [52.28457815067461]
We show that the regret-optimal estimator and regret-optimal controller can be derived in state-space form.
We propose regret-optimal analogs of Model-Predictive Control (MPC) and the Extended Kalman Filter (EKF) for systems with nonlinear dynamics.
arXiv Detail & Related papers (2021-06-22T23:14:21Z)
- Control of Stochastic Quantum Dynamics with Differentiable Programming [0.0]
We propose a framework for the automated design of control schemes based on differentiable programming.
We apply this approach to state preparation and stabilization of a qubit subjected to homodyne detection.
Despite the resulting poor signal-to-noise ratio, we can train our controller to prepare and stabilize the qubit to a target state with a mean fidelity around 85% (a minimal differentiable-programming sketch appears after this list).
arXiv Detail & Related papers (2021-01-04T19:00:03Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex and call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP); a minimal min-norm CLF sketch also appears after this list.
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory [85.29718245299341]
We study linear controllers under quadratic costs, a setting also known as the linear quadratic regulator (LQR).
We present two different semi-definite programs (SDPs), each of which yields a controller that stabilizes all systems within an ellipsoidal uncertainty set.
We propose an efficient data-dependent algorithm, eXploration, that with high probability quickly identifies a stabilizing controller.
arXiv Detail & Related papers (2020-06-19T08:58:57Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
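The differentiable-programming entry above designs controllers by back-propagating through a simulated rollout. Below is a minimal sketch of that idea in PyTorch: the paper's stochastic qubit under homodyne detection is replaced by a toy deterministic double integrator, and the learnable linear feedback gains are an illustrative stand-in for the paper's controller.

```python
# Sketch of control design via differentiable programming: unroll the
# dynamics in PyTorch and train feedback gains by gradient descent on a
# tracking loss computed along the rollout. Toy system, illustrative only.
import torch

dt, steps = 0.05, 100
target = torch.tensor([1.0, 0.0])            # desired state
gains = torch.zeros(2, requires_grad=True)   # learnable feedback gains
opt = torch.optim.Adam([gains], lr=0.05)

for epoch in range(200):
    x = torch.tensor([0.0, 0.0])             # initial state each episode
    loss = torch.tensor(0.0)
    for _ in range(steps):
        u = gains @ (target - x)             # linear feedback on the error
        x = x + dt * torch.stack([x[1], u])  # double integrator: x1' = x2, x2' = u
        loss = loss + dt * torch.sum((x - target) ** 2)
    opt.zero_grad()
    loss.backward()                          # gradients flow through the rollout
    opt.step()

print(gains.detach())                        # trained feedback gains
```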
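For the GP-CLF-SOCP entry, the sketch below shows only the deterministic min-norm CLF program that the paper's SOCP generalizes: minimize ||u||^2 subject to a Lyapunov decrease condition along the dynamics. The Gaussian-process uncertainty model is omitted, and the double-integrator system, the matrix P, and the decay rate gamma are illustrative assumptions.

```python
# Sketch of a deterministic min-norm CLF controller solved with CVXPY.
# The GP uncertainty that yields the paper's SOCP is omitted; with exact
# dynamics the program reduces to this small convex problem.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: x1' = x2, x2' = u
B = np.array([[0.0], [1.0]])
P = np.array([[1.5, 0.5], [0.5, 1.0]])   # positive definite: V(x) = x^T P x
gamma = 0.5                              # desired decay rate of V

def min_norm_clf_control(x: np.ndarray) -> float:
    """Solve  min ||u||^2  s.t.  dV/dt <= -gamma * V  along x' = Ax + Bu."""
    u = cp.Variable(1)
    V = x @ P @ x
    LfV = 2 * x @ P @ A @ x              # drift term of dV/dt
    LgV = 2 * x @ P @ B                  # control term of dV/dt, shape (1,)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u)),
                      [LfV + LgV[0] * u[0] <= -gamma * V])
    prob.solve()
    return float(u.value[0])

x0 = np.array([1.0, 0.5])
print(min_norm_clf_control(x0))          # smallest input enforcing the decrease
```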