Model-Reference Reinforcement Learning for Collision-Free Tracking
Control of Autonomous Surface Vehicles
- URL: http://arxiv.org/abs/2008.07240v1
- Date: Mon, 17 Aug 2020 12:15:15 GMT
- Title: Model-Reference Reinforcement Learning for Collision-Free Tracking
Control of Autonomous Surface Vehicles
- Authors: Qingrui Zhang and Wei Pan and Vasso Reppa
- Abstract summary: The proposed control algorithm combines a conventional control method with reinforcement learning to enhance control accuracy and intelligence.
Thanks to reinforcement learning, the overall tracking controller is capable of compensating for model uncertainties and achieving collision avoidance.
- Score: 1.7033108359337459
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a novel model-reference reinforcement learning algorithm
for the intelligent tracking control of uncertain autonomous surface vehicles
with collision avoidance. The proposed control algorithm combines a
conventional control method with reinforcement learning to enhance control
accuracy and intelligence. In the proposed control design, a nominal system is
considered for the design of a baseline tracking controller using a
conventional control approach. The nominal system also defines the desired
behaviour of uncertain autonomous surface vehicles in an obstacle-free
environment. Thanks to reinforcement learning, the overall tracking controller
can compensate for model uncertainties while simultaneously achieving collision
avoidance in environments with obstacles. In comparison to
traditional deep reinforcement learning methods, our proposed learning-based
control can provide stability guarantees and better sample efficiency. We
demonstrate the performance of the new algorithm using an example of autonomous
surface vehicles.
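To make the control structure concrete, the following is a minimal sketch of the combination described in the abstract: a baseline tracking controller designed for the nominal system, plus a learned corrective term, with a reward that penalises deviation from the reference model and proximity to obstacles. The dynamics, gains, and reward weights are illustrative assumptions, not the paper's actual design.
```python
import numpy as np

# Minimal sketch of the model-reference control structure: the applied
# command is a baseline controller designed on the nominal model plus a
# learned corrective term. Dynamics, gains, and reward weights are
# illustrative assumptions, not the paper's actual models.

def nominal_reference_step(x_ref, u_base, dt=0.05):
    """Nominal (obstacle-free) double-integrator model defining the
    desired closed-loop behaviour. State = [position, velocity]."""
    pos, vel = x_ref
    return np.array([pos + vel * dt, vel + u_base * dt])

def baseline_controller(x, target, kp=2.0, kd=1.5):
    """Conventional PD tracking law designed for the nominal system."""
    pos, vel = x
    return kp * (target - pos) - kd * vel

def reward(x, x_ref, obstacle_pos, safe_dist=1.0):
    """Penalise deviation from the reference model (tracking) and
    proximity to an obstacle (collision avoidance)."""
    tracking_err = np.sum((x - x_ref) ** 2)
    clearance = abs(x[0] - obstacle_pos)
    collision_pen = max(0.0, safe_dist - clearance) ** 2
    return -tracking_err - 10.0 * collision_pen

def total_control(x, target, policy):
    """Combined command: baseline term plus RL compensation for model
    uncertainty and obstacle avoidance."""
    return baseline_controller(x, target) + policy(x)

# Usage with a placeholder (untrained) policy:
policy = lambda x: 0.0
x = np.array([0.0, 0.0])
u = total_control(x, target=5.0, policy=policy)
```
The design choice this sketch highlights is that the learned term only has to capture the residual between the nominal model and the true vehicle, which is what underpins the claimed sample efficiency and stability guarantees.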
Related papers
- Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, contemporary control strategies are often learning-based, characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
arXiv Detail & Related papers (2024-02-04T15:54:03Z)
- Reinforcement Learning with Model Predictive Control for Highway Ramp Metering [14.389086937116582]
This work explores the synergy between model-based and learning-based strategies to enhance traffic flow management.
The control problem is formulated as an RL task by crafting a suitable stage cost function.
An MPC-based RL approach, which leverages the MPC optimization problem as a function approximator for the RL algorithm, is proposed to learn to efficiently control an on-ramp.
arXiv Detail & Related papers (2023-11-15T09:50:54Z)
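A hedged sketch of what "crafting a suitable stage cost" for ramp metering might look like; the state variables, weights, and critical density below are assumptions for illustration, not taken from the paper.
```python
# Hypothetical stage cost for ramp metering cast as an RL task: trade
# off mainline congestion against on-ramp queue growth and control
# effort. Variable names, weights, and the critical density are
# illustrative assumptions.

def stage_cost(mainline_density, queue_length, metering_rate, prev_rate,
               w_density=1.0, w_queue=0.2, w_effort=0.05,
               critical_density=33.5):
    congestion = max(0.0, mainline_density - critical_density) ** 2
    effort = (metering_rate - prev_rate) ** 2  # penalise rate chattering
    return (w_density * congestion
            + w_queue * queue_length
            + w_effort * effort)

# The RL reward is the negative stage cost:
r = -stage_cost(mainline_density=40.0, queue_length=12.0,
                metering_rate=0.6, prev_rate=0.5)
```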
- Tuning Legged Locomotion Controllers via Safe Bayesian Optimization [47.87675010450171]
This paper presents a data-driven strategy to streamline the deployment of model-based controllers on legged robotic hardware platforms.
We leverage a model-free safe learning algorithm to automate the tuning of control gains, addressing the mismatch between the simplified model used in the control formulation and the real system.
arXiv Detail & Related papers (2023-06-12T13:10:14Z)
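The gain-tuning loop above can be sketched as constrained Bayesian optimization: a Gaussian process models the closed-loop performance over gains, and a candidate is evaluated on hardware only if its pessimistic prediction clears a safety threshold. The toy objective, kernel, and thresholds below are assumptions, not the paper's actual SafeOpt configuration.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def closed_loop_metric(gains):
    """Placeholder for a real rollout on the robot; returns a score."""
    kp, kd = gains
    return -(kp - 2.0) ** 2 - (kd - 0.8) ** 2  # toy objective

safety_threshold = -2.0          # scores below this are deemed unsafe
X = [[1.5, 0.5]]                 # known-safe initial gains
y = [closed_loop_metric(X[0])]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
rng = np.random.default_rng(0)

for _ in range(20):
    gp.fit(np.array(X), np.array(y))
    candidates = rng.uniform([0.5, 0.1], [4.0, 2.0], size=(200, 2))
    mean, std = gp.predict(candidates, return_std=True)
    safe = mean - 2.0 * std > safety_threshold  # pessimistic safety check
    if not safe.any():
        break
    # Among safe candidates, evaluate the most optimistic one (UCB).
    best = candidates[safe][np.argmax((mean + 2.0 * std)[safe])]
    X.append(best.tolist())
    y.append(closed_loop_metric(best))
```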
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
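A hedged sketch of the kind of compact policy network the entry above describes, mapping an observation vector to acceleration and steering commands; the sizes, activations, and observation dimension are assumptions, and the paper's architecture may differ.
```python
import torch
import torch.nn as nn

class TinyDrivingPolicy(nn.Module):
    """Small MLP mapping observations to [acceleration, steering].
    All sizes here are illustrative assumptions."""
    def __init__(self, obs_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # [acceleration, steering angle]
            nn.Tanh(),             # bounded commands in [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

policy = TinyDrivingPolicy()
action = policy(torch.zeros(1, 32))  # -> tensor of shape (1, 2)
```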
- Learning Variable Impedance Control for Aerial Sliding on Uneven Heterogeneous Surfaces by Proprioceptive and Tactile Sensing [42.27572349747162]
We present a learning-based adaptive control strategy for aerial sliding tasks.
The proposed controller structure combines data-driven and model-based control methods.
Compared to fine-tuned state-of-the-art interaction control methods, we achieve reduced tracking error and improved disturbance rejection.
arXiv Detail & Related papers (2022-06-28T16:28:59Z)
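A minimal sketch of variable impedance control in the spirit of the entry above: a model-based impedance law whose stiffness and damping come from a learned component, here a hypothetical placeholder function. The adaptation rule and gains are assumptions.
```python
import numpy as np

def learned_gains(contact_features):
    """Stand-in for a policy mapping proprioceptive/tactile features to
    impedance parameters (stiffness K, damping D); purely illustrative."""
    roughness = contact_features[0]
    K = 50.0 + 100.0 * roughness  # stiffer on rougher surfaces
    D = 2.0 * np.sqrt(K)          # near-critical damping
    return K, D

def impedance_force(x, v, x_des, v_des, contact_features):
    """Model-based impedance law with data-driven gains."""
    K, D = learned_gains(contact_features)
    return K * (x_des - x) + D * (v_des - v)

f = impedance_force(x=0.02, v=0.0, x_des=0.0, v_des=0.1,
                    contact_features=np.array([0.3]))
```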
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
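A minimal sketch of the imitation setup described above, assuming a toy saturated linear law as the optimization-based expert and a linear policy class fitted by least squares; the paper's key step of certifying that the learned policy inherits the expert's safety guarantees is not reproduced here.
```python
import numpy as np

def expert(x):
    """Placeholder for an MPC/optimization-based safe controller."""
    return np.clip(-1.2 * x[0] - 0.8 * x[1], -1.0, 1.0)

# Collect expert demonstrations over sampled states.
rng = np.random.default_rng(1)
states = rng.uniform(-1.0, 1.0, size=(500, 2))
actions = np.array([expert(x) for x in states])

# Behaviour cloning with a linear policy u = w @ x (least-squares fit).
w, *_ = np.linalg.lstsq(states, actions, rcond=None)

def learned_policy(x):
    return float(w @ x)  # cheap run-time evaluation
```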
- Learning-based vs Model-free Adaptive Control of a MAV under Wind Gust [0.2770822269241973]
Navigation problems under unknown varying conditions are among the most important and well-studied problems in the control field.
Recent model-free adaptive control methods aim to remove the dependency on an accurate plant model by learning the physical characteristics of the plant directly from sensor feedback.
We propose a conceptually simple learning-based approach composed of a full state feedback controller, tuned robustly by a deep reinforcement learning framework.
arXiv Detail & Related papers (2021-01-29T10:13:56Z)
- Reinforcement Learning for Low-Thrust Trajectory Design of Interplanetary Missions [77.34726150561087]
This paper investigates the use of reinforcement learning for the robust design of interplanetary trajectories in the presence of severe disturbances.
An open-source implementation of the state-of-the-art algorithm Proximal Policy Optimization is adopted.
The resulting Guidance and Control Network provides both a robust nominal trajectory and the associated closed-loop guidance law.
arXiv Detail & Related papers (2020-08-19T15:22:15Z)
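The entry above adopts an open-source Proximal Policy Optimization implementation; a minimal example of the same recipe using stable-baselines3, with a standard environment standing in for the paper's custom interplanetary-trajectory environment:
```python
import gymnasium as gym
from stable_baselines3 import PPO

# Stand-in environment; the paper's low-thrust trajectory environment
# is custom and not reproduced here.
env = gym.make("Pendulum-v1")

model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)  # closed-loop policy
```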
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
RL-CBF-CLF-QP addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
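The core CBF mechanism in the entry above can be sketched in a scalar case where the quadratic program has a closed form; the system, barrier, and gain below are illustrative, and the RL component that learns the model uncertainty in the constraint is omitted.
```python
# Scalar CBF "QP" sketch: single integrator x_dot = u with safe set
# h(x) = x >= 0. The CBF condition h_dot + alpha * h >= 0 reduces to
# u >= -alpha * x, so the minimally invasive safe input is a clamp.

def cbf_safety_filter(u_nominal, x, alpha=1.0):
    """Project the nominal command onto the set of CBF-safe inputs."""
    u_min = -alpha * x  # boundary of the admissible input set
    return max(u_nominal, u_min)

# A nominal law pushing toward the unsafe point x = -1 gets overridden:
x = 0.1
u_nom = 1.0 * (-1.0 - x)              # wants to cross the barrier
u_safe = cbf_safety_filter(u_nom, x)  # -> -0.1; x decays to 0, never below
```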
- Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles with Uncertainties [1.7033108359337459]
The proposed control combines a conventional control method with deep reinforcement learning.
With reinforcement learning, we can directly learn a control law that compensates for modeling uncertainties.
In comparison with traditional deep reinforcement learning methods, our proposed learning-based control can provide stability guarantees and better sample efficiency.
arXiv Detail & Related papers (2020-03-30T22:02:13Z)