Trajectory Tracking of Underactuated Sea Vessels With Uncertain
Dynamics: An Integral Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2104.00190v1
- Date: Thu, 1 Apr 2021 01:41:49 GMT
- Title: Trajectory Tracking of Underactuated Sea Vessels With Uncertain
Dynamics: An Integral Reinforcement Learning Approach
- Authors: Mohammed Abouheaf, Wail Gueaieb, Md. Suruz Miah, Davide Spinello
- Abstract summary: An online machine learning mechanism based on integral reinforcement learning is proposed to find a solution for a class of nonlinear tracking problems.
The solution is implemented using an online value iteration process realized with adaptive critics and gradient descent approaches.
- Score: 2.064612766965483
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Underactuated systems like sea vessels have degrees of motion that are
insufficiently matched by a set of independent actuation forces. In addition,
the underlying trajectory-tracking control problem grows in complexity with
the need to decide optimal rudder and thrust control signals. Classical
optimal tracking and adaptive control approaches consequently face several
difficult-to-solve constraints associated with the error dynamical
equations. An
online machine learning mechanism based on integral reinforcement learning is
proposed to find a solution for a class of nonlinear tracking problems with
partial prior knowledge of the system dynamics. The actuation forces are
decided using innovative forms of temporal difference equations relevant to the
vessel's surge and angular velocities. The solution is implemented using an
online value iteration process realized with adaptive critics and gradient
descent approaches. The adaptive learning mechanism performed reliably and
responded interactively to different desired reference-tracking scenarios.
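For intuition, here is a minimal Python sketch of the online value-iteration mechanics described above, assuming a linearly parameterized quadratic critic and a hypothetical measurement hook `collect_interval`; it illustrates integral RL updates, not the paper's implementation.

```python
import numpy as np

def features(e, u):
    """Quadratic critic features of tracking error e and control u
    (an illustrative basis choice, not the paper's exact one)."""
    z = np.concatenate([e, u])
    return np.outer(z, z)[np.triu_indices(z.size)]

def irl_value_iteration(collect_interval, n_updates=500, gamma=0.99, lr=1e-2):
    """Online integral RL sketch: drive the integral temporal difference
    residual of each sampling interval toward zero by gradient descent on
    the critic weights. collect_interval() is a hypothetical hook returning
    one measured interval (e0, u0, integral_cost, e1, u1); no model of the
    vessel dynamics is needed, only these online measurements."""
    e0, u0, cost, e1, u1 = collect_interval()
    w = np.zeros(features(e0, u0).size)
    for _ in range(n_updates):
        # Integral Bellman (temporal difference) residual over [t, t + T]
        td = cost + gamma * w @ features(e1, u1) - w @ features(e0, u0)
        # Semi-gradient descent step on 0.5 * td**2
        w += lr * td * features(e0, u0)
        e0, u0, cost, e1, u1 = collect_interval()
    return w
```

In the paper, separate temporal difference forms are set up for the surge and angular velocity channels; the single critic above only illustrates the update mechanics.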
Related papers
- Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, contemporary control strategies are learning-based and characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
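A generic sketch of such a two-layer hierarchy, with a placeholder learned planner and a simple PD loop standing in for the paper's auto-tuning low-level controller (all interfaces are hypothetical):

```python
import numpy as np

def high_level_policy(state):
    """Placeholder for a trained DRL planner: maps robot state to a waypoint."""
    return state[:3] + np.array([0.05, 0.0, 0.0])  # hypothetical: step along x

def low_level_control(state, waypoint, kp=4.0, kd=0.8):
    """Simple PD tracking loop standing in for the auto-tuned controller."""
    pos, vel = state[:3], state[3:6]
    return kp * (waypoint - pos) - kd * vel

def control_step(state):
    waypoint = high_level_policy(state)         # slow, learned layer
    return low_level_control(state, waypoint)   # fast, analytic layer
```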
arXiv Detail & Related papers (2024-02-04T15:54:03Z)
- DTC: Deep Tracking Control [16.2850135844455]
We propose a hybrid control architecture that combines the advantages of both worlds to achieve greater robustness, foot-placement accuracy, and terrain generalization.
A deep neural network policy is trained in simulation, aiming to track the optimized footholds.
We demonstrate superior robustness in the presence of slippery or deformable ground when compared to model-based counterparts.
arXiv Detail & Related papers (2023-09-27T07:57:37Z)
- Actively Learning Reinforcement Learning: A Stochastic Optimal Control Approach [3.453622106101339]
We propose a framework towards achieving two intertwined objectives: (i) equipping reinforcement learning with active exploration and deliberate information gathering, and (ii) overcoming the computational intractability of the optimal control law.
We approach both objectives by using reinforcement learning to compute the optimal control law.
Unlike a fixed exploration-exploitation balance, caution and probing are employed automatically by the controller in real time, even after the learning process has terminated.
arXiv Detail & Related papers (2023-09-18T18:05:35Z)
- A Data-Driven Model-Reference Adaptive Control Approach Based on Reinforcement Learning [4.817429789586126]
A model-reference adaptive solution is developed here for autonomous systems; it solves a Hamilton-Jacobi-Bellman equation formulated on an error-based structure.
This is done in real time, without knowing or employing the dynamics of either the process or the reference model in the control strategy.
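A schematic sketch of that error-based, model-free structure, with black-box plant and reference-model steps and an arbitrary online learner (names are illustrative, not the paper's API):

```python
def simulate_tracking(step_plant, step_reference, learn_policy, x0, r0,
                      horizon=1000):
    """Model-reference loop: the adaptive element sees only the tracking error.
    step_plant / step_reference are treated as black boxes (unknown dynamics);
    learn_policy is any online learner, e.g. the actor-critic sketched earlier."""
    x, x_ref = x0, r0
    for _ in range(horizon):
        e = x - x_ref                    # error-based structure
        u = learn_policy(e)              # control decided from the error alone
        x = step_plant(x, u)             # unknown process dynamics
        x_ref = step_reference(x_ref)    # unknown reference-model dynamics
    return x, x_ref
```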
arXiv Detail & Related papers (2023-03-17T14:10:52Z)
- An Adaptive Fuzzy Reinforcement Learning Cooperative Approach for the Autonomous Control of Flock Systems [4.961066282705832]
This work introduces an adaptive distributed robustness technique for the autonomous control of flock systems.
Its relatively flexible structure is based on online fuzzy reinforcement learning schemes which simultaneously target a number of objectives.
In addition to its resilience in the face of dynamic disturbances, the algorithm does not require more than the agent position as a feedback signal.
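For intuition, a classical position-only cohesion/separation rule in Python; this stands in for the paper's fuzzy reinforcement learning policy and shows only that agent positions suffice as feedback (gains are illustrative):

```python
import numpy as np

def flock_step(positions, i, dt=0.1, k_coh=0.4, k_sep=0.8, sep_radius=1.0):
    """Position-only flocking update for agent i: cohesion toward the flock
    plus short-range separation. positions has shape (n_agents, dim)."""
    others = np.delete(positions, i, axis=0)
    cohesion = others.mean(axis=0) - positions[i]       # move toward the flock
    diffs = positions[i] - others
    dists = np.linalg.norm(diffs, axis=1, keepdims=True)
    close = dists < sep_radius
    separation = (diffs / np.maximum(dists, 1e-6) * close).sum(axis=0)
    return positions[i] + dt * (k_coh * cohesion + k_sep * separation)
```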
arXiv Detail & Related papers (2023-03-17T13:07:35Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
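A minimal PyTorch sketch of a causal temporal-convolutional dynamics model in this spirit; layer sizes and the 13-dimensional output are illustrative, not the PI-TCN architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    """1-D convolution with left-only padding so outputs never see the future."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):                           # x: (batch, channels, time)
        return super().forward(F.pad(x, (self.left_pad, 0)))

class TinyTCN(nn.Module):
    """Minimal temporal-convolutional dynamics model: a short history of states
    and motor commands in, a predicted next state out."""
    def __init__(self, in_ch, hidden=32, out_dim=13):
        super().__init__()
        self.body = nn.Sequential(
            CausalConv1d(in_ch, hidden, 3, dilation=1), nn.ReLU(),
            CausalConv1d(hidden, hidden, 3, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, out_dim)      # dense feed-forward head

    def forward(self, x):                           # x: (batch, in_ch, time)
        return self.head(self.body(x)[:, :, -1])    # predict from newest step
```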
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Non-stationary Online Learning with Memory and Non-stochastic Control [71.14503310914799]
We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions.
In this paper, we introduce dynamic policy regret as the performance measure to design algorithms robust to non-stationary environments.
We propose a novel algorithm for OCO with memory that provably enjoys an optimal dynamic policy regret in terms of time horizon, non-stationarity measure, and memory length.
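A toy Python loop for the OCO-with-memory setting; the update is plain projected gradient descent under a hypothetical loss interface, not the paper's algorithm with its dynamic-policy-regret guarantee:

```python
import numpy as np

def oco_with_memory(loss_and_grad, d, m, T, lr=0.05, radius=1.0):
    """At round t the adversary's loss depends on the window of the last m
    decisions. loss_and_grad(window) is a hypothetical interface returning
    the incurred loss and a (sub)gradient used for the update."""
    x = np.zeros(d)
    window = [x.copy() for _ in range(m)]
    total = 0.0
    for _ in range(T):
        value, grad = loss_and_grad(window)   # loss depends on past decisions
        total += value
        x = x - lr * grad
        x *= min(1.0, radius / max(np.linalg.norm(x), 1e-12))  # project to ball
        window = window[1:] + [x.copy()]      # slide the decision window
    return x, total
```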
arXiv Detail & Related papers (2021-02-07T09:45:15Z)
- Reinforcement Learning for Low-Thrust Trajectory Design of Interplanetary Missions [77.34726150561087]
This paper investigates the use of reinforcement learning for the robust design of interplanetary trajectories in the presence of severe disturbances.
An open-source implementation of the state-of-the-art algorithm Proximal Policy Optimization is adopted.
The resulting Guidance and Control Network provides both a robust nominal trajectory and the associated closed-loop guidance law.
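Since the paper adopts Proximal Policy Optimization, the standard PPO clipped surrogate loss is the core update; a generic PyTorch sketch (not the paper's open-source code):

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate objective. logp_new/logp_old: log-probabilities of
    the taken actions under the current and data-collecting policies;
    advantages: estimated advantages for those actions."""
    ratio = torch.exp(logp_new - logp_old.detach())
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()  # minimize negative surrogate
```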
arXiv Detail & Related papers (2020-08-19T15:22:15Z)
- Robust Reinforcement Learning with Wasserstein Constraint [49.86490922809473]
We show the existence of optimal robust policies, provide a sensitivity analysis for the perturbations, and then design a novel robust learning algorithm.
The effectiveness of the proposed algorithm is verified in the Cart-Pole environment.
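For intuition about the constraint's role, a penalty-form robust value iteration over a finite set of perturbed transition kernels, using the closed-form 1-Wasserstein distance on an ordered 1-D state grid; a generic sketch, not the paper's algorithm:

```python
import numpy as np

def w1_1d(p, q):
    """1-Wasserstein distance between distributions on an ordered 1-D grid."""
    return np.abs(np.cumsum(p - q)).sum()

def robust_value_iteration(P, R, adversary_kernels, gamma=0.95, lam=1.0,
                           iters=300):
    """Penalty-form robust Bellman iteration: the adversary trades lower value
    against a Wasserstein transport cost. P: nominal kernel, shape (S, A, S);
    R: rewards, shape (S, A); adversary_kernels: candidate perturbed kernels."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                Q[s, a] = R[s, a] + gamma * min(
                    q[s, a] @ V + lam * w1_1d(P[s, a], q[s, a])
                    for q in adversary_kernels
                )
        V = Q.max(axis=1)
    return V
```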
arXiv Detail & Related papers (2020-06-01T13:48:59Z)
- Logarithmic Regret Bound in Partially Observable Linear Dynamical Systems [91.43582419264763]
We study the problem of system identification and adaptive control in partially observable linear dynamical systems.
We present the first model estimation method with finite-time guarantees in both open- and closed-loop system identification.
We show that AdaptOn is the first algorithm that achieves $\text{polylog}(T)$ regret in adaptive control of unknown partially observable linear dynamical systems.
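For a flavor of the estimation step, a minimal least-squares recovery of Markov parameters from open-loop input-output data; this is textbook system identification, not the AdaptOn algorithm:

```python
import numpy as np

def estimate_markov_params(u, y, H=10, reg=1e-6):
    """Regress each output y_t on the stacked recent inputs
    [u_t, ..., u_{t-H+1}] to recover the first H Markov parameters of an
    unknown LTI system. u: (T, m) inputs, y: (T, p) outputs; H and reg are
    illustrative choices."""
    T, m = u.shape
    rows = range(H - 1, T)
    X = np.stack([u[t::-1][:H].reshape(-1) for t in rows])   # (T-H+1, H*m)
    Y = y[H - 1:]                                            # (T-H+1, p)
    G = np.linalg.solve(X.T @ X + reg * np.eye(H * m), X.T @ Y)
    return G.reshape(H, m, -1)   # G[0] ~ D, G[k] ~ C A^{k-1} B
```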
arXiv Detail & Related papers (2020-03-25T06:00:33Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
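A caricature of the optimism principle over a finite confidence set of candidate models, each scored by its LQR cost-to-go (LqgOpt itself works with continuous LQG confidence sets; names here are illustrative):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Infinite-horizon discrete LQR gain for the model (A, B)."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def optimistic_controller(candidate_models, Q, R):
    """Optimism in the face of uncertainty: pick the (A, B) in the confidence
    set promising the lowest cost-to-go (trace of the Riccati solution as a
    proxy) and deploy its LQR controller."""
    best = min(
        candidate_models,
        key=lambda ab: np.trace(solve_discrete_are(ab[0], ab[1], Q, R)),
    )
    A, B = best
    return best, lqr_gain(A, B, Q, R)
```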
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.