A comparison of RL-based and PID controllers for 6-DOF swimming robots:
hybrid underwater object tracking
- URL: http://arxiv.org/abs/2401.16618v1
- Date: Mon, 29 Jan 2024 23:14:15 GMT
- Title: A comparison of RL-based and PID controllers for 6-DOF swimming robots:
hybrid underwater object tracking
- Authors: Faraz Lotfi, Khalil Virji, Nicholas Dudek, and Gregory Dudek
- Abstract summary: We present an exploration and assessment of employing a centralized deep Q-network (DQN) controller as a substitute for PID controllers.
Our primary focus centers on illustrating this transition with the specific case of underwater object tracking.
Our experiments, conducted within a Unity-based simulator, validate the effectiveness of a centralized RL agent over separated PID controllers.
- Score: 8.362739554991073
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present an exploration and assessment of employing a
centralized deep Q-network (DQN) controller as a substitute for the prevalent
use of PID controllers in the context of 6-DOF swimming robots. Our primary
focus centers on illustrating this transition with the specific case of
underwater object tracking. DQN offers advantages such as data efficiency and
off-policy learning, while remaining simpler to implement than other
reinforcement learning methods. Given the absence of a dynamic model for our
robot, we propose an RL agent to control this multi-input-multi-output (MIMO)
system, where a centralized controller may offer more robust control than
distinct PIDs. Our approach involves initially using classical controllers for
safe exploration, then gradually shifting to DQN to take full control of the
robot.
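As a sketch of how such a staged handover could look, the snippet below mixes the two controllers with a probability that ramps toward the DQN. The linear schedule, the step budget, and all names are illustrative assumptions; the paper only states that control shifts gradually from the classical controllers to the DQN.
```python
import random

def select_action(pid_action, dqn_action, step, handover_steps=50_000):
    """Pick between the classical PID command and the DQN command.

    Early in training alpha is near 0, so the PID controller provides
    safe exploration; alpha ramps linearly to 1, at which point the
    DQN has full control of the robot. The schedule is a hypothetical
    choice, not a detail taken from the paper.
    """
    alpha = min(step / handover_steps, 1.0)
    return dqn_action if random.random() < alpha else pid_action
```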
We divide the underwater tracking task into vision and control modules. We
use established methods for vision-based tracking and introduce a centralized
DQN controller. By transmitting bounding box data from the vision module to the
control module, we enable adaptation to various objects and effortless vision
system replacement. Furthermore, dealing with low-dimensional data facilitates
cost-effective online learning for the controller. Our experiments, conducted
within a Unity-based simulator, validate the effectiveness of a centralized RL
agent over separated PID controllers, showcasing the applicability of our
framework for training the underwater RL agent and improved performance
compared to traditional control methods. The code for both real and simulation
implementations is at https://github.com/FARAZLOTFI/underwater-object-tracking.
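Because the control module consumes only bounding-box data rather than raw images, the Q-network can stay very small. The sketch below assumes a 3-D state (normalized box-centre offset plus relative box area) and a discretized action set; both are illustrative assumptions, not the authors' exact design.
```python
import torch
import torch.nn as nn

class BBoxDQN(nn.Module):
    """Q-network over low-dimensional bounding-box features."""

    def __init__(self, n_actions: int = 27):  # action count is hypothetical
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),   # state: (cx, cy, relative area)
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),      # one Q-value per discrete command
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Greedy action for a target slightly right of centre and above it:
q_net = BBoxDQN()
state = torch.tensor([[0.1, -0.2, 0.05]])  # cx, cy in [-1, 1]; area in [0, 1]
action = q_net(state).argmax(dim=1).item()
```
Keeping the state this small is what makes the cost-effective online updates described in the abstract feasible.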
Related papers
- Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
arXiv Detail & Related papers (2024-10-15T06:28:21Z)
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- Modelling, Positioning, and Deep Reinforcement Learning Path Tracking Control of Scaled Robotic Vehicles: Design and Experimental Validation [3.807917169053206]
Scaled robotic cars are commonly equipped with a hierarchical control architecture that includes tasks dedicated to vehicle state estimation and control.
This paper covers both aspects by proposing (i) a federated extended Kalman filter (FEKF) and (ii) a novel deep reinforcement learning (DRL) path tracking controller trained via an expert demonstrator.
The experimentally validated model is used for (i) supporting the design of the FEKF and (ii) serving as a digital twin for training the proposed DRL-based path tracking algorithm.
arXiv Detail & Related papers (2024-01-10T14:40:53Z)
- In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States [84.24300005271185]
We propose a control filter that wraps any reference policy and effectively encourages the system to stay in-distribution with respect to offline-collected safe demonstrations.
Our method is effective for two different visuomotor control tasks in simulation environments, including both top-down and egocentric view settings.
arXiv Detail & Related papers (2023-01-27T22:28:19Z)
- Skip Training for Multi-Agent Reinforcement Learning Controller for Industrial Wave Energy Converters [94.84709449845352]
Recent Wave Energy Converters (WEC) are equipped with multiple legs and generators to maximize energy generation.
Traditional controllers have shown limitations in capturing complex wave patterns, and controllers must maximize energy capture efficiently.
This paper introduces a Multi-Agent Reinforcement Learning (MARL) controller, which outperforms the traditionally used spring-damper controller.
arXiv Detail & Related papers (2022-09-13T00:20:31Z)
- Deep Reinforcement Learning with Shallow Controllers: An Experimental Application to PID Tuning [3.9146761527401424]
We demonstrate the challenges in implementing a state-of-the-art deep RL algorithm on a real physical system.
At the core of our approach is the use of a PID controller as the trainable RL policy.
arXiv Detail & Related papers (2021-11-13T18:48:28Z)
- Data-Efficient Deep Reinforcement Learning for Attitude Control of Fixed-Wing UAVs: Field Experiments [0.37798600249187286]
We show that DRL can successfully learn to perform attitude control of a fixed-wing UAV operating directly on the original nonlinear dynamics.
We deploy the learned controller on the UAV in flight tests, demonstrating comparable performance to the state-of-the-art ArduPlane proportional-integral-derivative (PID) attitude controller.
arXiv Detail & Related papers (2021-11-07T19:07:46Z)
- Machine Learning for Mechanical Ventilation Control [52.65490904484772]
We consider the problem of controlling an invasive mechanical ventilator for pressure-controlled ventilation.
A PID controller must let air in and out of a sedated patient's lungs according to a trajectory of airway pressures specified by a clinician.
We show that our controllers are able to track target pressure waveforms significantly better than PID controllers.
arXiv Detail & Related papers (2021-02-12T21:23:33Z)
- Learning of Long-Horizon Sparse-Reward Robotic Manipulator Tasks with Base Controllers [26.807673929816026]
We propose a method of learning long-horizon sparse-reward tasks utilizing one or more traditional base controllers.
Our algorithm incorporates the existing base controllers into stages of exploration, value learning, and policy update.
Our method bears the potential of leveraging existing industrial robot manipulation systems to build more flexible and intelligent controllers.
arXiv Detail & Related papers (2020-11-24T14:23:57Z)
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
- AirCapRL: Autonomous Aerial Human Motion Capture using Deep Reinforcement Learning [38.429105809093116]
We introduce a deep reinforcement learning (RL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap).
We focus on vision-based MoCap, where the objective is to estimate the trajectory of the body pose and shape of a single moving person using multiple aerial vehicles.
arXiv Detail & Related papers (2020-07-13T12:30:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.