Modelling, Positioning, and Deep Reinforcement Learning Path Tracking
Control of Scaled Robotic Vehicles: Design and Experimental Validation
- URL: http://arxiv.org/abs/2401.05194v1
- Date: Wed, 10 Jan 2024 14:40:53 GMT
- Title: Modelling, Positioning, and Deep Reinforcement Learning Path Tracking
Control of Scaled Robotic Vehicles: Design and Experimental Validation
- Authors: Carmine Caponio, Pietro Stano, Raffaele Carli, Ignazio Olivieri,
Daniele Ragone, Aldo Sorniotti and Umberto Montanaro
- Abstract summary: Scaled robotic cars are commonly equipped with a hierarchical control architecture that includes tasks dedicated to vehicle state estimation and control.
This paper covers both aspects by proposing (i) a federated extended Kalman filter (FEKF) and (ii) a novel deep reinforcement learning (DRL) path tracking controller trained via an expert demonstrator.
The experimentally validated model is used for (i) supporting the design of the FEKF and (ii) serving as a digital twin for training the proposed DRL-based path tracking algorithm.
- Score: 3.807917169053206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile robotic systems are becoming increasingly popular. These systems are
used in various indoor applications, ranging from warehousing and manufacturing
to test benches for the assessment of advanced control strategies, such as
artificial intelligence (AI)-based control solutions. Scaled robotic cars are
commonly equipped with a hierarchical control architecture that includes tasks
dedicated to vehicle state estimation and control. This paper covers both
aspects by proposing (i) a federated extended Kalman filter (FEKF), and (ii) a
novel deep reinforcement learning (DRL) path tracking controller trained via an
expert demonstrator to expedite the learning phase and increase robustness to
the simulation-to-reality gap. The paper also presents the formulation of a
vehicle model along with an effective yet simple procedure for identifying its
parameters. The experimentally validated model is used for (i) supporting the
design of the FEKF and (ii) serving as a digital twin for training the proposed
DRL-based path tracking algorithm. Experimental results confirm the ability of
the FEKF to improve the estimate of the mobile robot's position. Furthermore,
the effectiveness of the DRL path tracking strategy is experimentally tested
along manoeuvres not considered during training, also showing the ability of
the AI-based solution to outperform model-based control strategies and the
demonstrator. The comparison with benchmarking controllers is quantitatively
evaluated through a set of key performance indicators.
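The abstract does not detail the filter architecture, but a federated EKF typically runs one local extended Kalman filter per sensor and fuses their estimates in a master filter via information-weighted averaging. The sketch below illustrates that generic structure only; the models `f`, `F`, `h`, `H` and all interfaces are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

class LocalEKF:
    """One local extended Kalman filter processing a single sensor.

    f/F are a (hypothetical) process model and its Jacobian; h/H the
    measurement model and its Jacobian for the sensor at hand.
    """
    def __init__(self, x0, P0, f, F, h, H, Q, R):
        self.x, self.P = x0, P0
        self.f, self.F, self.h, self.H = f, F, h, H
        self.Q, self.R = Q, R

    def predict(self, u, dt):
        Fk = self.F(self.x, u, dt)              # linearize the dynamics
        self.x = self.f(self.x, u, dt)
        self.P = Fk @ self.P @ Fk.T + self.Q

    def update(self, z):
        Hk = self.H(self.x)                     # linearize the sensor model
        y = z - self.h(self.x)                  # innovation
        S = Hk @ self.P @ Hk.T + self.R         # innovation covariance
        K = self.P @ Hk.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ Hk) @ self.P

def fuse(local_filters):
    """Master-filter step: information-weighted fusion of local estimates,
    P_f = (sum_i P_i^-1)^-1 and x_f = P_f * sum_i (P_i^-1 x_i)."""
    infos = [np.linalg.inv(f.P) for f in local_filters]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ f.x for I, f in zip(infos, local_filters))
    return x_fused, P_fused
```

Likewise, one common way to train a DRL agent "via an expert demonstrator" is to seed the replay buffer of an off-policy learner with transitions collected from a model-based controller in the digital twin. The skeleton below is a hedged illustration of that idea; `agent`, `env`, and `demonstrator` are assumed interfaces, not the authors' code.

```python
def collect_demonstrations(demonstrator, env, episodes):
    """Roll out the expert (e.g. a tuned model-based path-tracking
    controller) in the digital twin and record its transitions."""
    transitions = []
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            act = demonstrator(obs)
            next_obs, reward, done = env.step(act)
            transitions.append((obs, act, reward, next_obs, done))
            obs = next_obs
    return transitions

def train_with_demonstrations(agent, env, demos, steps):
    """Seed the off-policy replay buffer with expert transitions so the
    first gradient updates learn from sensible behaviour rather than
    random exploration, shortening the learning phase."""
    for t in demos:
        agent.replay_buffer.add(*t)
    obs = env.reset()
    for _ in range(steps):
        act = agent.act(obs)
        next_obs, reward, done = env.step(act)
        agent.replay_buffer.add(obs, act, reward, next_obs, done)
        agent.update()                          # off-policy learning step
        obs = env.reset() if done else next_obs
```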
Related papers
- A comparison of RL-based and PID controllers for 6-DOF swimming robots:
hybrid underwater object tracking [8.362739554991073]
We present an exploration and assessment of employing a centralized deep Q-network (DQN) controller as a substitute for PID controllers.
Our primary focus centers on illustrating this transition with the specific case of underwater object tracking.
Our experiments, conducted within a Unity-based simulator, validate the effectiveness of a centralized RL agent over separate PID controllers (a minimal sketch of this substitution is given after this list).
arXiv Detail & Related papers (2024-01-29T23:14:15Z) - Data-efficient Deep Reinforcement Learning for Vehicle Trajectory
Control [6.144517901919656]
Reinforcement learning (RL) promises to achieve control performance superior to classical approaches.
Standard RL approaches like soft actor-critic (SAC) require extensive amounts of training data to be collected.
We apply recently developed data-efficient deep RL methods to vehicle trajectory control.
arXiv Detail & Related papers (2023-11-30T09:38:59Z) - Learning to Fly in Seconds [7.259696592534715]
We show how curriculum learning and a highly optimized simulator enhance sample complexity and lead to fast training times.
Our framework enables Simulation-to-Reality (Sim2Real) transfer for direct control after only 18 seconds of training on a consumer-grade laptop.
arXiv Detail & Related papers (2023-11-22T01:06:45Z) - Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and
Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - Self-Inspection Method of Unmanned Aerial Vehicles in Power Plants Using
Deep Q-Network Reinforcement Learning [0.0]
The research proposes a power plant inspection system incorporating UAV autonomous navigation and DQN reinforcement learning.
By enabling the UAV to navigate difficult environments on its own, the trained model makes it more likely that the inspection strategy will be applied in practice.
arXiv Detail & Related papers (2023-03-16T00:58:50Z) - Unified Automatic Control of Vehicular Systems with Reinforcement
Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Constrained Reinforcement Learning for Robotics via Scenario-Based
Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and the performance of the agent.
arXiv Detail & Related papers (2022-06-20T07:19:38Z) - Training and Evaluation of Deep Policies using Reinforcement Learning
and Generative Models [67.78935378952146]
GenRL is a framework for solving sequential decision-making problems.
It exploits the combination of reinforcement learning and latent variable generative models.
We experimentally determine the characteristics of generative models that have most influence on the performance of the final policy training.
arXiv Detail & Related papers (2022-04-18T22:02:32Z) - Data-Efficient Deep Reinforcement Learning for Attitude Control of
Fixed-Wing UAVs: Field Experiments [0.37798600249187286]
We show that DRL can successfully learn to perform attitude control of a fixed-wing UAV operating directly on the original nonlinear dynamics.
We deploy the learned controller on the UAV in flight tests, demonstrating comparable performance to the state-of-the-art ArduPlane proportional-integral-derivative (PID) attitude controller.
arXiv Detail & Related papers (2021-11-07T19:07:46Z)