End-to-end Lidar-Driven Reinforcement Learning for Autonomous Racing
- URL: http://arxiv.org/abs/2309.00296v1
- Date: Fri, 1 Sep 2023 07:03:05 GMT
- Title: End-to-end Lidar-Driven Reinforcement Learning for Autonomous Racing
- Authors: Meraj Mammadov
- Abstract summary: Reinforcement Learning (RL) has emerged as a transformative approach in the domains of automation and robotics.
This study develops and trains an RL agent to navigate a racing environment solely using feedforward raw lidar and velocity data.
The agent's performance is then experimentally evaluated in a real-world racing scenario.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement Learning (RL) has emerged as a transformative approach in the
domains of automation and robotics, offering powerful solutions to complex
problems that conventional methods struggle to address. In scenarios where the
problem definitions are elusive and challenging to quantify, learning-based
solutions such as RL become particularly valuable. One instance of such
complexity can be found in the realm of car racing, a dynamic and unpredictable
environment that demands sophisticated decision-making algorithms. This study
focuses on developing and training an RL agent to navigate a racing environment
solely using feedforward raw lidar and velocity data in a simulated context.
The agent's performance, trained in the simulation environment, is then
experimentally evaluated in a real-world racing scenario. This exploration
underlines the feasibility and potential benefits of RL algorithms in enhancing
autonomous racing performance, especially in environments where prior map
information is not available.
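The abstract describes an agent that maps raw lidar ranges and velocity directly to driving commands through a feedforward network. The following is a minimal sketch of that kind of policy; the layer sizes, beam count, and random weights are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative sketch of a feedforward lidar-driven policy: raw lidar
# ranges plus a scalar speed in, steering and throttle out. All sizes
# and weights here are assumptions for illustration.
rng = np.random.default_rng(0)

N_BEAMS = 64                # assumed lidar resolution
obs_dim = N_BEAMS + 1       # lidar ranges + current speed
hidden = 32

# Randomly initialised weights stand in for a trained policy.
W1 = rng.standard_normal((obs_dim, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 2)) * 0.1
b2 = np.zeros(2)

def policy(lidar, speed):
    """Map raw observations to (steering, throttle), each in [-1, 1]."""
    x = np.concatenate([lidar, [speed]])
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

action = policy(rng.uniform(0.1, 10.0, N_BEAMS), speed=3.0)
```

Because the observation is just a flat vector of ranges and speed, no map or localisation module is needed, which is the property the abstract highlights.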
Related papers
- Self-Driving Car Racing: Application of Deep Reinforcement Learning [0.0]
The project aims to develop an AI agent that efficiently drives a simulated car in the OpenAI Gymnasium CarRacing environment.
We investigate various RL algorithms, including Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and novel adaptations that incorporate transfer learning and recurrent neural networks (RNNs) for enhanced performance.
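The DQN variant mentioned above is built around a temporal-difference target computed from a target network. A minimal sketch of that target computation, with made-up numbers standing in for network outputs:

```python
import numpy as np

# Sketch of the core DQN update: the TD target
#   y = r + gamma * max_a' Q(s', a')
# The small array below stands in for a target network's Q-values over
# three discrete actions; all numbers are illustrative.
gamma = 0.99
reward = 1.0
done = False

q_next = np.array([0.2, 0.5, -0.1])   # target-net Q(s', .) per action

td_target = reward + (0.0 if done else gamma * q_next.max())
```

The online network is then regressed toward `td_target` with a squared-error loss; PPO replaces this value-based target with a clipped policy-gradient objective.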
arXiv Detail & Related papers (2024-10-30T07:32:25Z)
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
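The analytic policy gradients (APG) idea above relies on the simulator itself being differentiable, so the policy gradient comes from the chain rule through the dynamics rather than from sampled returns. A toy one-step linear system, chosen purely for illustration, makes the mechanics concrete:

```python
# Toy illustration of analytic policy gradients through a differentiable
# simulator: dynamics and cost are differentiable, so dcost/dtheta is
# computed exactly by the chain rule. The linear system is an assumption
# for illustration, not the paper's setup.
dt, goal = 0.1, 1.0
theta = 0.0                      # policy parameter: a = theta * s

def rollout_grad(theta, s0):
    a = theta * s0               # policy
    s1 = s0 + a * dt             # differentiable dynamics
    cost = (s1 - goal) ** 2      # differentiable cost
    # chain rule: dcost/dtheta = dcost/ds1 * ds1/da * da/dtheta
    grad = 2.0 * (s1 - goal) * dt * s0
    return cost, grad

lr, s0 = 0.5, 2.0
for _ in range(200):
    cost, grad = rollout_grad(theta, s0)
    theta -= lr * grad           # gradient descent on the true gradient
```

With exact gradients the parameter converges deterministically (here to theta = -5, which steers the state onto the goal), which is the sample-efficiency advantage differentiable simulation offers over score-function estimators.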
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- Staged Reinforcement Learning for Complex Tasks through Decomposed Environments [4.883558259729863]
We discuss two methods for approximating real-world problems as RL problems.
In the context of traffic junction simulations, we demonstrate that, if we can decompose a complex task into multiple sub-tasks, solving these tasks first can be advantageous.
From a multi-agent perspective, we introduce a training structuring mechanism that exploits experience learned under the popular Centralised Training Decentralised Execution (CTDE) paradigm.
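The CTDE structure mentioned above gives each agent an actor that acts only on its local observation, while a single centralised critic scores the joint observation during training. A minimal sketch of that split, with illustrative sizes and random weights standing in for trained networks:

```python
import numpy as np

# Minimal sketch of Centralised Training Decentralised Execution (CTDE):
# per-agent actors use local observations only; one centralised critic
# sees the joint observation. Sizes and weights are illustrative.
rng = np.random.default_rng(1)
n_agents, obs_dim, n_actions = 2, 4, 3

actor_W = [rng.standard_normal((obs_dim, n_actions)) for _ in range(n_agents)]
critic_W = rng.standard_normal(n_agents * obs_dim)    # centralised critic

def act(agent, local_obs):
    """Decentralised execution: each agent acts from its own observation."""
    return int(np.argmax(local_obs @ actor_W[agent]))

def central_value(joint_obs):
    """Centralised training signal: value of the concatenated observations."""
    return float(joint_obs @ critic_W)

obs = rng.standard_normal((n_agents, obs_dim))
actions = [act(i, obs[i]) for i in range(n_agents)]
value = central_value(obs.ravel())
```

At deployment only `act` is needed, so the critic (and any global state it consumed) can be discarded, which is what makes execution decentralised.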
arXiv Detail & Related papers (2023-11-05T19:43:23Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Discrete Control in Real-World Driving Environments using Deep Reinforcement Learning [2.467408627377504]
We introduce a framework (perception, planning, and control) that transfers real-world driving environments into gaming environments.
We propose variations of existing Reinforcement Learning (RL) algorithms in a multi-agent setting to learn and execute the discrete control in real-world environments.
arXiv Detail & Related papers (2022-11-29T04:24:03Z)
- Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC).
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
arXiv Detail & Related papers (2022-04-14T17:46:26Z)
- Autonomous Reinforcement Learning: Formalism and Benchmarking [106.25788536376007]
Real-world embodied learning, such as that performed by humans and animals, is situated in a continual, non-episodic world.
Common benchmark tasks in RL are episodic, with the environment resetting between trials to provide the agent with multiple attempts.
This discrepancy presents a major challenge when attempting to take RL algorithms developed for episodic simulated environments and run them on real-world platforms.
arXiv Detail & Related papers (2021-12-17T16:28:06Z)
- Fast Approximate Solutions using Reinforcement Learning for Dynamic Capacitated Vehicle Routing with Time Windows [3.5232085374661284]
This paper develops an inherently parallelised, fast, approximate learning-based solution to the generic class of Capacitated Vehicle Routing with Time Windows and Dynamic Routing (CVRP-TWDR).
Considering vehicles in a fleet as decentralised agents, we postulate that using reinforcement learning (RL) based adaptation is a key enabler for real-time route formation in a dynamic environment.
arXiv Detail & Related papers (2021-02-24T06:30:16Z)
- Deep Reinforcement Learning amidst Lifelong Non-Stationarity [67.24635298387624]
We show that an off-policy RL algorithm can reason about and tackle lifelong non-stationarity.
Our method leverages latent variable models to learn a representation of the environment from current and past experiences.
We also introduce several simulation environments that exhibit lifelong non-stationarity, and empirically find that our approach substantially outperforms approaches that do not reason about environment shift.
arXiv Detail & Related papers (2020-06-18T17:34:50Z)
- Deep Reinforcement Learning for Autonomous Driving: A Survey [0.3694429692322631]
This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks.
It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related but are not classical RL algorithms.
The role of simulators in training agents is discussed, along with methods to validate, test, and robustify existing RL solutions.
arXiv Detail & Related papers (2020-02-02T18:21:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.