Discrete Control in Real-World Driving Environments using Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2211.15920v2
- Date: Wed, 30 Nov 2022 17:25:23 GMT
- Authors: Avinash Amballa, Advaith P., Pradip Sasmal, and Sumohana Channappayya
- Abstract summary: We introduce a framework (perception, planning, and control) in a real-world driving environment that maps real-world environments into game-like environments.
We propose variations of existing Reinforcement Learning (RL) algorithms in a multi-agent setting to learn and execute discrete control in real-world environments.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Training self-driving cars is often challenging since they require a vast
amount of labeled data from multiple real-world contexts, which is
computationally and memory intensive. Researchers often resort to driving
simulators to train the agent and then transfer the knowledge to a real-world
setting. Since simulators lack realistic behavior, these methods are quite
inefficient. To address this issue, we introduce a framework (perception,
planning, and control) in a real-world driving environment that maps
real-world environments into game-like environments by setting up a reliable
Markov Decision Process (MDP). We propose variations of existing Reinforcement
Learning (RL) algorithms in a multi-agent setting to learn and execute
discrete control in real-world environments. Experiments show that the
multi-agent setting outperforms the single-agent setting in all the scenarios.
We also propose reliable initialization, data augmentation, and training
techniques that enable the agents to learn and generalize to navigate in a
real-world environment with minimal input video data and minimal
training. Additionally, to show the efficacy of our proposed algorithm, we
deploy our method in the virtual driving environment TORCS.
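The control setup the abstract describes, discrete driving actions learned per agent over an MDP, can be sketched with tabular Q-learning. The action set, state labels, and reward below are illustrative assumptions for the sketch, not the authors' actual design (the paper uses deep RL variants).

```python
import random

# Hypothetical discrete driving actions; the paper's actual action set
# is not specified here, so these are illustrative assumptions.
ACTIONS = ["straight", "left", "right"]

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step toward the Bellman target
    r + gamma * max_b Q(s_next, b)."""
    best_next = max(q.get((s_next, b), 0.0) for b in ACTIONS)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def epsilon_greedy(q, s, eps=0.1):
    """Explore with probability eps, otherwise act greedily."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((s, a), 0.0))

# Multi-agent setting: each agent keeps its own Q-table and learns
# independently from its own transitions.
agents = [{} for _ in range(2)]
for q in agents:
    # Toy transition: staying on course in state "lane" yields reward 1.
    for _ in range(100):
        q_update(q, "lane", "straight", 1.0, "lane")
```

Each agent's Q-value for the rewarded action grows toward the discounted fixed point, so the greedy policy picks "straight" in that state.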
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
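The analytic-policy-gradient idea above, differentiating the return through the environment dynamics rather than estimating it from samples, can be illustrated on a one-dimensional point mass with a linear feedback policy. The dynamics, cost, and gains below are toy assumptions for the sketch, not the paper's differentiable simulator.

```python
def rollout_with_grad(k, x0=1.0, dt=0.1, T=50):
    """Roll out x_{t+1} = x_t + dt * u_t with policy u_t = -k * x_t,
    propagating dJ/dk through the dynamics (the APG idea).
    The closed-loop map is x_{t+1} = (1 - dt*k) * x_t."""
    a = 1.0 - dt * k      # closed-loop multiplier
    da_dk = -dt           # its sensitivity to the policy gain
    x, dx_dk = x0, 0.0    # state and d(state)/dk
    cost, dcost_dk = 0.0, 0.0
    for _ in range(T):
        cost += x * x                  # quadratic state cost
        dcost_dk += 2.0 * x * dx_dk    # chain rule through the cost
        dx_dk = da_dk * x + a * dx_dk  # product rule through the dynamics
        x = a * x
    return cost, dcost_dk

# Gradient descent on the policy gain using the analytic gradient.
k = 0.5
initial_cost, _ = rollout_with_grad(k)
for _ in range(100):
    _, grad = rollout_with_grad(k)
    k -= 0.05 * grad
final_cost, _ = rollout_with_grad(k)
```

Because the gradient flows through the dynamics exactly, it matches a finite-difference check and steadily reduces the rollout cost, which is the "grounded policy" effect the paper describes.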
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- Sim-to-Real Transfer of Deep Reinforcement Learning Agents for Online Coverage Path Planning [15.792914346054502]
We tackle the challenge of sim-to-real transfer of reinforcement learning (RL) agents for coverage path planning (CPP).
We bridge the sim-to-real gap through a semi-virtual environment, including a real robot and real-time aspects, while utilizing a simulated sensor and obstacles.
We find that a high inference frequency allows first-order Markovian policies to transfer directly from simulation, while higher-order policies can be fine-tuned to further reduce the sim-to-real gap.
arXiv Detail & Related papers (2024-06-07T13:24:19Z) - Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving
Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation, and directly deploy in the real-world.
arXiv Detail & Related papers (2022-10-25T17:50:36Z) - A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free
Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, enable a quadruped to learn to walk in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z) - Data generation using simulation technology to improve perception
mechanism of autonomous vehicles [0.0]
We will demonstrate the effectiveness of combining data gathered from the real world with data generated in the simulated world to train perception systems.
We will also propose a multi-level deep learning perception framework that aims to emulate a human learning experience.
arXiv Detail & Related papers (2022-07-01T03:42:33Z) - Towards Optimal Strategies for Training Self-Driving Perception Models
in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z) - Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement
Learning [13.699336307578488]
Our deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z) - DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z) - Learning to Simulate on Sparse Trajectory Data [26.718807213824853]
We present a novel framework ImInGAIL to address the problem of learning to simulate the driving behavior from sparse real-world data.
To the best of our knowledge, we are the first to tackle the data sparsity issue for behavior learning problems.
arXiv Detail & Related papers (2021-03-22T13:42:11Z) - TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z) - Robust Reinforcement Learning-based Autonomous Driving Agent for
Simulation and Real World [0.0]
We present a DRL-based algorithm that performs autonomous robot control using Deep Q-Networks (DQN).
In our approach, the agent is trained in a simulated environment and can navigate in both simulated and real-world environments.
The trained agent is able to run on limited hardware resources and its performance is comparable to state-of-the-art approaches.
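The DQN update this agent builds on can be sketched minimally with a linear Q-function and a frozen target copy standing in for the deep network; the dimensions, transition, and learning rate below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_action = 4, 3
W = rng.normal(scale=0.1, size=(n_action, n_state))  # online Q weights
W_target = W.copy()                                  # periodically-synced target copy

def q_values(weights, s):
    """Linear Q-function standing in for the deep network."""
    return weights @ s

def dqn_update(s, a, r, s_next, done, gamma=0.99, lr=0.05):
    """One TD step toward the DQN target y = r + gamma * max_a' Q_target(s', a')."""
    y = r if done else r + gamma * q_values(W_target, s_next).max()
    td_error = y - q_values(W, s)[a]
    W[a] += lr * td_error * s  # gradient of the linear Q w.r.t. row W[a]
    return td_error

# Repeated updates on one transition drive its TD error toward zero
# while the target network stays fixed, which is what stabilizes DQN.
s = np.array([1.0, 0.0, 0.0, 0.0])
s_next = np.array([0.0, 1.0, 0.0, 0.0])
errors = [abs(dqn_update(s, 0, 1.0, s_next, done=False)) for _ in range(200)]
```

Because the target is computed from the frozen copy, the online weights converge toward it geometrically instead of chasing a moving target.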
arXiv Detail & Related papers (2020-09-23T15:23:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.