DDPG car-following model with real-world human driving experience in CARLA
- URL: http://arxiv.org/abs/2112.14602v1
- Date: Wed, 29 Dec 2021 15:22:31 GMT
- Title: DDPG car-following model with real-world human driving experience in CARLA
- Authors: Dianzhao Li and Ostap Okhrin
- Abstract summary: We propose a two-stage Deep Reinforcement Learning (DRL) method that learns from real-world human driving to achieve performance superior to that of a pure DRL agent.
For evaluation, we designed different real-world driving scenarios to compare the proposed two-stage DRL agent with the pure DRL agent.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the autonomous driving field, the fusion of human knowledge into Deep
Reinforcement Learning (DRL) is often based on human demonstrations recorded in a
simulated environment, which limits the generalization and the feasibility of
application to real-world traffic. We propose a two-stage DRL method that learns
from real-world human driving to achieve performance superior to that of a pure
DRL agent. The DRL agent is trained within a framework that couples CARLA with
the Robot Operating System (ROS). For evaluation, we designed different
real-world driving scenarios to compare the proposed two-stage DRL agent with the
pure DRL agent. After extracting 'good' behavior from the human driver, such as
anticipation at a signalized intersection, the agent becomes more efficient and
drives more safely, which makes this autonomous agent better adapted to
Human-Robot Interaction (HRI) traffic.
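The two-stage structure can be made concrete with a small sketch: stage one behavior-clones an actor on human (state, action) pairs, and stage two refines it with DDPG-style actor-critic updates. Everything below is illustrative, not the authors' code: linear function approximation stands in for their neural networks, and the car-following state (gap, ego speed, relative speed), reward, and transition sampling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
GAP_TARGET = 30.0  # assumed desired following gap [m]

def sample_state(n=1):
    # (gap [m], ego speed [m/s], relative speed [m/s])
    return rng.normal([GAP_TARGET, 15.0, 0.0], [10.0, 3.0, 2.0], size=(n, 3))

# Stage 1: behavior-clone a linear actor a = s @ theta on 'human' driving data
# (synthetic here; the paper uses real-world recordings).
human_s = sample_state(1000)
human_a = 0.05 * (human_s[:, [0]] - GAP_TARGET) + 0.5 * human_s[:, [2]]
theta, *_ = np.linalg.lstsq(human_s, human_a, rcond=None)

# Stage 2: DDPG-style fine-tuning (single transitions; replay buffer and
# target networks omitted to keep the sketch short).
w = np.zeros(5)                        # linear critic Q(s, a) = [s, a, 1] @ w
alpha_c, alpha_a, gamma = 1e-3, 1e-4, 0.99

def q_feat(s, a):
    return np.concatenate([s.ravel(), a.ravel(), [1.0]])

for step in range(2000):
    s = sample_state()
    a = s @ theta + rng.normal(0.0, 0.1)              # exploration noise
    r = -abs(s[0, 0] - GAP_TARGET) - a.item() ** 2    # assumed gap/comfort reward
    s2 = sample_state()                               # stand-in for CARLA dynamics
    td = r + gamma * q_feat(s2, s2 @ theta) @ w - q_feat(s, a) @ w
    w += alpha_c * td * q_feat(s, a)                  # critic TD(0) step
    theta += alpha_a * w[3] * s.T                     # actor: dQ/da * da/dtheta
```

The warm start is the point of the two stages: stage-two exploration begins from human-plausible car-following behavior rather than from a random policy.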
Related papers
- DR-MPC: Deep Residual Model Predictive Control for Real-world Social Navigation [20.285659649785224]
Deep Residual Model Predictive Control (DR-MPC) is a method to enable robots to safely perform DRL from real-world crowd navigation data.
DR-MPC is initialized with MPC-based path tracking, and gradually learns to interact more effectively with humans.
In simulation, we show that DR-MPC substantially outperforms prior work, including traditional DRL and residual DRL models.
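The residual pattern this summary describes can be sketched in a few lines; the tracker gain, residual scale, and policy form below are illustrative assumptions, not DR-MPC's actual components.

```python
import numpy as np

def mpc_track(state, waypoint, k_p=0.8):
    # Stand-in proportional tracker for the MPC path-tracking component.
    return k_p * (waypoint - state)

def residual_policy(state, params):
    # Placeholder for the DRL residual; zero-initialized params mean early
    # behavior is dominated by the trusted tracker.
    return np.tanh(params @ state)

state, waypoint = np.array([0.0, 1.0]), np.array([1.0, 1.5])
params = np.zeros((2, 2))                       # untrained residual -> 0
action = mpc_track(state, waypoint) + 0.2 * residual_policy(state, params)
print(action)
```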
arXiv Detail & Related papers (2024-10-14T15:56:43Z)
- Optimizing Autonomous Driving for Safety: A Human-Centric Approach with LLM-Enhanced RLHF [2.499371729440073]
Reinforcement Learning from Human Feedback (RLHF) is popular in large language models (LLMs).
RLHF is usually applied in the fine-tuning step, requiring direct human "preferences".
We will validate our model using data gathered from real-life testbeds located in New Jersey and New York City.
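As a reference point for what direct human "preferences" mean mechanically, the sketch below fits a generic Bradley-Terry reward model from pairwise preferences; the linear features and synthetic pairs are assumptions, and this is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)                          # linear reward model r(x) = w @ x

# Synthetic preference pairs: humans preferred trajectory A over B.
pairs = [(rng.normal(1.0, 1.0, 4), rng.normal(0.0, 1.0, 4)) for _ in range(200)]

for _ in range(50):
    for preferred, rejected in pairs:
        # Bradley-Terry: P(A > B) = sigmoid(r(A) - r(B));
        # gradient ascent on the log-likelihood of the observed preference.
        p = 1.0 / (1.0 + np.exp(w @ rejected - w @ preferred))
        w += 1e-2 * (1.0 - p) * (preferred - rejected)
```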
arXiv Detail & Related papers (2024-06-06T20:10:34Z)
- In-context Learning for Automated Driving Scenarios [15.325910109153616]
One of the key challenges in current Reinforcement Learning (RL)-based Automated Driving (AD) agents is achieving flexible, precise, and human-like behavior cost-effectively.
This paper introduces an innovative approach utilizing Large Language Models (LLMs) to intuitively and effectively optimize RL reward functions in a human-centric way.
arXiv Detail & Related papers (2024-05-07T09:04:52Z)
- Human-compatible driving partners through data-regularized self-play reinforcement learning [3.9682126792844583]
Human-Regularized PPO (HR-PPO) is a multi-agent algorithm where agents are trained through self-play with a small penalty for deviating from a human reference policy.
Results show our HR-PPO agents are highly effective in achieving goals, with a success rate of 93%, an off-road rate of 3.5%, and a collision rate of 3%.
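The "small penalty for deviating from a human reference policy" can be written as a KL regularizer on the PPO objective; the discrete action set, weight, and numbers below are assumptions for illustration.

```python
import numpy as np

def kl(p, q):
    # KL divergence between two discrete action distributions.
    return float(np.sum(p * np.log(p / q)))

def hr_objective(ppo_surrogate, pi, pi_human, lam=0.1):
    # PPO surrogate minus a penalty for drifting from the human reference.
    return ppo_surrogate - lam * kl(pi, pi_human)

pi = np.array([0.7, 0.2, 0.1])        # current policy over 3 driving actions
pi_human = np.array([0.6, 0.3, 0.1])  # e.g. a behavior-cloned human policy
print(hr_objective(ppo_surrogate=1.0, pi=pi, pi_human=pi_human))
```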
arXiv Detail & Related papers (2024-03-28T17:56:56Z)
- HAIM-DRL: Enhanced Human-in-the-loop Reinforcement Learning for Safe and Efficient Autonomous Driving [2.807187711407621]
We propose an enhanced human-in-the-loop reinforcement learning method, termed the Human as AI mentor-based deep reinforcement learning (HAIM-DRL) framework.
We first introduce an innovative learning paradigm that effectively injects human intelligence into AI, termed Human as AI mentor (HAIM).
In this paradigm, the human expert serves as a mentor to the AI agent, while the agent is guided to minimize traffic flow disturbance.
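A schematic of the mentor mechanic, under assumed mechanics rather than the authors' exact scheme: when the human intervenes, the human action is executed and logged as a demonstration; otherwise the agent acts and the transition feeds ordinary DRL updates.

```python
import numpy as np

demo_buffer, rl_buffer = [], []       # demonstrations vs. autonomous experience

def step(state, agent_action, human_action=None):
    if human_action is not None:      # mentor takes over
        demo_buffer.append((state, human_action))
        return human_action
    rl_buffer.append((state, agent_action))
    return agent_action

state = np.zeros(4)
executed = step(state, agent_action=np.array([0.3]), human_action=np.array([-0.5]))
print(executed)                        # the mentor's action is what gets executed
```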
arXiv Detail & Related papers (2024-01-06T08:30:14Z)
- RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [51.244807332133696]
This paper introduces RACER, a cutting-edge deep learning car-following model that predicts Adaptive Cruise Control (ACC) driving behavior.
Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving.
RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
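One way to read "zero violations" is that predictions are constrained to remain physically and behaviorally rational; the sketch below projects a raw acceleration into an assumed feasible set (the bounds and one-step kinematics are illustrative, not RACER's actual constraints).

```python
import numpy as np

def project_action(accel, speed, gap, lead_speed, dt=0.1, a_min=-4.0, a_max=2.0):
    accel = float(np.clip(accel, a_min, a_max))   # comfort/actuator bounds
    accel = max(accel, -speed / dt)               # ego speed stays non-negative
    # Spacing after one step (lead speed held fixed over dt).
    next_gap = gap + (lead_speed - speed - 0.5 * accel * dt) * dt
    if next_gap <= 0.0:                           # refuse spacing violations
        accel = a_min                             # brake hard instead
    return accel

print(project_action(accel=3.5, speed=12.0, gap=8.0, lead_speed=10.0))
```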
arXiv Detail & Related papers (2023-12-12T06:21:30Z)
- Studying the Impact of Semi-Cooperative Drivers on Overall Highway Flow [76.38515853201116]
Semi-cooperative behaviors are intrinsic properties of human drivers and should be considered for autonomous driving.
New autonomous planners can consider the social value orientation (SVO) of human drivers to generate socially-compliant trajectories.
We present a study of implicit semi-cooperative driving, where agents deploy a game-theoretic version of iterative best response.
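Social value orientation has a standard scalar form that such planners optimize inside the best-response loop; the sketch below uses that common formulation (assumed here, since the summary does not spell it out).

```python
import numpy as np

def svo_utility(own_reward, others_reward, svo_angle):
    # svo_angle = 0 -> egoistic; pi/4 -> prosocial; pi/2 -> altruistic.
    return np.cos(svo_angle) * own_reward + np.sin(svo_angle) * others_reward

print(svo_utility(own_reward=1.0, others_reward=0.4, svo_angle=np.pi / 4))
```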
arXiv Detail & Related papers (2023-04-23T16:01:36Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, and over the course of training they approach the performance of a human driver using a similar first-person interface.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- DriverGym: Democratising Reinforcement Learning for Autonomous Driving [75.91049219123899]
We propose DriverGym, an open-source environment for developing reinforcement learning algorithms for autonomous driving.
DriverGym provides access to more than 1000 hours of expert logged data and also supports reactive and data-driven agent behavior.
The performance of an RL policy can be easily validated on real-world data using our extensive and flexible closed-loop evaluation protocol.
arXiv Detail & Related papers (2021-11-12T11:47:08Z)
- Distributed Reinforcement Learning for Cooperative Multi-Robot Object Manipulation [53.262360083572005]
We consider solving a cooperative multi-robot object manipulation task using reinforcement learning (RL).
We propose two distributed multi-agent RL approaches: distributed approximate RL (DA-RL) and game-theoretic RL (GT-RL).
Although we focus on a small system of two agents in this paper, both DA-RL and GT-RL apply to general multi-agent systems, and are expected to scale well to large systems.
arXiv Detail & Related papers (2020-03-21T00:43:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.