Bilateral Deep Reinforcement Learning Approach for Better-than-human Car Following Model
- URL: http://arxiv.org/abs/2203.04749v1
- Date: Thu, 3 Mar 2022 17:23:36 GMT
- Title: Bilateral Deep Reinforcement Learning Approach for Better-than-human Car Following Model
- Authors: Tianyu Shi, Yifei Ai, Omar ElSamadisy, Baher Abdulhai
- Abstract summary: Car following is a prime function in autonomous driving.
Recent literature shows that bilateral car following, which considers both the vehicle ahead and the vehicle behind, exhibits better system stability.
We propose a Deep Reinforcement Learning framework for car-following control that integrates bilateral information into both the state and the reward function.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the coming years and decades, autonomous vehicles (AVs) will become
increasingly prevalent, offering new opportunities for safer and more
convenient travel and potentially smarter traffic control methods exploiting
automation and connectivity. Car following is a prime function in autonomous
driving. Car following based on reinforcement learning has received attention
in recent years with the goal of learning and achieving performance levels
comparable to humans. However, most existing RL methods model car following as
a unilateral problem, sensing only the vehicle ahead. However, recent work by
Wang and Horn [16] has shown that bilateral car following, which considers both
the vehicle ahead and the vehicle behind, exhibits better system stability. In
this paper we hypothesize that bilateral car following can be learned using RL
while simultaneously optimizing other goals such as efficiency maximization,
jerk minimization, and safety, leading to a learned model that outperforms
human driving.
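For context, here is a minimal sketch of the bilateral control idea referenced above, in the spirit of Horn's BCM; the linear form and the gains k_d, k_v are illustrative assumptions, not necessarily the paper's exact formulation:

```latex
% Illustrative bilateral control law (sketch; form and gains are assumptions).
% Vehicle i drives behind vehicle i+1 and ahead of vehicle i-1.
a_i = k_d \big[ (x_{i+1} - x_i) - (x_i - x_{i-1}) \big]
    + k_v \big[ (v_{i+1} - v_i) - (v_i - v_{i-1}) \big]
```

The first term balances the front and rear gaps and the second balances the relative speeds; taking feedback from both neighbors is the mechanism credited with improved string stability.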
We propose a Deep Reinforcement Learning (DRL) framework for car-following
control that integrates bilateral information into both the state and the
reward function, based on the bilateral control model (BCM). Furthermore, we
use a decentralized multi-agent reinforcement learning framework to generate
the control action for each agent. Our simulation results demonstrate that the
learned policy outperforms the human driving policy in terms of (a)
inter-vehicle headway, (b) average speed, (c) jerk, (d) Time to Collision
(TTC), and (e) string stability.
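To make the bilateral state and reward integration concrete, the following is a minimal Python sketch of what one agent's observation and shaped reward could look like; the variable names, weights, and thresholds are illustrative assumptions, not the paper's exact definitions:

```python
# Sketch of a bilateral car-following state and reward for one DRL agent.
# All names, weights, and thresholds are assumptions for illustration.
import numpy as np

def bilateral_state(x_front, x_ego, x_back, v_front, v_ego, v_back):
    """Observation built from BOTH neighbors: gaps and relative speeds."""
    return np.array([
        x_front - x_ego,        # gap to the vehicle ahead
        x_ego - x_back,         # gap to the vehicle behind
        v_front - v_ego,        # relative speed of the leader
        v_back - v_ego,         # relative speed of the follower
        v_ego,                  # own speed
    ], dtype=np.float32)

def bilateral_reward(state, accel, prev_accel, dt=0.1,
                     w_speed=1.0, w_jerk=0.1, w_balance=0.5, w_safety=10.0):
    """Reward mixing efficiency, jerk, gap balance (the BCM idea), and TTC safety."""
    gap_front, gap_back, dv_front, dv_back, v_ego = state
    jerk = (accel - prev_accel) / dt
    r = w_speed * v_ego - w_jerk * jerk ** 2      # efficiency minus discomfort
    r -= w_balance * (gap_front - gap_back) ** 2  # penalize gap imbalance
    if dv_front < 0:                              # ego is closing on the leader
        ttc = gap_front / -dv_front               # time to collision
        if ttc < 4.0:                             # 4 s threshold, an assumption
            r -= w_safety * (4.0 - ttc)
    return float(r)

# Example: ego slightly faster than its leader, roughly balanced gaps.
s = bilateral_state(x_front=120.0, x_ego=100.0, x_back=82.0,
                    v_front=14.0, v_ego=15.0, v_back=15.5)
print(bilateral_reward(s, accel=0.3, prev_accel=0.1))
```

In the decentralized multi-agent setting described above, each vehicle would evaluate such a state and reward locally and output its own control action.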
Related papers
- SECRM-2D: RL-Based Efficient and Comfortable Route-Following Autonomous Driving with Analytic Safety Guarantees [5.156059061769101]
SECRM-2D is an RL-based autonomous driving controller that balances efficiency and comfort while following a fixed route.
We evaluate SECRM-2D against several learning and non-learning baselines in simulated test scenarios.
arXiv Detail & Related papers (2024-07-23T21:54:39Z)
- Adaptive Autopilot: Constrained DRL for Diverse Driving Behaviors [12.812518632907771]
This study introduces adaptive autopilot (AA), a unique framework utilizing constrained deep reinforcement learning (C-DRL).
AA aims to safely emulate human driving to reduce the necessity for driver intervention.
arXiv Detail & Related papers (2024-07-02T13:08:01Z)
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose MetaFollower, an adaptable personalized car-following framework.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various CF events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- Optimizing Autonomous Driving for Safety: A Human-Centric Approach with LLM-Enhanced RLHF [2.499371729440073]
Reinforcement Learning from Human Feedback (RLHF) is popular in large language models (LLMs).
RLHF is usually applied in the fine-tuning step, requiring direct human "preferences".
We will validate our model using data gathered from real-life testbeds located in New Jersey and New York City.
arXiv Detail & Related papers (2024-06-06T20:10:34Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
- Weakly Supervised Reinforcement Learning for Autonomous Highway Driving via Virtual Safety Cages [42.57240271305088]
We present a reinforcement learning based approach to autonomous vehicle longitudinal control, where the rule-based safety cages provide enhanced safety for the vehicle as well as weak supervision to the reinforcement learning agent.
We show that when the model parameters are constrained or sub-optimal, the safety cages can enable a model to learn a safe driving policy even when the model could not be trained to drive through reinforcement learning alone.
arXiv Detail & Related papers (2021-03-17T15:30:36Z)
- Deep Reinforcement Learning for Human-Like Driving Policies in Collision Avoidance Tasks of Self-Driving Cars [1.160208922584163]
We introduce a model-free, deep reinforcement learning approach to generate automated human-like driving policies.
We study a static obstacle avoidance task on a two-lane highway road in simulation.
We demonstrate that our approach leads to human-like driving policies.
arXiv Detail & Related papers (2020-06-07T18:20:33Z)