Scalable Decentralized Cooperative Platoon using Multi-Agent Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2312.06858v1
- Date: Mon, 11 Dec 2023 22:04:38 GMT
- Title: Scalable Decentralized Cooperative Platoon using Multi-Agent Deep
Reinforcement Learning
- Authors: Ahmed Abdelrahman, Omar M. Shehata, Yarah Basyoni, and Elsayed I.
Morgan
- Abstract summary: This paper introduces a vehicle platooning approach designed to enhance traffic flow and safety.
It is developed using deep reinforcement learning in the Unity 3D game engine.
The proposed platooning model focuses on scalability, decentralization, and fostering positive cooperation.
- Score: 2.5499055723658097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative autonomous driving plays a pivotal role in improving road
capacity and safety within intelligent transportation systems, particularly
through the deployment of autonomous vehicles on urban streets. By enabling
vehicle-to-vehicle communication, these systems expand the vehicles'
environmental awareness, allowing them to detect hidden obstacles and thereby
enhancing safety and reducing crash rates compared to human drivers who rely
solely on visual perception. A key application of this technology is vehicle
platooning, where connected vehicles drive in a coordinated formation. This
paper introduces a vehicle platooning approach designed to enhance traffic flow
and safety. Developed using deep reinforcement learning in the Unity 3D game
engine, known for its advanced physics, this approach aims for a high-fidelity
physical simulation that closely mirrors real-world conditions. The proposed
platooning model focuses on scalability, decentralization, and fostering
positive cooperation through the introduced predecessor-follower "sharing and
caring" communication framework. The study demonstrates how these elements
collectively enhance autonomous driving performance and robustness, both for
individual vehicles and for the platoon as a whole, in an urban setting. This
results in improved road safety and reduced traffic congestion.
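The predecessor-follower "sharing and caring" idea, in which each vehicle broadcasts its state only to the vehicle behind it so that control remains fully decentralized, can be illustrated with a toy kinematic sketch. This is not the paper's Unity/DRL implementation: the hand-tuned gains below merely stand in for a learned policy, and every name and constant is hypothetical.

```python
from dataclasses import dataclass

DT = 0.1           # simulation step [s]
DESIRED_GAP = 8.0  # target inter-vehicle spacing [m]

@dataclass
class VehicleState:
    position: float
    velocity: float

def shared_message(state: VehicleState) -> dict:
    """Predecessor broadcasts its state to its follower ('sharing')."""
    return {"position": state.position, "velocity": state.velocity}

def follower_accel(own: VehicleState, msg: dict) -> float:
    """Stand-in for a learned policy: act on spacing and speed errors."""
    gap_error = (msg["position"] - own.position) - DESIRED_GAP
    speed_error = msg["velocity"] - own.velocity
    return 0.5 * gap_error + 1.0 * speed_error  # hand-tuned gains

def step(platoon: list[VehicleState], leader_accel: float) -> None:
    """Advance one step; each follower uses only local + predecessor info."""
    accels = [leader_accel]
    for i in range(1, len(platoon)):
        accels.append(follower_accel(platoon[i], shared_message(platoon[i - 1])))
    for v, a in zip(platoon, accels):
        v.velocity += a * DT
        v.position += v.velocity * DT

# Four vehicles, lead vehicle first, starting 20 m apart at 10 m/s.
platoon = [VehicleState(20.0 * i, 10.0) for i in range(3, -1, -1)]
for _ in range(600):  # 60 s of simulated driving
    step(platoon, leader_accel=0.0)
gaps = [platoon[i].position - platoon[i + 1].position for i in range(3)]
print(gaps)  # gaps converge toward DESIRED_GAP
```

Because each controller consumes only its own state and one upstream message, adding a vehicle adds one link rather than rewiring the whole platoon, which is the scalability property the abstract emphasizes.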
Related papers
- Enhancing Safety for Autonomous Agents in Partly Concealed Urban Traffic Environments Through Representation-Based Shielding [2.9685635948300004]
We propose a novel state representation for Reinforcement Learning (RL) agents centered around the information perceivable by an autonomous agent.
Our findings pave the way for more robust and reliable autonomous navigation strategies.
arXiv Detail & Related papers (2024-07-05T08:34:49Z)
- Learning Driver Models for Automated Vehicles via Knowledge Sharing and Personalization [2.07180164747172]
This paper describes a framework for learning Automated Vehicles (AVs) driver models via knowledge sharing between vehicles and personalization.
It finds several applications across transportation engineering including intelligent transportation systems, traffic management, and vehicle-to-vehicle communication.
arXiv Detail & Related papers (2023-08-31T17:18:15Z)
- HumanLight: Incentivizing Ridesharing via Human-centric Deep Reinforcement Learning in Traffic Signal Control [3.402002554852499]
We present HumanLight, a novel decentralized adaptive traffic signal control algorithm.
Our proposed controller is founded on reinforcement learning with the reward function embedding the transportation-inspired concept of pressure at the person-level.
By rewarding HOV commuters with travel time savings for their efforts to merge into a single ride, HumanLight achieves equitable allocation of green times.
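The person-level twist on the transportation notion of "pressure" can be sketched in a few lines: counting occupants rather than vehicles makes a carpool weigh more than a solo car when allocating green time. This is an illustrative reading of the abstract, not HumanLight's actual reward code, and the occupant lists are hypothetical.

```python
def person_pressure(incoming_occupants: list[int], outgoing_occupants: list[int]) -> int:
    """Pressure of a traffic movement counted in people, not vehicles."""
    return sum(incoming_occupants) - sum(outgoing_occupants)

def vehicle_pressure(incoming: list[int], outgoing: list[int]) -> int:
    """Conventional pressure: count vehicles regardless of occupancy."""
    return len(incoming) - len(outgoing)

# One 3-person carpool plus two solo drivers inbound; two solo drivers outbound.
incoming = [3, 1, 1]  # occupants per inbound vehicle
outgoing = [1, 1]     # occupants per outbound vehicle
print(person_pressure(incoming, outgoing))   # 3
print(vehicle_pressure(incoming, outgoing))  # 1
```

A pressure-minimizing controller driven by the person-level count would prioritize the approach carrying the carpool more strongly than a vehicle-level count would, which is the incentive the abstract describes.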
arXiv Detail & Related papers (2023-04-05T17:42:30Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity expands the attack surface of ATSCs and increases their vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion on one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- Human-Vehicle Cooperative Visual Perception for Shared Autonomous Driving [9.537146822132904]
This paper proposes a human-vehicle cooperative visual perception method to enhance the visual perception ability of shared autonomous driving.
Based on transfer learning, the mAP of object detection reaches 75.52% and lays a solid foundation for visual fusion.
This study pioneers a cooperative visual perception solution for shared autonomous driving and experiments in real-world complex traffic conflict scenarios.
arXiv Detail & Related papers (2021-12-17T03:17:05Z)
- Efficient Connected and Automated Driving System with Multi-agent Graph Reinforcement Learning [22.369111982782634]
Connected and automated vehicles (CAVs) have attracted increasing attention in recent years.
We focus on improving the outcomes of the overall transportation system by allowing each automated vehicle to learn to cooperate with the others.
arXiv Detail & Related papers (2020-07-06T14:55:48Z)
- Intelligent Roundabout Insertion using Deep Reinforcement Learning [68.8204255655161]
We present a maneuver planning module able to negotiate entry into busy roundabouts.
The proposed module is based on a neural network trained to predict when and how to enter the roundabout throughout the whole duration of the maneuver.
arXiv Detail & Related papers (2020-01-03T11:16:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.