A Multi-Agent Deep Reinforcement Learning Coordination Framework for
Connected and Automated Vehicles at Merging Roadways
- URL: http://arxiv.org/abs/2109.11672v1
- Date: Thu, 23 Sep 2021 22:26:52 GMT
- Title: A Multi-Agent Deep Reinforcement Learning Coordination Framework for
Connected and Automated Vehicles at Merging Roadways
- Authors: Sai Krishna Sumanth Nakka, Behdad Chalaki, Andreas Malikopoulos
- Abstract summary: Connected and automated vehicles (CAVs) have the potential to address congestion, accidents, energy consumption, and greenhouse gas emissions.
We propose a framework for coordinating CAVs such that stop-and-go driving is eliminated.
We demonstrate the coordination of CAVs through numerical simulations and show that a smooth traffic flow is achieved by eliminating stop-and-go driving.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The steady increase in the number of vehicles operating on the highways
continues to exacerbate congestion, accidents, energy consumption, and
greenhouse gas emissions. Emerging mobility systems, e.g., connected and
automated vehicles (CAVs), have the potential to directly address these issues
and improve transportation network efficiency and safety. In this paper, we
consider a highway merging scenario and propose a framework for coordinating
CAVs such that stop-and-go driving is eliminated. We use a decentralized form
of the actor-critic approach to deep reinforcement learning: multi-agent deep
deterministic policy gradient. We demonstrate the coordination of CAVs through
numerical simulations and show that a smooth traffic flow is achieved by
eliminating stop-and-go driving. Videos and plots of the simulation results can
be found at the supplemental site: https://sites.google.com/view/ud-ids-lab/MADRL.
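To illustrate the decentralized-actor, centralized-critic structure of multi-agent deep deterministic policy gradient (MADDPG) referenced in the abstract, the sketch below shows a minimal PyTorch setup. The number of CAVs, the observation and action dimensions, the network sizes, and the single gradient step on random data are illustrative assumptions, not the authors' implementation.
```python
# Minimal MADDPG sketch: each CAV has its own actor (local observation only),
# while each critic conditions on all agents' observations and actions.
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 4, 10, 1  # assumed: 4 CAVs, 1-D control action

class Actor(nn.Module):
    """Decentralized policy: maps one CAV's local observation to an action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM), nn.Tanh())  # bounded control command
    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Centralized Q-function: sees all agents' observations and actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_AGENTS * (OBS_DIM + ACT_DIM), 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critics = [Critic() for _ in range(N_AGENTS)]
actor_opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]

# One illustrative deterministic-policy-gradient step for agent 0 on dummy data.
batch = 32
obs = torch.randn(batch, N_AGENTS, OBS_DIM)      # placeholder observations
acts = torch.stack(
    [actors[i](obs[:, i]) if i == 0 else actors[i](obs[:, i]).detach()
     for i in range(N_AGENTS)], dim=1)           # other agents' actions held fixed
q = critics[0](obs.flatten(1), acts.flatten(1))  # centralized critic of agent 0
actor_loss = -q.mean()                           # ascend the expected Q-value
actor_opts[0].zero_grad()
actor_loss.backward()
actor_opts[0].step()
```
In the full MADDPG algorithm, the critics are additionally trained with temporal-difference targets from target networks and a replay buffer of transitions; at execution time each CAV needs only its own actor and local observation, which is what makes the resulting coordination policy decentralized.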
Related papers
- CAV-AHDV-CAV: Mitigating Traffic Oscillations for CAVs through a Novel Car-Following Structure and Reinforcement Learning [8.63981338420553]
Connected and Automated Vehicles (CAVs) offer a promising solution to the challenges of mixed traffic with both CAVs and Human-Driven Vehicles (HDVs).
While HDVs rely on limited information, CAVs can leverage data from other CAVs for better decision-making.
We propose a novel "CAV-AHDV-CAV" car-following framework that treats the sequence of HDVs between two CAVs as a single entity.
arXiv Detail & Related papers (2024-06-23T15:38:29Z)
- Agent-Agnostic Centralized Training for Decentralized Multi-Agent Cooperative Driving [17.659812774579756]
We propose an asymmetric actor-critic model that learns decentralized cooperative driving policies for autonomous vehicles.
By employing attention neural networks with masking, our approach efficiently manages real-world traffic dynamics and partial observability.
arXiv Detail & Related papers (2024-03-18T16:13:02Z)
- CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training them on safety-critical scenarios.
CAT can effectively generate adversarial scenarios that counter the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z)
- Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSCs) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity expands the cyber-attack surface of ATSCs and increases their vulnerability to various cyber-attack modes.
One such mode is a 'Sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of Sybil-vehicle injection that creates congestion on one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Real-time Cooperative Vehicle Coordination at Unsignalized Road Intersections [7.860567520771493]
Cooperative coordination at unsignalized road intersections aims to improve driving safety and traffic throughput for connected and automated vehicles.
We introduce a model-free Markov Decision Process (MDP) formulation and tackle it with a Twin Delayed Deep Deterministic Policy Gradient (TD3)-based strategy in the deep reinforcement learning framework.
We show that the proposed strategy achieves near-optimal performance in sub-static coordination scenarios and significantly improves control performance under realistic continuous traffic flow.
arXiv Detail & Related papers (2022-05-03T02:56:02Z)
- CoTV: Cooperative Control for Traffic Light Signals and Connected Autonomous Vehicles using Deep Reinforcement Learning [6.54928728791438]
This paper presents a multi-agent deep reinforcement learning (DRL) system called CoTV.
CoTV cooperatively controls both traffic light signals and connected autonomous vehicles (CAVs).
At the same time, CoTV remains easy to deploy, as it cooperates with only the single CAV nearest to the traffic light controller on each incoming road.
arXiv Detail & Related papers (2022-01-31T11:40:13Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Leveraging the Capabilities of Connected and Autonomous Vehicles and Multi-Agent Reinforcement Learning to Mitigate Highway Bottleneck Congestion [2.0010674945048468]
We present an RL-based multi-agent CAV control model to operate in mixed traffic.
The results suggest that even with a CAV share of corridor traffic as low as 10%, CAVs can significantly mitigate bottlenecks in highway traffic.
arXiv Detail & Related papers (2020-10-12T03:52:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.