A DRL-based Multiagent Cooperative Control Framework for CAV Networks: a
Graphic Convolution Q Network
- URL: http://arxiv.org/abs/2010.05437v1
- Date: Mon, 12 Oct 2020 03:53:58 GMT
- Title: A DRL-based Multiagent Cooperative Control Framework for CAV Networks: a
Graphic Convolution Q Network
- Authors: Jiqian Dong, Sikai Chen, Paul Young Joun Ha, Yujie Li, Samuel Labi
- Abstract summary: A Connected Autonomous Vehicle (CAV) network can be defined as a collection of CAVs operating at different locations on a multilane corridor.
In this paper, a novel Deep Reinforcement Learning (DRL) based approach combining a Graphic Convolution Neural Network (GCN) and a Deep Q Network (DQN) is proposed as the information fusion module and decision processor.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Connected Autonomous Vehicle (CAV) network can be defined as a
collection of CAVs operating at different locations on a multilane corridor,
which provides a platform to facilitate the dissemination of operational
information as well as control instructions. Cooperation is crucial in CAV
operating systems since it can greatly enhance operation in terms of safety and
mobility, and high-level cooperation between CAVs can be achieved by jointly
planning and controlling within the CAV network. However, due to the highly
dynamic and combinatorial nature of multiagent driving tasks, such as a varying
number of agents (CAVs) and an exponentially growing joint action space,
achieving cooperative control is NP-hard and cannot be governed by simple
rule-based methods. In addition, the existing literature contains abundant
information on autonomous driving's sensing technology and control logic but
offers relatively little guidance on how to fuse the information acquired from
collaborative sensing and build a decision processor on top of the fused
information. In this paper, a novel Deep Reinforcement Learning (DRL) based
approach combining a Graphic Convolution Neural Network (GCN) and a Deep Q
Network (DQN), namely the Graphic Convolution Q network (GCQ), is proposed as
the information fusion module and decision processor. The proposed model can
aggregate the information acquired from collaborative sensing and output safe
and cooperative lane-changing decisions for multiple CAVs, so that individual
intentions can be satisfied even in highly dynamic and partially observed mixed
traffic. The proposed algorithm can be deployed on centralized control
infrastructure such as road-side units (RSUs) or cloud platforms to improve CAV
operations.
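The fusion-then-decision idea can be illustrated with a minimal numpy sketch: one graph-convolution layer mixes each CAV's state with its neighbors' (so a variable number of agents is handled naturally by the adjacency matrix), and a shared Q-head then scores the lane-change actions per agent. The layer sizes, adjacency, single-layer depth, and action set below are illustrative assumptions, not the paper's exact GCQ architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adjacency(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2},
    # the standard GCN propagation rule.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcq_forward(X, A, W_gcn, W_q):
    # One graph-convolution layer fuses each CAV's features with its
    # neighbors' (information fusion), then a shared linear Q-head scores
    # the candidate lane-change actions for every agent (decision processor).
    H = np.maximum(normalize_adjacency(A) @ X @ W_gcn, 0.0)  # ReLU
    return H @ W_q  # shape: (num_agents, num_actions)

# Toy instance: 4 CAVs in a chain communication topology, 5-dim state each,
# 3 actions (change left, keep lane, change right) -- all hypothetical sizes.
X = rng.standard_normal((4, 5))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W_gcn = rng.standard_normal((5, 8))
W_q = rng.standard_normal((8, 3))

Q = gcq_forward(X, A, W_gcn, W_q)
actions = Q.argmax(axis=1)  # greedy joint lane-change decision
print(Q.shape, actions.shape)
```

Because the weights are shared across nodes, the same forward pass works unchanged when CAVs enter or leave the network; only `X` and `A` change shape, which is the property that makes a graph-based Q-network attractive for a dynamic number of agents.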
Related papers
- Cooperative Cognitive Dynamic System in UAV Swarms: Reconfigurable Mechanism and Framework [80.39138462246034]
We propose the cooperative cognitive dynamic system (CCDS) to optimize the management for UAV swarms.
CCDS is a hierarchical and cooperative control structure that enables real-time data processing and decision-making.
In addition, CCDS can be integrated with the biomimetic mechanism to efficiently allocate tasks for UAV swarms.
arXiv Detail & Related papers (2024-05-18T12:45:00Z) - Convergence of Communications, Control, and Machine Learning for Secure
and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z) - A Novel Multi-Agent Deep RL Approach for Traffic Signal Control [13.927155702352131]
We propose a Friend-Deep Q-network (Friend-DQN) approach for multiple traffic signal control in urban networks.
In particular, the cooperation between multiple agents can reduce the state-action space and thus speed up the convergence.
arXiv Detail & Related papers (2023-06-05T08:20:37Z) - Distributed-Training-and-Execution Multi-Agent Reinforcement Learning
for Power Control in HetNet [48.96004919910818]
We propose a multi-agent deep reinforcement learning (MADRL) based power control scheme for the HetNet.
To promote cooperation among agents, we develop a penalty-based Q learning (PQL) algorithm for MADRL systems.
In this way, an agent's policy can be learned by other agents more easily, resulting in a more efficient collaboration process.
arXiv Detail & Related papers (2022-12-15T17:01:56Z) - Spatial-Temporal-Aware Safe Multi-Agent Reinforcement Learning of
Connected Autonomous Vehicles in Challenging Scenarios [10.37986799561165]
Communication technologies enable coordination among connected and autonomous vehicles (CAVs).
We propose a framework of constrained multi-agent reinforcement learning (MARL) with a parallel safety shield for CAVs.
Results show that our proposed methodology significantly increases system safety and efficiency in challenging scenarios.
arXiv Detail & Related papers (2022-10-05T14:39:07Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
The federated learning empowered connected autonomous vehicle (FLCAV) framework has been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT
Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement
Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z) - Scalable Perception-Action-Communication Loops with Convolutional and
Graph Neural Networks [208.15591625749272]
We present a perception-action-communication loop design using Vision-based Graph Aggregation and Inference (VGAI).
Our framework is implemented by a cascade of a convolutional and a graph neural network (CNN / GNN), addressing agent-level visual perception and feature learning.
We demonstrate that VGAI yields performance comparable to or better than other decentralized controllers.
arXiv Detail & Related papers (2021-06-24T23:57:21Z) - Leveraging the Capabilities of Connected and Autonomous Vehicles and
Multi-Agent Reinforcement Learning to Mitigate Highway Bottleneck Congestion [2.0010674945048468]
We present an RL-based multi-agent CAV control model to operate in mixed traffic.
The results suggest that even at a CAV share of corridor traffic as low as 10%, CAVs can significantly mitigate bottlenecks in highway traffic.
arXiv Detail & Related papers (2020-10-12T03:52:10Z) - Facilitating Connected Autonomous Vehicle Operations Using
Space-weighted Information Fusion and Deep Reinforcement Learning Based
Control [6.463332275753283]
This paper describes a Deep Reinforcement Learning based approach that integrates the data collected through sensing and connectivity capabilities from other vehicles.
It is expected that implementation of the algorithm in CAVs can enhance the safety and mobility associated with CAV driving operations.
arXiv Detail & Related papers (2020-09-30T13:38:32Z) - A Multi-Agent Reinforcement Learning Approach For Safe and Efficient
Behavior Planning Of Connected Autonomous Vehicles [21.132777568170702]
We design an information-sharing-based reinforcement learning framework for connected autonomous vehicles.
We show that our approach can improve the CAV system's efficiency in terms of average velocity and comfort.
We construct an obstacle-at-corner scenario to show that the shared vision can help CAVs to observe obstacles earlier and take action to avoid traffic jams.
arXiv Detail & Related papers (2020-03-09T19:15:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.