A DRL-based Multiagent Cooperative Control Framework for CAV Networks: a
Graphic Convolution Q Network
- URL: http://arxiv.org/abs/2010.05437v1
- Date: Mon, 12 Oct 2020 03:53:58 GMT
- Title: A DRL-based Multiagent Cooperative Control Framework for CAV Networks: a
Graphic Convolution Q Network
- Authors: Jiqian Dong, Sikai Chen, Paul Young Joun Ha, Yujie Li, Samuel Labi
- Abstract summary: A Connected Autonomous Vehicle (CAV) network can be defined as a collection of CAVs operating at different locations on a multilane corridor.
In this paper, a novel Deep Reinforcement Learning (DRL) based approach combining a Graphic Convolution Neural Network (GCN) and a Deep Q Network (DQN) is proposed as the information fusion module and decision processor.
- Score: 2.146837165387593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Connected Autonomous Vehicle (CAV) network can be defined as a
collection of CAVs operating at different locations on a multilane corridor,
which provides a platform to facilitate the dissemination of operational
information as well as control instructions. Cooperation is crucial in CAV
operating systems since it can greatly enhance operation in terms of safety and
mobility, and high-level cooperation between CAVs can be achieved through joint
planning and control within the CAV network. However, due to the highly dynamic
and combinatorial nature of a multiagent driving task, such as a dynamic number
of agents (CAVs) and an exponentially growing joint action space, achieving
cooperative control is NP-hard and cannot be governed by any simple rule-based
method. In addition, the existing literature contains abundant information on
autonomous driving's sensing technology and control logic but relatively little
guidance on how to fuse the information acquired from collaborative sensing and
build a decision processor on top of the fused information. In this paper, a
novel Deep Reinforcement Learning (DRL) based approach combining a Graphic
Convolution Neural Network (GCN) and a Deep Q Network (DQN), namely the Graphic
Convolution Q network (GCQ), is proposed as the information fusion module and
decision processor. The proposed model can aggregate the information acquired
from collaborative sensing and output safe and cooperative lane-changing
decisions for multiple CAVs so that individual intentions can be satisfied even
in highly dynamic and partially observed mixed traffic. The proposed algorithm
can be deployed on centralized control infrastructure such as road-side units
(RSUs) or cloud platforms to improve CAV operation.
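The abstract pairs a graph convolution layer (fusing collaboratively sensed vehicle states over the CAV network's connectivity graph) with a Q head that scores lane-change actions per agent. The minimal numpy sketch below illustrates that two-stage forward pass under stated assumptions: the layer sizes, weight values, and the three-action space (left / keep / right) are hypothetical placeholders, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcq_forward(adj, feats, w_gcn, w_q):
    """Sketch of a GCQ-style forward pass: one GCN layer, then a Q head.

    adj   : (n, n) 0/1 adjacency of the CAV communication graph
    feats : (n, f) per-CAV observed state (positions, speeds, ...)
    """
    # Add self-loops and symmetrically normalize: A_hat = D^-1/2 (A+I) D^-1/2
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_hat = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Information fusion: each CAV aggregates its neighbors' sensed states
    h = np.maximum(a_hat @ feats @ w_gcn, 0.0)  # ReLU
    # Decision processor: per-CAV Q-values over the lane-change actions
    return h @ w_q

n_cav, n_feat, n_hidden, n_act = 4, 6, 8, 3  # 3 actions: left / keep / right
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)   # chain of 4 CAVs on a corridor
feats = rng.normal(size=(n_cav, n_feat))
w_gcn = rng.normal(size=(n_feat, n_hidden))   # untrained, illustrative weights
w_q = rng.normal(size=(n_hidden, n_act))

q = gcq_forward(adj, feats, w_gcn, w_q)
actions = q.argmax(axis=1)  # one lane-change decision per CAV
print(q.shape, actions)
```

Because every CAV's Q-values are computed jointly from the shared graph, the action set scales linearly with the number of agents rather than over the exponential joint action space, which is the motivation the abstract gives for the GCN+DQN combination.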
Related papers
- Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications, such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
arXiv Detail & Related papers (2024-10-15T06:28:21Z) - Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun to open road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - Cooperative Cognitive Dynamic System in UAV Swarms: Reconfigurable Mechanism and Framework [80.39138462246034]
We propose the cooperative cognitive dynamic system (CCDS) to optimize the management for UAV swarms.
CCDS is a hierarchical and cooperative control structure that enables real-time data processing and decision-making.
In addition, CCDS can be integrated with the biomimetic mechanism to efficiently allocate tasks for UAV swarms.
arXiv Detail & Related papers (2024-05-18T12:45:00Z) - Convergence of Communications, Control, and Machine Learning for Secure
and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z) - A Novel Multi-Agent Deep RL Approach for Traffic Signal Control [13.927155702352131]
We propose a Friend-Deep Q-network (Friend-DQN) approach for multiple traffic signal control in urban networks.
In particular, the cooperation between multiple agents can reduce the state-action space and thus speed up the convergence.
arXiv Detail & Related papers (2023-06-05T08:20:37Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
A federated learning-empowered connected autonomous vehicle (FLCAV) framework has been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT
Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement
Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z) - Leveraging the Capabilities of Connected and Autonomous Vehicles and
Multi-Agent Reinforcement Learning to Mitigate Highway Bottleneck Congestion [2.0010674945048468]
We present an RL-based multi-agent CAV control model to operate in mixed traffic.
The results suggest that even at a CAV share of corridor traffic as low as 10%, CAVs can significantly mitigate bottlenecks in highway traffic.
arXiv Detail & Related papers (2020-10-12T03:52:10Z) - Facilitating Connected Autonomous Vehicle Operations Using
Space-weighted Information Fusion and Deep Reinforcement Learning Based
Control [6.463332275753283]
This paper describes a Deep Reinforcement Learning based approach that integrates the data collected through sensing and connectivity capabilities from other vehicles.
It is expected that implementation of the algorithm in CAVs can enhance the safety and mobility associated with CAV driving operations.
arXiv Detail & Related papers (2020-09-30T13:38:32Z) - A Multi-Agent Reinforcement Learning Approach For Safe and Efficient
Behavior Planning Of Connected Autonomous Vehicles [21.132777568170702]
We design an information-sharing-based reinforcement learning framework for connected autonomous vehicles.
We show that our approach can improve the CAV system's efficiency in terms of average velocity and comfort.
We construct an obstacle-at-corner scenario to show that the shared vision can help CAVs to observe obstacles earlier and take action to avoid traffic jams.
arXiv Detail & Related papers (2020-03-09T19:15:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.