Leveraging the Capabilities of Connected and Autonomous Vehicles and
Multi-Agent Reinforcement Learning to Mitigate Highway Bottleneck Congestion
- URL: http://arxiv.org/abs/2010.05436v1
- Date: Mon, 12 Oct 2020 03:52:10 GMT
- Title: Leveraging the Capabilities of Connected and Autonomous Vehicles and
Multi-Agent Reinforcement Learning to Mitigate Highway Bottleneck Congestion
- Authors: Paul Young Joun Ha, Sikai Chen, Jiqian Dong, Runjia Du, Yujie Li,
Samuel Labi
- Abstract summary: We present an RL-based multi-agent CAV control model to operate in mixed traffic.
The results suggest that even at CAV percent share of corridor traffic as low as 10%, CAVs can significantly mitigate bottlenecks in highway traffic.
- Score: 2.0010674945048468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active Traffic Management strategies are often adopted in real time
to address sudden flow breakdowns. When queuing is imminent, Speed
Harmonization (SH), which adjusts speeds in upstream traffic to mitigate
traffic shockwaves downstream, can be applied. However, because SH depends on
driver awareness and compliance, it may not always be effective in mitigating
congestion. The use of multi-agent reinforcement learning for collaborative
learning is a promising solution to this challenge. By incorporating this
technique in the control algorithms of connected and autonomous vehicles (CAVs),
it may be possible to train the CAVs to make joint decisions that can mitigate
highway bottleneck congestion without human driver compliance to altered speed
limits. In this regard, we present an RL-based multi-agent CAV control model to
operate in mixed traffic (both CAVs and human-driven vehicles (HDVs)). The
results suggest that even at CAV percent share of corridor traffic as low as
10%, CAVs can significantly mitigate bottlenecks in highway traffic. Another
objective was to assess the efficacy of the RL-based controller vis-à-vis
that of the rule-based controller. In addressing this objective, we duly
recognize that one of the main challenges of RL-based CAV controllers is the
variety and complexity of inputs that exist in the real world, such as the
information provided to the CAV by other connected entities and sensed
information. These translate into dynamic-length inputs that are difficult to
process and learn from. For this reason, we propose the use of Graph
Convolutional Networks (GCNs), a neural network architecture, to preserve the
information network topology and its corresponding dynamic-length inputs. We then use this,
combined with Deep Deterministic Policy Gradient (DDPG), to carry out
multi-agent training for congestion mitigation using the CAV controllers.
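The GCN-plus-DDPG pipeline described in the abstract can be sketched in miniature: a graph-convolution step aggregates each CAV's features with those of its connected neighbors (preserving network topology regardless of how many neighbors each vehicle has), and a deterministic, DDPG-style actor head maps the resulting embeddings to bounded accelerations. This is an illustrative sketch, not the authors' implementation; the feature layout, network sizes, chain adjacency, and weight initialization are invented for illustration, and the DDPG critic and training loop are omitted.

```python
import numpy as np

def gcn_layer(node_feats, adj, weight):
    """One graph-convolution step: ReLU(D^{-1/2}(A+I)D^{-1/2} X W)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ node_feats @ weight, 0.0)

def actor_head(embeddings, w_out, max_accel=3.0):
    """Deterministic (DDPG-style) policy head: bounded acceleration per CAV."""
    return max_accel * np.tanh(embeddings @ w_out)

rng = np.random.default_rng(0)
n_vehicles, feat_dim, hidden = 5, 4, 8
feats = rng.normal(size=(n_vehicles, feat_dim))   # e.g. speed, gap, lane, position
adj = np.zeros((n_vehicles, n_vehicles))
for i in range(n_vehicles - 1):                   # hypothetical chain of V2V links
    adj[i, i + 1] = adj[i + 1, i] = 1.0

w1 = 0.1 * rng.normal(size=(feat_dim, hidden))
w_out = 0.1 * rng.normal(size=(hidden, 1))

h = gcn_layer(feats, adj, w1)                     # topology-aware embeddings
actions = actor_head(h, w_out)                    # one acceleration per CAV
print(actions.shape)                              # -> (5, 1)
```

Because the same weights are shared across nodes, the sketch handles any number of connected vehicles without changing the network, which is the property the abstract highlights for dynamic-length inputs.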
Related papers
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- CAV-AHDV-CAV: Mitigating Traffic Oscillations for CAVs through a Novel Car-Following Structure and Reinforcement Learning [8.63981338420553]
Connected and Automated Vehicles (CAVs) offer a promising solution to the challenges of mixed traffic with both CAVs and Human-Driven Vehicles (HDVs).
While HDVs rely on limited information, CAVs can leverage data from other CAVs for better decision-making.
We propose a novel "CAV-AHDV-CAV" car-following framework that treats the sequence of HDVs between two CAVs as a single entity.
arXiv Detail & Related papers (2024-06-23T15:38:29Z)
- Model-free Learning of Corridor Clearance: A Near-term Deployment Perspective [5.39179984304986]
An emerging public health application of connected and automated vehicle (CAV) technologies is to reduce response times of emergency medical service (EMS) by indirectly coordinating traffic.
Existing research on this topic often overlooks the impact of EMS vehicle disruptions on regular traffic, assumes 100% CAV penetration, relies on real-time traffic signal timing data and queue lengths at intersections, and makes various assumptions about traffic settings when deriving optimal model-based CAV control strategies.
To overcome these challenges and enhance real-world applicability in the near term, we propose a model-free approach employing deep reinforcement learning (DRL) for designing CAV control strategies.
arXiv Detail & Related papers (2023-12-16T06:08:53Z)
- Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity expands the ATSC's cyber-attack surface and increases its vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion at one or more intersection approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- Implementing Reinforcement Learning Datacenter Congestion Control in NVIDIA NICs [64.26714148634228]
Congestion control (CC) algorithms have become extremely difficult to design.
It is currently not possible to deploy AI models on network devices due to their limited computational capabilities.
We build a computationally-light solution based on a recent reinforcement learning CC algorithm.
arXiv Detail & Related papers (2022-07-05T20:42:24Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- A Multi-Agent Deep Reinforcement Learning Coordination Framework for Connected and Automated Vehicles at Merging Roadways [0.0]
Connected and automated vehicles (CAVs) have the potential to address congestion, accidents, energy consumption, and greenhouse gas emissions.
We propose a framework for coordinating CAVs such that stop-and-go driving is eliminated.
We demonstrate the coordination of CAVs through numerical simulations and show that a smooth traffic flow is achieved by eliminating stop-and-go driving.
arXiv Detail & Related papers (2021-09-23T22:26:52Z)
- Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL [63.52264764099532]
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting.
We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved.
arXiv Detail & Related papers (2020-10-30T22:06:05Z)
- A DRL-based Multiagent Cooperative Control Framework for CAV Networks: a Graphic Convolution Q Network [2.146837165387593]
Connected Autonomous Vehicle (CAV) Network can be defined as a collection of CAVs operating at different locations on a multilane corridor.
In this paper, a novel Deep Reinforcement Learning (DRL) based approach combining Graphic Convolution Neural Network (GCN) and Deep Q Network (DQN) is proposed as the information fusion module and decision processor.
arXiv Detail & Related papers (2020-10-12T03:53:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.