Decentralized Cooperative Lane Changing at Freeway Weaving Areas Using
Multi-Agent Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2110.08124v1
- Date: Tue, 5 Oct 2021 18:29:13 GMT
- Title: Decentralized Cooperative Lane Changing at Freeway Weaving Areas Using
Multi-Agent Deep Reinforcement Learning
- Authors: Yi Hou, Peter Graf
- Abstract summary: Frequent lane changes during congestion at freeway bottlenecks such as merge and weaving areas further reduce roadway capacity.
The emergence of deep reinforcement learning (RL) and connected and automated vehicle technology provides a possible solution to improve mobility and energy efficiency at freeway bottlenecks through cooperative lane changing.
In this study, a decentralized cooperative lane-changing controller was developed using a multi-agent deep RL paradigm.
The results of this study show that cooperative lane changing enabled by multi-agent deep RL yields superior performance to human drivers in terms of traffic throughput, vehicle speed, number of stops per vehicle, vehicle fuel efficiency, and emissions.
- Score: 1.6752182911522522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Frequent lane changes during congestion at freeway bottlenecks such as merge
and weaving areas further reduce roadway capacity. The emergence of deep
reinforcement learning (RL) and connected and automated vehicle technology
provides a possible solution to improve mobility and energy efficiency at
freeway bottlenecks through cooperative lane changing. Deep RL is a collection
of machine-learning methods that enables an agent to improve its performance by
learning from the environment. In this study, a decentralized cooperative
lane-changing controller was developed using proximal policy optimization by
adopting a multi-agent deep RL paradigm. In the decentralized control strategy,
policy learning and action reward are evaluated locally, with each agent
(vehicle) getting access to global state information. Multi-agent deep RL
requires lower computational resources and is more scalable than single-agent
deep RL, making it a powerful tool for time-sensitive applications such as
cooperative lane changing. The results of this study show that cooperative lane
changing enabled by multi-agent deep RL yields superior performance to human
drivers in terms of traffic throughput, vehicle speed, number of stops per
vehicle, vehicle fuel efficiency, and emissions. The trained RL policy is
transferable and can be generalized to uncongested, moderately congested, and
extremely congested traffic conditions.
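The decentralized strategy described above can be illustrated with a minimal sketch: every vehicle runs a copy of the same trained policy on its own local observation, with no run-time central coordinator. The state features, action set, and linear weights below are illustrative assumptions standing in for the paper's actual PPO-trained network, not its real design.

```python
# Hypothetical sketch of decentralized cooperative lane changing:
# each agent (vehicle) evaluates the SHARED policy locally, so adding
# vehicles adds copies of the same controller, not new parameters.
import math
import random
from dataclasses import dataclass

@dataclass
class LocalObservation:
    gap_ahead: float        # meters to the leader in the current lane (assumed feature)
    gap_target: float       # meters available in the adjacent lane (assumed feature)
    speed_advantage: float  # m/s gain expected from changing lanes (assumed feature)

ACTIONS = ("keep_lane", "change_lane")

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

class SharedPolicy:
    """Tiny linear stand-in for the PPO-trained network: one row of
    (w_gap_ahead, w_gap_target, w_speed_adv, bias) per action."""
    def __init__(self, weights):
        self.weights = weights  # made-up coefficients, for illustration only

    def action_probs(self, obs):
        feats = (obs.gap_ahead, obs.gap_target, obs.speed_advantage, 1.0)
        logits = [sum(w * f for w, f in zip(row, feats)) for row in self.weights]
        return softmax(logits)

    def act(self, obs, rng):
        return rng.choices(ACTIONS, weights=self.action_probs(obs), k=1)[0]

def step_all(vehicles, policy, rng):
    # Decentralized execution: every vehicle decides from its own
    # observation; no global state is exchanged at this step.
    return {vid: policy.act(obs, rng) for vid, obs in vehicles.items()}

if __name__ == "__main__":
    # Weights chosen so a large target-lane gap plus a speed advantage
    # favors "change_lane"; a tight target-lane gap favors "keep_lane".
    policy = SharedPolicy(weights=[
        (0.05, -0.02, -0.5, 0.0),   # keep_lane
        (-0.02, 0.05, 0.5, -1.0),   # change_lane
    ])
    rng = random.Random(0)
    vehicles = {
        "v1": LocalObservation(gap_ahead=8.0, gap_target=40.0, speed_advantage=4.0),
        "v2": LocalObservation(gap_ahead=50.0, gap_target=5.0, speed_advantage=-2.0),
    }
    print(step_all(vehicles, policy, rng))
```

In the study the shared weights would come from proximal policy optimization over simulated weaving-area traffic; here they are hand-picked so the sketch's behavior is easy to inspect.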
Related papers
- Agent-Agnostic Centralized Training for Decentralized Multi-Agent Cooperative Driving [17.659812774579756]
We propose an asymmetric actor-critic model that learns decentralized cooperative driving policies for autonomous vehicles.
By employing attention neural networks with masking, our approach efficiently manages real-world traffic dynamics and partial observability.
arXiv Detail & Related papers (2024-03-18T16:13:02Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- A Novel Multi-Agent Deep RL Approach for Traffic Signal Control [13.927155702352131]
We propose a Friend-Deep Q-network (Friend-DQN) approach for multiple traffic signal control in urban networks.
In particular, the cooperation between multiple agents can reduce the state-action space and thus speed up the convergence.
arXiv Detail & Related papers (2023-06-05T08:20:37Z)
- LCS-TF: Multi-Agent Deep Reinforcement Learning-Based Intelligent Lane-Change System for Improving Traffic Flow [16.34175752810212]
Existing intelligent lane-change solutions have primarily focused on optimizing the performance of the ego vehicle.
Recent research has seen an increased interest in multi-agent reinforcement learning (MARL)-based approaches.
We present a novel hybrid MARL-based intelligent lane-change system for AVs designed to jointly optimize the local performance for the ego vehicle.
arXiv Detail & Related papers (2023-03-16T04:03:17Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Cooperative Reinforcement Learning on Traffic Signal Control [3.759936323189418]
Traffic signal control is a challenging real-world problem aiming to minimize overall travel time by coordinating vehicle movements at road intersections.
Existing traffic signal control systems in use still rely heavily on oversimplified information and rule-based methods.
This paper proposes a cooperative, multi-objective architecture with age-decaying weights to better estimate multiple reward terms for traffic signal control optimization.
arXiv Detail & Related papers (2022-05-23T13:25:15Z)
- Hybrid Reinforcement Learning-Based Eco-Driving Strategy for Connected and Automated Vehicles at Signalized Intersections [3.401874022426856]
Vision-perceptive methods are integrated with vehicle-to-infrastructure (V2I) communications to achieve higher mobility and energy efficiency.
The HRL framework has three components; among them, a rule-based driving manager operates the collaboration between the rule-based policies and the RL policy.
Experiments show that our HRL method can reduce energy consumption by 12.70% and save 11.75% travel time when compared with a state-of-the-art model-based Eco-Driving approach.
arXiv Detail & Related papers (2022-01-19T19:31:12Z)
- MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control [54.162449208797334]
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city.
Deep reinforcement learning (RL) has been applied to traffic signal control recently and demonstrated promising performance where each traffic signal is regarded as an agent.
We propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method to learn the decentralized policy for each intersection that considers neighbor information in a latent way.
arXiv Detail & Related papers (2021-01-04T03:06:08Z)
- Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z)
- Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL [63.52264764099532]
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting.
We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved.
arXiv Detail & Related papers (2020-10-30T22:06:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.