Queue-based Eco-Driving at Roundabouts with Reinforcement Learning
- URL: http://arxiv.org/abs/2405.00625v2
- Date: Thu, 18 Jul 2024 13:38:31 GMT
- Title: Queue-based Eco-Driving at Roundabouts with Reinforcement Learning
- Authors: Anna-Lena Schlamp, Werner Huber, Stefanie Schmidtner
- Abstract summary: We address eco-driving at roundabouts in mixed traffic to enhance traffic flow and efficiency.
We develop two approaches: a rule-based and a Reinforcement Learning (RL)-based eco-driving system.
Results show that both approaches outperform the baseline.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We address eco-driving at roundabouts in mixed traffic to enhance traffic flow and efficiency in urban areas. The aim is to proactively optimize the speed of automated or non-automated connected vehicles (CVs), ensuring both an efficient approach and a smooth entry into roundabouts. We incorporate the traffic situation ahead, i.e., preceding vehicles and waiting queues. Further, we develop two approaches: a rule-based and a Reinforcement Learning (RL)-based eco-driving system, both using the approach link and information from conflicting CVs for speed optimization. A fair comparison of the rule-based and RL-based approaches is performed to explore RL as a viable alternative to classical optimization. Results show that both approaches outperform the baseline. Improvements increase significantly with growing traffic volumes, with the best average results obtained at high volumes. Near capacity, performance deteriorates, indicating limited applicability at capacity limits. Examining different CV penetration rates, a decline in performance is observed, but substantial gains are still achieved at lower CV rates. RL agents can discover effective policies for speed optimization in dynamic roundabout settings, but they do not offer a substantial advantage over classical approaches, especially at higher traffic volumes or lower CV penetration rates.
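To make the optimization target concrete, the sketch below shows a minimal, hypothetical rule-based speed advisory in the spirit of the paper's setup: it times a CV's arrival at the roundabout yield line to the estimated clearance of the waiting queue, so the vehicle can enter without a full stop. The per-vehicle service time, the speed bounds, and the function interface are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's implementation): a rule-based speed
# advisory that times a connected vehicle's arrival at the roundabout yield
# line to the estimated clearance of the waiting queue, avoiding a full stop.

def advised_speed(dist_to_yield_m: float,
                  queue_length_veh: int,
                  service_time_s: float = 2.5,   # assumed time per queued vehicle to enter
                  v_min_mps: float = 2.0,        # assumed lower bound on the advisory
                  v_max_mps: float = 13.9) -> float:  # assumed urban speed limit (~50 km/h)
    """Return a target approach speed in m/s for the remaining approach link."""
    # Estimated time until the queue ahead has cleared the entry.
    clearance_time_s = queue_length_veh * service_time_s

    if clearance_time_s <= 0.0:
        # No queue reported by conflicting CVs: proceed at the speed limit.
        return v_max_mps

    # Speed that makes the vehicle arrive exactly when the queue has cleared.
    v_target = dist_to_yield_m / clearance_time_s

    # Keep the advisory within physically and legally sensible bounds.
    return max(v_min_mps, min(v_max_mps, v_target))


if __name__ == "__main__":
    # Example: 150 m from the yield line, 6 vehicles queued at the entry.
    print(f"advised speed: {advised_speed(150.0, 6):.1f} m/s")
```

In the paper, the RL agent replaces such a hand-tuned rule by learning the speed choice directly from the approach link and the information provided by conflicting CVs.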
Related papers
- Reinforcement Learning for Adaptive Traffic Signal Control: Turn-Based and Time-Based Approaches to Reduce Congestion [2.733700237741334]
This paper explores the use of Reinforcement Learning to enhance traffic signal operations at intersections.
We introduce two RL-based algorithms: a turn-based agent, which dynamically prioritizes traffic signals based on real-time queue lengths, and a time-based agent, which adjusts signal phase durations according to traffic conditions.
Simulation results demonstrate that both RL algorithms significantly outperform conventional traffic signal control systems.
arXiv Detail & Related papers (2024-08-28T12:35:56Z) - Quantum Annealing Approach for the Optimal Real-time Traffic Control using QUBO [17.027096728412758]
Traffic congestion is one of the major issues in urban areas.
How to control traffic flow to mitigate congestion has been a central question in transportation research.
arXiv Detail & Related papers (2024-03-14T01:24:19Z) - DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z) - iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed Multi-Agent Reinforcement Learning [57.24340061741223]
We introduce a distributed multi-agent reinforcement learning (MARL) algorithm that can predict trajectories and intents in dense and heterogeneous traffic scenarios.
Our approach for intent-aware planning, iPLAN, allows agents to infer nearby drivers' intents solely from their local observations.
arXiv Detail & Related papers (2023-06-09T20:12:02Z) - Adaptive Frequency Green Light Optimal Speed Advisory based on Hybrid Actor-Critic Reinforcement Learning [2.257737378757467]
The GLOSA system suggests speeds to vehicles to assist them in passing through intersections during green intervals.
Previous research has focused on optimizing the GLOSA algorithm while neglecting how frequently speed advisories are issued.
We propose an Adaptive Frequency GLOSA model based on the Hybrid Proximal Policy Optimization (H-PPO) method.
arXiv Detail & Related papers (2023-06-07T01:16:45Z) - Deep Reinforcement Learning to Maximize Arterial Usage during Extreme Congestion [4.934817254755007]
We propose a Deep Reinforcement Learning (DRL) approach to reduce traffic congestion on multi-lane freeways during extreme congestion.
The agent is trained to learn adaptive detouring strategies for congested freeway traffic.
It can improve average traffic speed by 21% compared to taking no action during steep congestion.
arXiv Detail & Related papers (2023-05-16T16:53:27Z) - LCS-TF: Multi-Agent Deep Reinforcement Learning-Based Intelligent Lane-Change System for Improving Traffic Flow [16.34175752810212]
Existing intelligent lane-change solutions have primarily focused on optimizing the performance of the ego vehicle.
Recent research has seen an increased interest in multi-agent reinforcement learning (MARL)-based approaches.
We present a novel hybrid MARL-based intelligent lane-change system for AVs designed to jointly optimize the local performance of the ego vehicle together with the overall traffic flow.
arXiv Detail & Related papers (2023-03-16T04:03:17Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - A Deep Value-network Based Approach for Multi-Driver Order Dispatching [55.36656442934531]
We propose a deep reinforcement learning based solution for order dispatching.
We conduct large scale online A/B tests on DiDi's ride-dispatching platform.
Results show that CVNet consistently outperforms other recently proposed dispatching methods.
arXiv Detail & Related papers (2021-06-08T16:27:04Z) - Value Function is All You Need: A Unified Learning Framework for Ride Hailing Platforms [57.21078336887961]
Large ride-hailing platforms, such as DiDi, Uber and Lyft, connect tens of thousands of vehicles in a city to millions of ride demands throughout the day.
We propose a unified value-based dynamic learning framework (V1D3) for tackling both tasks.
arXiv Detail & Related papers (2021-05-18T19:22:24Z) - DADA: Differentiable Automatic Data Augmentation [58.560309490774976]
We propose Differentiable Automatic Data Augmentation (DADA), which dramatically reduces the cost of automatic augmentation-policy search.
We conduct extensive experiments on CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets.
Results show our DADA is at least one order of magnitude faster than the state-of-the-art while achieving very comparable accuracy.
arXiv Detail & Related papers (2020-03-08T13:23:14Z)