DenseLight: Efficient Control for Large-scale Traffic Signals with Dense
Feedback
- URL: http://arxiv.org/abs/2306.07553v1
- Date: Tue, 13 Jun 2023 05:58:57 GMT
- Title: DenseLight: Efficient Control for Large-scale Traffic Signals with Dense
Feedback
- Authors: Junfan Lin, Yuying Zhu, Lingbo Liu, Yang Liu, Guanbin Li, Liang Lin
- Abstract summary: Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic Signal Control (TSC) aims to reduce the average travel time of
vehicles in a road network, which in turn enhances fuel utilization efficiency,
air quality, and road safety, benefiting society as a whole. Due to the
complexity of long-horizon control and coordination, most prior TSC methods
leverage deep reinforcement learning (RL) to search for a control policy and
have witnessed great success. However, TSC still faces two significant
challenges. 1) The travel time of a vehicle is delayed feedback on the
effectiveness of TSC policy at each traffic intersection since it is obtained
after the vehicle has left the road network. Although several heuristic reward
functions have been proposed as substitutes for travel time, they are usually
biased and do not lead the policy to improve in the correct direction. 2) The
traffic condition of each intersection is influenced by the non-local
intersections since vehicles traverse multiple intersections over time.
Therefore, the TSC agent is required to leverage both the local observation and
the non-local traffic conditions to predict the long-horizon traffic
conditions of each intersection comprehensively. To address these challenges,
we propose DenseLight, a novel RL-based TSC method that employs an unbiased
reward function to provide dense feedback on policy effectiveness and a
non-local enhanced TSC agent to better predict future traffic conditions for
more precise traffic control. Extensive experiments and ablation studies
demonstrate that DenseLight can consistently outperform advanced baselines on
various road networks with diverse traffic flows. The code is available at
https://github.com/junfanlin/DenseLight.
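The key idea of dense, unbiased feedback can be illustrated with a small sketch. The function and values below are hypothetical and not DenseLight's actual reward (see the paper and repository for that); they only show why rewarding each control step by the change in cumulative delay aligns the dense return with the delayed travel-time objective:

```python
def dense_reward(prev_total_delay: float, curr_total_delay: float) -> float:
    """Per-step reward: negative increase in cumulative vehicle delay.

    Summing these rewards over an episode telescopes to
    -(final_delay - initial_delay), so maximizing the dense return
    is equivalent to minimizing total delay -- unlike heuristic
    surrogates such as raw queue length, which can be biased.
    """
    return -(curr_total_delay - prev_total_delay)

# Hypothetical cumulative delay (vehicle-seconds) after each phase decision.
delays = [0.0, 4.0, 9.0, 11.0]
rewards = [dense_reward(a, b) for a, b in zip(delays, delays[1:])]
episode_return = sum(rewards)  # telescopes to -(11.0 - 0.0) = -11.0
```

Because every step yields a reward, the policy receives immediate feedback at each intersection instead of waiting until vehicles leave the network.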
Related papers
- Reinforcement Learning for Adaptive Traffic Signal Control: Turn-Based and Time-Based Approaches to Reduce Congestion [2.733700237741334]
This paper explores the use of Reinforcement Learning to enhance traffic signal operations at intersections.
We introduce two RL-based algorithms: a turn-based agent, which dynamically prioritizes traffic signals based on real-time queue lengths, and a time-based agent, which adjusts signal phase durations according to traffic conditions.
Simulation results demonstrate that both RL algorithms significantly outperform conventional traffic signal control systems.
arXiv Detail & Related papers (2024-08-28T12:35:56Z)
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion that leads to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- Deep Reinforcement Learning for the Joint Control of Traffic Light Signaling and Vehicle Speed Advice [8.506271224735029]
We propose a first attempt to jointly learn both traffic light control and vehicle speed advice.
In our experiments, the joint control approach reduces average vehicle trip delays, w.r.t. controlling only traffic lights, in eight out of eleven benchmark scenarios.
arXiv Detail & Related papers (2023-09-18T15:45:22Z)
- Deep Reinforcement Learning to Maximize Arterial Usage during Extreme Congestion [4.934817254755007]
We propose a Deep Reinforcement Learning (DRL) approach to reduce traffic congestion on multi-lane freeways during extreme congestion.
The agent is trained to learn adaptive detouring strategies for congested freeway traffic.
The agent improves average traffic speed by 21% compared to taking no action during steep congestion.
arXiv Detail & Related papers (2023-05-16T16:53:27Z)
- Reinforcement Learning Approaches for Traffic Signal Control under Missing Data [5.896742981602458]
In real-world urban scenarios, missing observation of traffic states may frequently occur due to the lack of sensors.
We propose two solutions: the first one imputes the traffic states to enable adaptive control, and the second one imputes both states and rewards to enable adaptive control and the training of RL agents.
arXiv Detail & Related papers (2023-04-21T03:26:33Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity enlarges the cyber-attack surface of ATSCs and increases their vulnerability to various cyber-attack modes.
One such mode is a 'Sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of Sybil vehicle injection to create congestion for one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- Integrated Decision and Control at Multi-Lane Intersections with Mixed Traffic Flow [6.233422723925688]
This paper develops a learning-based algorithm to deal with complex intersections with mixed traffic flows.
We first consider different velocity models for green and red lights in the training process and use a finite state machine to handle different modes of light transformation.
Then we design different types of distance constraints for vehicles, traffic lights, pedestrians, and bicycles, respectively, and formulate the constrained optimal control problems.
arXiv Detail & Related papers (2021-08-30T07:55:32Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control [54.162449208797334]
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city.
Deep reinforcement learning (RL) has been applied to traffic signal control recently and demonstrated promising performance where each traffic signal is regarded as an agent.
We propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method to learn the decentralized policy for each intersection that considers neighbor information in a latent way.
arXiv Detail & Related papers (2021-01-04T03:06:08Z)
- Emergent Road Rules In Multi-Agent Driving Environments [84.82583370858391]
We analyze what ingredients in driving environments cause the emergence of road rules.
We find that two crucial factors are noisy perception and agents' spatial density.
Our results add empirical support for the social road rules that countries worldwide have agreed on for safe, efficient driving.
arXiv Detail & Related papers (2020-11-21T09:43:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.