A Fully Data-Driven Approach for Realistic Traffic Signal Control Using
Offline Reinforcement Learning
- URL: http://arxiv.org/abs/2311.15920v1
- Date: Mon, 27 Nov 2023 15:29:21 GMT
- Title: A Fully Data-Driven Approach for Realistic Traffic Signal Control Using
Offline Reinforcement Learning
- Authors: Jianxiong Li, Shichao Lin, Tianyu Shi, Chujie Tian, Yu Mei, Jian Song,
Xianyuan Zhan, Ruimin Li
- Abstract summary: We propose a fully Data-Driven and simulator-free framework for realistic Traffic Signal Control (D2TSC).
We combine well-established traffic flow theory with machine learning to infer the reward signals from coarse-grained traffic data.
Our approach achieves superior performance over conventional and offline RL baselines, and also enjoys much better real-world applicability.
- Score: 18.2541182874636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The optimization of traffic signal control (TSC) is critical for an efficient
transportation system. In recent years, reinforcement learning (RL) techniques
have emerged as a popular approach for TSC and show promising results for
highly adaptive control. However, existing RL-based methods suffer from notably
poor real-world applicability and hardly have any successful deployments. The
reasons for such failures are mostly due to the reliance on over-idealized
traffic simulators for policy optimization, as well as using unrealistic
fine-grained state observations and reward signals that are not directly
obtainable from real-world sensors. In this paper, we propose a fully
Data-Driven and simulator-free framework for realistic Traffic Signal Control
(D2TSC). Specifically, we combine well-established traffic flow theory with
machine learning to construct a reward inference model to infer the reward
signals from coarse-grained traffic data. With the inferred rewards, we further
propose a sample-efficient offline RL method to enable direct signal control
policy learning from historical offline datasets of real-world intersections.
To evaluate our approach, we collect historical traffic data from a real-world
intersection, and develop a highly customized simulation environment that
strictly follows real data characteristics. We demonstrate through extensive
experiments that our approach achieves superior performance over conventional
and offline RL baselines, and also enjoys much better real-world applicability.
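The core idea in the abstract, inferring reward signals from coarse-grained traffic data so that offline RL can run on historical logs without a simulator, can be illustrated with a minimal sketch. This is not the paper's actual method; the delay estimate below uses only a simple cumulative-count (queueing) argument from traffic flow theory, and all function names and parameters are hypothetical.

```python
import numpy as np

def infer_reward(inflow, outflow, dt=300.0):
    """Infer a delay-based reward from coarse-grained counts.

    Queueing sketch: the queue grows by (arrivals - departures) each
    interval, and total delay over an interval of length dt seconds is
    approximately queue_length * dt (vehicle-seconds).
    """
    queue = np.maximum(np.cumsum(inflow - outflow), 0.0)  # vehicles waiting
    delay = queue * dt
    return -delay  # less delay -> higher reward

# Relabel an offline dataset of (state, action, next_state) tuples with the
# inferred rewards; any standard offline RL method can then train on the
# relabeled log without ever querying a simulator.
inflow = np.array([10, 14, 9, 12], dtype=float)   # vehicles per interval
outflow = np.array([8, 12, 11, 12], dtype=float)
rewards = infer_reward(inflow, outflow)
```

In this toy setting the cumulative queue is [2, 4, 2, 2] vehicles, giving rewards of [-600, -1200, -600, -600] vehicle-seconds; the actual D2TSC reward-inference model combines traffic flow theory with learned components rather than this closed-form rule.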
Related papers
- D5RL: Diverse Datasets for Data-Driven Deep Reinforcement Learning [99.33607114541861]
We propose a new benchmark for offline RL that focuses on realistic simulations of robotic manipulation and locomotion environments.
Our proposed benchmark covers state-based and image-based domains, and supports both offline RL and online fine-tuning evaluation.
arXiv Detail & Related papers (2024-08-15T22:27:00Z) - Preference Elicitation for Offline Reinforcement Learning [59.136381500967744]
We propose Sim-OPRL, an offline preference-based reinforcement learning algorithm.
Our algorithm employs a pessimistic approach for out-of-distribution data, and an optimistic approach for acquiring informative preferences about the optimal policy.
arXiv Detail & Related papers (2024-06-26T15:59:13Z) - CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning [38.63187494867502]
CtRL-Sim is a method that leverages return-conditioned offline reinforcement learning (RL) to efficiently generate reactive and controllable traffic agents.
We show that CtRL-Sim can generate realistic safety-critical scenarios while providing fine-grained control over agent behaviours.
arXiv Detail & Related papers (2024-03-29T02:10:19Z) - A Holistic Framework Towards Vision-based Traffic Signal Control with
Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z) - MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot
Learning [52.101643259906915]
We study the problem of offline pre-training and online fine-tuning for reinforcement learning from high-dimensional observations.
Existing model-based offline RL methods are not suitable for offline-to-online fine-tuning in high-dimensional domains.
We propose an on-policy model-based method that can efficiently reuse prior data through model-based value expansion and policy regularization.
arXiv Detail & Related papers (2024-01-06T21:04:31Z) - Learning Realistic Traffic Agents in Closed-loop [36.38063449192355]
Reinforcement learning (RL) can train traffic agents to avoid infractions, but using RL alone results in driving behaviors that are not human-like.
We propose Reinforcing Traffic Rules (RTR) to match expert demonstrations under a traffic compliance constraint.
Our experiments show that RTR learns more realistic and generalizable traffic simulation policies.
arXiv Detail & Related papers (2023-11-02T16:55:23Z) - Reinforcement Learning with Human Feedback for Realistic Traffic
Simulation [53.85002640149283]
A key element of effective simulation is the incorporation of realistic traffic models that align with human knowledge.
This study identifies two main challenges: capturing the nuances of human preferences on realism and the unification of diverse traffic simulation models.
arXiv Detail & Related papers (2023-09-01T19:29:53Z) - Reinforcement Learning Approaches for Traffic Signal Control under
Missing Data [5.896742981602458]
In real-world urban scenarios, missing observations of traffic states frequently occur due to a lack of sensors.
We propose two solutions: the first one imputes the traffic states to enable adaptive control, and the second one imputes both states and rewards to enable adaptive control and the training of RL agents.
arXiv Detail & Related papers (2023-04-21T03:26:33Z) - Traffic Management of Autonomous Vehicles using Policy Based Deep
Reinforcement Learning and Intelligent Routing [0.26249027950824505]
We propose a DRL-based signal control system that adjusts traffic signals according to the current congestion situation at intersections.
To deal with congestion on the roads behind the intersection, we use a re-routing technique to load-balance vehicles across the road network.
arXiv Detail & Related papers (2022-06-28T02:46:20Z) - ModelLight: Model-Based Meta-Reinforcement Learning for Traffic Signal
Control [5.219291917441908]
This paper proposes a novel model-based meta-reinforcement learning framework (ModelLight) for traffic signal control.
Within ModelLight, an ensemble of models for road intersections and the optimization-based meta-learning method are used to improve the data efficiency of an RL-based traffic light control method.
Experiments on real-world datasets demonstrate that ModelLight can outperform state-of-the-art traffic light control algorithms.
arXiv Detail & Related papers (2021-11-15T20:25:08Z) - Behavioral Priors and Dynamics Models: Improving Performance and Domain
Transfer in Offline RL [82.93243616342275]
We introduce Offline Model-based RL with Adaptive Behavioral Priors (MABE).
MABE is based on the finding that dynamics models, which support within-domain generalization, and behavioral priors, which support cross-domain generalization, are complementary.
In experiments that require cross-domain generalization, we find that MABE outperforms prior methods.
arXiv Detail & Related papers (2021-06-16T20:48:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.