HumanLight: Incentivizing Ridesharing via Human-centric Deep
Reinforcement Learning in Traffic Signal Control
- URL: http://arxiv.org/abs/2304.03697v1
- Date: Wed, 5 Apr 2023 17:42:30 GMT
- Title: HumanLight: Incentivizing Ridesharing via Human-centric Deep
Reinforcement Learning in Traffic Signal Control
- Authors: Dimitris M. Vlachogiannis, Hua Wei, Scott Moura, Jane Macfarlane
- Abstract summary: We present HumanLight, a novel decentralized adaptive traffic signal control algorithm.
Our proposed controller is founded on reinforcement learning with the reward function embedding the transportation-inspired concept of pressure at the person-level.
By rewarding HOV commuters with travel time savings for their efforts to merge into a single ride, HumanLight achieves equitable allocation of green times.
- Score: 3.402002554852499
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Single occupancy vehicles are the most attractive transportation alternative
for many commuters, leading to increased traffic congestion and air pollution.
Advancements in information technologies create opportunities for smart
solutions that incentivize ridesharing and mode shift to higher occupancy
vehicles (HOVs) to achieve the car-lighter vision of cities. In this study, we
present HumanLight, a novel decentralized adaptive traffic signal control
algorithm designed to optimize people throughput at intersections. Our proposed
controller is founded on reinforcement learning with the reward function
embedding the transportation-inspired concept of pressure at the person-level.
By rewarding HOV commuters with travel time savings for their efforts to merge
into a single ride, HumanLight achieves equitable allocation of green times.
Apart from adopting FRAP, a state-of-the-art (SOTA) base model, HumanLight
introduces the concept of active vehicles, loosely defined as vehicles in
proximity to the intersection within the action interval window. The proposed
algorithm showcases significant headroom and scalability in different network
configurations, considering multimodal vehicle splits under various scenarios of
HOV adoption. Improvements in person delays and queues range from 15% to over
55% compared to vehicle-level SOTA controllers. We quantify the impact of
incorporating active vehicles in the formulation of our RL model for different
network structures. HumanLight also enables regulation of the aggressiveness of
the HOV prioritization. The impact of parameter setting on the generated phase
profile is investigated as a key component of acyclic signal controllers
affecting pedestrian waiting times. HumanLight's scalable, decentralized design
can reshape the resolution of traffic management to be more human-centric and
empower policies that incentivize ridesharing and public transit systems.
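The reward at the core of HumanLight embeds pressure at the person level, restricted to "active vehicles" near the intersection. The sketch below is a rough illustration of that idea, not the authors' exact formulation (which builds on the FRAP base model): each vehicle is weighted by its occupancy, only vehicles within an assumed distance threshold count as active, and the reward is the negative person-level pressure. The names `Vehicle`, `person_pressure`, `active_range`, and `reward`, as well as the distance-based definition of "active", are illustrative assumptions.

```python
# Illustrative sketch, not the authors' exact formulation: person-level
# pressure for one intersection, counting only "active" vehicles close to
# the intersection, used as a negative reward for an RL signal controller.
from dataclasses import dataclass
from typing import List


@dataclass
class Vehicle:               # hypothetical container for simulator output
    occupancy: int           # people on board (1 for a SOV, >1 for an HOV)
    distance_to_node: float  # meters from the intersection stop line
    incoming: bool           # True on an incoming approach, False on an outgoing one


def person_pressure(vehicles: List[Vehicle], active_range: float = 150.0) -> int:
    """Person-level pressure: people on incoming approaches minus people on
    outgoing approaches, counting only vehicles within `active_range` meters
    (a stand-in for the paper's action-interval-based notion of 'active')."""
    active = [v for v in vehicles if v.distance_to_node <= active_range]
    people_in = sum(v.occupancy for v in active if v.incoming)
    people_out = sum(v.occupancy for v in active if not v.incoming)
    return people_in - people_out


def reward(vehicles: List[Vehicle]) -> float:
    # Maximizing the negative pressure pushes the agent to discharge
    # more *people* rather than more vehicles, implicitly prioritizing HOVs.
    return -float(person_pressure(vehicles))
```

Because the count is occupancy-weighted, a bus carrying 30 passengers relieves as much pressure as 30 single-occupancy cars, which is the mechanism through which green time is allocated to people rather than vehicles and HOV commuters earn travel time savings.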
Related papers
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense
Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z) - Learning to Control and Coordinate Mixed Traffic Through Robot Vehicles at Complex and Unsignalized Intersections [33.0086333735748]
We propose a multi-agent reinforcement learning approach for the control and coordination of mixed traffic by RVs at real-world, complex intersections.
Our method can prevent congestion formation with merely 5% RVs under a real-world traffic demand of 700 vehicles per hour.
Our method is robust against blackout events, sudden drops in RV percentage, and V2V communication errors.
arXiv Detail & Related papers (2023-01-12T21:09:58Z) - Reinforcement Learning based Cyberattack Model for Adaptive Traffic
Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity expands the cyber-attack surface of ATSCs and increases their vulnerability to various cyber-attack modes.
One such mode is a 'Sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of Sybil vehicle injection to create congestion for one or more intersection approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Eco-driving for Electric Connected Vehicles at Signalized Intersections:
A Parameterized Reinforcement Learning approach [6.475252042082737]
This paper proposes an eco-driving framework for electric connected vehicles (CVs) based on reinforcement learning (RL).
We show that our strategy can significantly reduce energy consumption by learning proper action schemes without interrupting other human-driven vehicles (HDVs).
arXiv Detail & Related papers (2022-06-24T04:11:28Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - AI-aided Traffic Control Scheme for M2M Communications in the Internet
of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z) - Learning to Help Emergency Vehicles Arrive Faster: A Cooperative
Vehicle-Road Scheduling Approach [24.505687255063986]
Vehicle-centric scheduling approaches recommend optimal paths for emergency vehicles.
Road-centric scheduling approaches aim to improve traffic conditions and assign a higher priority to EVs for passing an intersection.
We propose LEVID, a cooperative VehIcle-roaD scheduling approach including a real-time route planning module and a collaborative traffic signal control module.
arXiv Detail & Related papers (2022-02-20T10:25:15Z) - Decentralized Cooperative Lane Changing at Freeway Weaving Areas Using
Multi-Agent Deep Reinforcement Learning [1.6752182911522522]
Frequent lane changes during congestion at freeway bottlenecks such as merge and weaving areas further reduce roadway capacity.
The emergence of deep reinforcement learning (RL) and connected and automated vehicle technology provides a possible solution to improve mobility and energy efficiency at freeway bottlenecks through cooperative lane changing.
In this study, a decentralized cooperative lane-changing controller was developed using a multi-agent deep RL paradigm.
The results of this study show that cooperative lane changing enabled by multi-agent deep RL yields superior performance to human drivers in terms of traffic throughput, vehicle speed, number of stops per vehicle, vehicle fuel efficiency, and emissions.
arXiv Detail & Related papers (2021-10-05T18:29:13Z) - Integrated Decision and Control at Multi-Lane Intersections with Mixed
Traffic Flow [6.233422723925688]
This paper develops a learning-based algorithm to deal with complex intersections with mixed traffic flows.
We first consider different velocity models for green and red lights in the training process and use a finite state machine to handle the different modes of signal transitions.
Then we design different types of distance constraints for vehicles, traffic lights, pedestrians, and bicycles, respectively, and formulate the constrained optimal control problems.
arXiv Detail & Related papers (2021-08-30T07:55:32Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)