Vehicle Dispatching and Routing of On-Demand Intercity Ride-Pooling Services: A Multi-Agent Hierarchical Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2307.06742v2
- Date: Wed, 20 Mar 2024 05:43:00 GMT
- Title: Vehicle Dispatching and Routing of On-Demand Intercity Ride-Pooling Services: A Multi-Agent Hierarchical Reinforcement Learning Approach
- Authors: Jinhua Si, Fang He, Xi Lin, Xindi Tang
- Abstract summary: Intercity ride-pooling service exhibits considerable potential in upgrading traditional intercity bus services.
Online operations suffer the inherent complexities due to the coupling of vehicle resource allocation among cities and pooled-ride vehicle routing.
This study proposes a two-level framework designed to facilitate online fleet management.
- Score: 4.44413304473005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integrated development of city clusters has given rise to an increasing demand for intercity travel. Intercity ride-pooling services exhibit considerable potential for upgrading traditional intercity bus services through demand-responsive enhancements. Nevertheless, online operations suffer from inherent complexities due to the coupling of vehicle resource allocation among cities and pooled-ride vehicle routing. To tackle these challenges, this study proposes a two-level framework designed to facilitate online fleet management. Specifically, a novel multi-agent feudal reinforcement learning model is proposed at the upper level of the framework to cooperatively assign idle vehicles to different intercity lines, while the lower level updates the routes of vehicles using an adaptive large neighborhood search heuristic. Numerical studies based on a real-world dataset of Xiamen and its surrounding cities in China show that the proposed framework effectively mitigates supply-demand imbalances and achieves significant improvements in both the average daily system profit and the order fulfillment ratio.
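The abstract describes a two-level architecture: an upper-level dispatcher (multi-agent feudal reinforcement learning assigning idle vehicles to intercity lines) coupled with a lower-level router (an adaptive large neighborhood search, ALNS, that updates pooled-ride routes). The sketch below is a minimal, hypothetical Python skeleton of that two-level loop, not the authors' implementation: the names (`Vehicle`, `dispatch_idle_vehicles`, `cheapest_insertion`) are illustrative assumptions, the learned feudal policy is replaced by a greedy unmet-demand rule, and the ALNS step is reduced to a single cheapest-insertion repair move.

```python
"""Minimal sketch of a two-level dispatch-and-routing loop, assuming toy data
structures; stand-ins replace the paper's feudal RL policy and full ALNS."""

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Vehicle:
    vid: int
    line: Optional[str] = None                        # intercity line currently served
    route: List[int] = field(default_factory=list)    # ordered stop ids


def dispatch_idle_vehicles(idle: List[Vehicle], unmet_demand: Dict[str, int]) -> Dict[int, str]:
    """Upper level (stand-in for the feudal RL policies): send each idle
    vehicle to the intercity line with the largest remaining unmet demand."""
    assignments: Dict[int, str] = {}
    demand = dict(unmet_demand)
    for veh in idle:
        line = max(demand, key=demand.get)
        assignments[veh.vid] = line
        veh.line = line
        demand[line] = max(0, demand[line] - 1)       # one vehicle absorbs one unit of demand
    return assignments


def cheapest_insertion(route: List[int], pickup: int, dropoff: int, dist) -> List[int]:
    """Lower level (stand-in for one ALNS repair operator): insert a new
    pickup/dropoff pair at the pair of positions that adds the least distance."""
    best_route, best_cost = route, float("inf")
    for i in range(len(route) + 1):
        for j in range(i, len(route) + 1):
            cand = route[:i] + [pickup] + route[i:j] + [dropoff] + route[j:]
            cost = sum(dist[a][b] for a, b in zip(cand, cand[1:]))
            if cost < best_cost:
                best_route, best_cost = cand, cost
    return best_route


if __name__ == "__main__":
    # Toy instance: two idle vehicles, three intercity lines, four stops (0..3).
    vehicles = [Vehicle(0), Vehicle(1)]
    unmet = {"A->B": 3, "A->C": 1, "B->C": 0}
    dist = [[0, 5, 9, 4],
            [5, 0, 6, 7],
            [9, 6, 0, 3],
            [4, 7, 3, 0]]

    print(dispatch_idle_vehicles(vehicles, unmet))                     # {0: 'A->B', 1: 'A->B'}
    print(cheapest_insertion([0, 3], pickup=1, dropoff=2, dist=dist))  # e.g. [1, 0, 3, 2]
```

In the paper's actual framework, the trained feudal policies would replace the greedy rule, and a full ALNS (repeated destroy-and-repair with adaptively weighted operators) would replace the single insertion move.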
Related papers
- A methodological framework for Resilience as a Service (RaaS) in multimodal urban transportation networks [0.0]
This study aims to explore the management of public transport disruptions through resilience as a service strategies.
It develops an optimization model to effectively allocate resources and minimize the cost for operators and passengers.
The proposed model is applied to a case study in the Ile de France region, Paris and suburbs.
arXiv Detail & Related papers (2024-08-30T12:22:34Z) - GPT-Augmented Reinforcement Learning with Intelligent Control for Vehicle Dispatching [82.19172267487998]
This paper introduces GARLIC: a framework of GPT-Augmented Reinforcement Learning with Intelligent Control for vehicle dispatching.
arXiv Detail & Related papers (2024-08-19T08:23:38Z) - Safe Model-Based Multi-Agent Mean-Field Reinforcement Learning [48.667697255912614]
Mean-field reinforcement learning addresses the policy of a representative agent interacting with an infinite population of identical agents.
We propose Safe-M$3$-UCRL, the first model-based mean-field reinforcement learning algorithm that attains safe policies even in the case of unknown transitions.
Our algorithm effectively meets the demand in critical areas while ensuring service accessibility in regions with low demand.
arXiv Detail & Related papers (2023-06-29T15:57:07Z) - A Modular and Transferable Reinforcement Learning Framework for the Fleet Rebalancing Problem [2.299872239734834]
We propose a modular framework for fleet rebalancing based on model-free reinforcement learning (RL).
We formulate RL state and action spaces as distributions over a grid of the operating area, making the framework scalable (see the sketch after this list).
Numerical experiments, using real-world trip and network data, demonstrate that this approach has several distinct advantages over baseline methods.
arXiv Detail & Related papers (2021-05-27T16:32:28Z) - Value Function is All You Need: A Unified Learning Framework for Ride Hailing Platforms [57.21078336887961]
Large ride-hailing platforms, such as DiDi, Uber and Lyft, connect tens of thousands of vehicles in a city to millions of ride demands throughout the day.
We propose a unified value-based dynamic learning framework (V1D3) for tackling both tasks.
arXiv Detail & Related papers (2021-05-18T19:22:24Z) - Flatland Competition 2020: MAPF and MARL for Efficient Train Coordination on a Grid World [49.80905654161763]
The Flatland competition aimed at finding novel approaches to solve the vehicle re-scheduling problem (VRSP).
The VRSP is concerned with scheduling trips in traffic networks and the re-scheduling of vehicles when disruptions occur.
The ever-growing complexity of modern railway networks makes dynamic real-time scheduling of traffic virtually impossible.
arXiv Detail & Related papers (2021-03-30T17:13:29Z) - Multi-intersection Traffic Optimisation: A Benchmark Dataset and a Strong Baseline [85.9210953301628]
Control of traffic signals is fundamental and critical to alleviate traffic congestion in urban areas.
Because of the high complexity of modelling the problem, experimental settings of current works are often inconsistent.
We propose a novel and strong baseline model based on deep reinforcement learning with the encoder-decoder structure.
arXiv Detail & Related papers (2021-01-24T03:55:39Z) - Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z) - FlexPool: A Distributed Model-Free Deep Reinforcement Learning Algorithm for Joint Passengers & Goods Transportation [36.989179280016586]
This paper considers combining passenger transportation with goods delivery to improve vehicle-based transportation.
We propose FlexPool, a distributed model-free deep reinforcement learning algorithm that jointly serves passengers & goods workloads.
We show that FlexPool achieves 30% higher fleet utilization and 35% higher fuel efficiency in comparison to model-free approaches.
arXiv Detail & Related papers (2020-07-27T17:25:58Z) - Balancing Taxi Distribution in A City-Scale Dynamic Ridesharing Service: A Hybrid Solution Based on Demand Learning [0.0]
We study the challenging problem of how to balance taxi distribution across a city in a dynamic ridesharing service.
We propose a hybrid solution involving a series of algorithms: Correlated Pooling collects correlated rider requests, Adjacency Ride-Matching based on Demand Learning assigns taxis to riders, and Greedy Idle Movement directs unassigned taxis to areas with riders in need of service.
arXiv Detail & Related papers (2020-07-27T07:08:02Z) - Dynamic Queue-Jump Lane for Emergency Vehicles under Partially Connected Settings: A Multi-Agent Deep Reinforcement Learning Approach [3.39322931607753]
Emergency vehicle (EMV) service is a key function of cities and is exceedingly challenging due to urban traffic congestion.
In this paper, we study the improvement of EMV service under V2X connectivity.
We consider the establishment of dynamic queue jump lanes (DQJLs) based on real-time coordination of connected vehicles in the presence of non-connected human-driven vehicles.
arXiv Detail & Related papers (2020-03-02T16:59:21Z)
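One of the related papers above (the modular fleet rebalancing framework) formulates RL state and action spaces as distributions over a grid of the operating area. The snippet below is a minimal, hypothetical illustration of that representation, not that paper's code; the grid size, bounds, and function name are assumptions. Idle-vehicle coordinates are binned into a coarse grid and normalized into a distribution usable as a state; a repositioning action in the same form would simply be another such distribution.

```python
# Hypothetical sketch: fleet state as a normalized distribution over a grid.
import numpy as np


def grid_distribution(positions, bounds, shape=(4, 4)):
    """Bin (x, y) vehicle positions into a coarse grid and normalize the
    counts into a probability distribution.
    bounds = (x_min, x_max, y_min, y_max); shape = (x_bins, y_bins)."""
    x_min, x_max, y_min, y_max = bounds
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    counts, _, _ = np.histogram2d(xs, ys, bins=shape,
                                  range=[[x_min, x_max], [y_min, y_max]])
    total = counts.sum()
    return counts / total if total > 0 else counts   # cell (i, j): share of fleet there


if __name__ == "__main__":
    idle_positions = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.5, 0.5)]
    state = grid_distribution(idle_positions, bounds=(0.0, 1.0, 0.0, 1.0))
    print(state)   # 4x4 array summing to 1: the current supply distribution
    # A repositioning action in this representation is another 4x4 distribution
    # describing where the fleet should be shifted toward.
```

Because the state dimension depends only on the grid resolution rather than the number of vehicles, this representation is what lets such a formulation scale with fleet size.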
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.