A Modular and Transferable Reinforcement Learning Framework for the
Fleet Rebalancing Problem
- URL: http://arxiv.org/abs/2105.13284v1
- Date: Thu, 27 May 2021 16:32:28 GMT
- Title: A Modular and Transferable Reinforcement Learning Framework for the
Fleet Rebalancing Problem
- Authors: Erotokritos Skordilis, Yi Hou, Charles Tripp, Matthew Moniot, Peter
Graf, David Biagioni
- Abstract summary: We propose a modular framework for fleet rebalancing based on model-free reinforcement learning (RL).
We formulate RL state and action spaces as distributions over a grid of the operating area, making the framework scalable.
Numerical experiments, using real-world trip and network data, demonstrate that this approach has several distinct advantages over baseline methods.
- Score: 2.299872239734834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mobility on demand (MoD) systems show great promise in realizing flexible and
efficient urban transportation. However, significant technical challenges arise
from operational decision making associated with MoD vehicle dispatch and fleet
rebalancing. For this reason, operators tend to employ simplified algorithms
that have been demonstrated to work well in a particular setting. To help
bridge the gap between novel and existing methods, we propose a modular
framework for fleet rebalancing based on model-free reinforcement learning (RL)
that can leverage an existing dispatch method to minimize system cost. In
particular, by treating dispatch as part of the environment dynamics, a
centralized agent can learn to intermittently direct the dispatcher to
reposition free vehicles and mitigate fleet imbalance. We formulate RL
state and action spaces as distributions over a grid partitioning of the
operating area, making the framework scalable and avoiding the complexities
associated with multiagent RL. Numerical experiments, using real-world trip and
network data, demonstrate that this approach has several distinct advantages
over baseline methods including: improved system cost; high degree of
adaptability to the selected dispatch method; and the ability to perform
scale-invariant transfer learning between problem instances with similar
vehicle and request distributions.
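The abstract's core representational idea is to encode RL states and actions as distributions over a grid partitioning of the operating area. A minimal sketch of that idea, assuming idle vehicles (or open requests) are given as (x, y) coordinates: count points per grid cell and normalize to a probability distribution. The function name, cell-indexing scheme, and example data below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def grid_distribution(points, bounds, n_rows, n_cols):
    """Map (x, y) positions (e.g. idle vehicles or open requests)
    to a normalized distribution over a grid partition of the
    operating area. Hypothetical helper for illustration only."""
    (x_min, x_max), (y_min, y_max) = bounds
    counts = np.zeros((n_rows, n_cols))
    for x, y in points:
        # Clip indices so points on the upper boundary fall in the last cell.
        i = min(int((y - y_min) / (y_max - y_min) * n_rows), n_rows - 1)
        j = min(int((x - x_min) / (x_max - x_min) * n_cols), n_cols - 1)
        counts[i, j] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

# Example: four idle vehicles on a 2x2 grid over the unit square.
vehicles = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.1)]
state = grid_distribution(vehicles, ((0.0, 1.0), (0.0, 1.0)), 2, 2)
```

Because the representation is normalized, its dimensionality depends only on the grid resolution, not on fleet or request counts, which is consistent with the scale-invariant transfer learning between problem instances that the abstract claims.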
Related papers
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios are still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- LLM-Assisted Light: Leveraging Large Language Model Capabilities for Human-Mimetic Traffic Signal Control in Complex Urban Environments [3.7788636451616697]
This work introduces an innovative approach that integrates Large Language Models into traffic signal control systems.
A hybrid framework that augments LLMs with a suite of perception and decision-making tools is proposed.
The findings from our simulations attest to the system's adeptness in adjusting to a multiplicity of traffic environments.
arXiv Detail & Related papers (2024-03-13T08:41:55Z)
- Online Relocating and Matching of Ride-Hailing Services: A Model-Based Modular Approach [7.992568451498863]
This study proposes an innovative model-based modular approach (MMA) to dynamically optimize order matching and vehicle relocation in a ride-hailing platform.
MMA is capable of achieving superior systematic performance compared to batch matching and reinforcement-learning based methods.
arXiv Detail & Related papers (2023-10-13T12:45:52Z)
- Safe Model-Based Multi-Agent Mean-Field Reinforcement Learning [48.667697255912614]
Mean-field reinforcement learning studies the policy of a representative agent interacting with an infinite population of identical agents.
We propose Safe-M$3$-UCRL, the first model-based mean-field reinforcement learning algorithm that attains safe policies even in the case of unknown transitions.
Our algorithm effectively meets the demand in critical areas while ensuring service accessibility in regions with low demand.
arXiv Detail & Related papers (2023-06-29T15:57:07Z)
- Supervised Permutation Invariant Networks for Solving the CVRP with Bounded Fleet Size [3.5235974685889397]
Learning to solve optimization problems, such as the vehicle routing problem, offers great computational advantages.
We propose a powerful supervised deep learning framework that constructs a complete tour plan from scratch while respecting an a priori fixed number of vehicles.
In combination with an efficient post-processing scheme, our supervised approach is not only much faster and easier to train but also achieves competitive results.
arXiv Detail & Related papers (2022-01-05T10:32:18Z)
- Relative Distributed Formation and Obstacle Avoidance with Multi-agent Reinforcement Learning [20.401609420707867]
We propose a distributed formation and obstacle avoidance method based on multi-agent reinforcement learning (MARL).
Compared with baselines, our method achieves lower formation error, a faster formation convergence rate, and an on-par obstacle-avoidance success rate.
arXiv Detail & Related papers (2021-11-14T13:02:45Z)
- Robust Dynamic Bus Control: A Distributional Multi-agent Reinforcement Learning Approach [11.168121941015013]
Bus bunching is a common phenomenon that undermines the efficiency and reliability of bus systems.
We develop a distributional MARL framework -- IQNC-M -- to learn continuous control.
Our results show that the proposed IQNC-M framework can effectively handle various extreme events.
arXiv Detail & Related papers (2021-11-02T23:41:09Z)
- Value Function is All You Need: A Unified Learning Framework for Ride Hailing Platforms [57.21078336887961]
Large ride-hailing platforms, such as DiDi, Uber and Lyft, connect tens of thousands of vehicles in a city to millions of ride demands throughout the day.
We propose a unified value-based dynamic learning framework (V1D3) for tackling both tasks.
arXiv Detail & Related papers (2021-05-18T19:22:24Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
- Multi-intersection Traffic Optimisation: A Benchmark Dataset and a Strong Baseline [85.9210953301628]
Control of traffic signals is fundamental and critical to alleviate traffic congestion in urban areas.
Because of the high complexity of modelling the problem, experimental settings of current works are often inconsistent.
We propose a novel and strong baseline model based on deep reinforcement learning with the encoder-decoder structure.
arXiv Detail & Related papers (2021-01-24T03:55:39Z)
- Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.