Towards More Efficient Shared Autonomous Mobility: A Learning-Based
Fleet Repositioning Approach
- URL: http://arxiv.org/abs/2210.08659v3
- Date: Thu, 8 Feb 2024 17:19:53 GMT
- Title: Towards More Efficient Shared Autonomous Mobility: A Learning-Based
Fleet Repositioning Approach
- Authors: Monika Filipovska, Michael Hyland, Haimanti Bala
- Abstract summary: This paper formulates SAMS fleet repositioning as a Markov Decision Process and presents a reinforcement learning-based repositioning (RLR) approach called integrated system-agent repositioning (ISR).
The ISR learns to respond to evolving demand patterns without explicit demand forecasting and to cooperate with optimization-based passenger-to-vehicle assignment.
Results show that the RLR approaches reduce passenger wait times by over 50% relative to a joint optimization (JO) benchmark.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Shared-use autonomous mobility services (SAMS) present new opportunities for
improving accessible and demand-responsive mobility. A fundamental challenge
that SAMS face is appropriate positioning of idle fleet vehicles to meet future
demand - a problem that strongly impacts service quality and efficiency. This
paper formulates SAMS fleet repositioning as a Markov Decision Process and
presents a reinforcement learning-based repositioning (RLR) approach called
integrated system-agent repositioning (ISR). The ISR learns a scalable fleet
repositioning strategy in an integrated manner: learning to respond to evolving
demand patterns without explicit demand forecasting and to cooperate with
optimization-based passenger-to-vehicle assignment. Numerical experiments are
conducted using New York City taxi data and an agent-based simulation tool. The
ISR is compared to an alternative RLR approach named externally guided
repositioning (EGR) and a benchmark joint optimization (JO) for
passenger-to-vehicle assignment and repositioning. The results show that the
RLR approaches reduce passenger wait times by over 50% relative to the JO
approach. The ISR's ability to bypass demand forecasting is
also demonstrated as it maintains comparable performance to EGR in terms of
average metrics. The results also demonstrate the model's transferability to
evolving conditions, including unseen demand patterns, extended operational
periods, and changes in the assignment strategy.
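The paper's ISR implementation is not reproduced here, but the core idea of the MDP formulation, an agent learning where to send idle vehicles as demand evolves, can be illustrated with a minimal tabular Q-learning sketch. All zone counts, demand probabilities, and hyperparameters below are hypothetical toy values, not the paper's setup:

```python
import random

random.seed(0)

N_ZONES = 4                      # hypothetical small service area
DEMAND = [0.1, 0.2, 0.6, 0.1]    # toy per-zone request probabilities
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q[s][a]: estimated value of repositioning an idle vehicle from zone s to zone a
Q = [[0.0] * N_ZONES for _ in range(N_ZONES)]

def step(zone, target):
    """Reward +1 if a request appears in the target zone, minus a small move cost."""
    served = 1.0 if random.random() < DEMAND[target] else 0.0
    cost = 0.0 if target == zone else 0.1
    return served - cost, target

state = 0
for _ in range(20000):
    # epsilon-greedy action selection over reposition targets
    if random.random() < EPS:
        action = random.randrange(N_ZONES)
    else:
        action = max(range(N_ZONES), key=lambda a: Q[state][a])
    reward, nxt = step(state, action)
    # standard Q-learning update
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

best = max(range(N_ZONES), key=lambda a: Q[0][a])
print(best)  # greedy reposition target learned for an idle vehicle in zone 0
```

Note how the agent learns to favour the high-demand zone without any explicit demand forecast, purely from observed rewards; this mirrors, in a toy way, the ISR's forecast-free learning.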
Related papers
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various CF events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- i-Rebalance: Personalized Vehicle Repositioning for Supply Demand Balance [11.720716530010323]
We propose i-Rebalance, a personalized vehicle repositioning technique based on deep reinforcement learning (DRL).
i-Rebalance estimates drivers' decisions on accepting repositioning recommendations through an on-field user study involving 99 real drivers.
Evaluation on real-world trajectory data shows that i-Rebalance improves driver acceptance rate by 38.07% and total driver income by 9.97%.
arXiv Detail & Related papers (2024-01-09T08:51:56Z)
- ICF-SRSR: Invertible scale-Conditional Function for Self-Supervised Real-world Single Image Super-Resolution [60.90817228730133]
Single image super-resolution (SISR) is a challenging problem that aims to up-sample a given low-resolution (LR) image to a high-resolution (HR) counterpart.
Recent approaches are trained on simulated LR images degraded by simplified down-sampling operators.
We propose a novel Invertible scale-Conditional Function (ICF) which can scale an input image and then restore the original input with different scale conditions.
arXiv Detail & Related papers (2023-07-24T12:42:45Z)
- Safe Model-Based Multi-Agent Mean-Field Reinforcement Learning [48.667697255912614]
Mean-field reinforcement learning addresses the policy of a representative agent interacting with an infinite population of identical agents.
We propose Safe-M$3$-UCRL, the first model-based mean-field reinforcement learning algorithm that attains safe policies even in the case of unknown transitions.
Our algorithm effectively meets the demand in critical areas while ensuring service accessibility in regions with low demand.
arXiv Detail & Related papers (2023-06-29T15:57:07Z)
- Fleet Rebalancing for Expanding Shared e-Mobility Systems: A Multi-agent Deep Reinforcement Learning Approach [17.193480676611358]
A key challenge in the operation of shared e-mobility systems is fleet rebalancing.
We first investigate rich sets of data collected from a real-world shared e-mobility system for one year.
With the learned knowledge we design a high-fidelity simulator, which is able to abstract key operation details of EV sharing.
Then we model the rebalancing task for shared e-mobility systems under continuous expansion as a Multi-Agent Reinforcement Learning (MARL) problem.
arXiv Detail & Related papers (2022-11-11T11:25:30Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- A Modular and Transferable Reinforcement Learning Framework for the Fleet Rebalancing Problem [2.299872239734834]
We propose a modular framework for fleet rebalancing based on model-free reinforcement learning (RL).
We formulate RL state and action spaces as distributions over a grid of the operating area, making the framework scalable.
Numerical experiments, using real-world trip and network data, demonstrate that this approach has several distinct advantages over baseline methods.
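The grid-distribution state representation described in this entry can be sketched as follows. This is an illustrative interpretation, not the paper's implementation; the 3x3 grid size and vehicle coordinates are hypothetical:

```python
from collections import Counter

GRID = 3  # hypothetical 3x3 discretisation of the operating area

def to_cell(x, y):
    """Map a normalised coordinate in [0, 1) to a flat grid-cell index."""
    return int(y * GRID) * GRID + int(x * GRID)

def fleet_distribution(positions):
    """Represent the fleet state as a probability distribution over grid cells.

    Using a fixed-size distribution (rather than per-vehicle features) keeps
    the state dimension constant regardless of fleet size, which is what makes
    this kind of representation scalable.
    """
    counts = Counter(to_cell(x, y) for x, y in positions)
    n = len(positions)
    return [counts.get(c, 0) / n for c in range(GRID * GRID)]

# toy fleet: four vehicles at normalised (x, y) positions
vehicles = [(0.1, 0.1), (0.15, 0.2), (0.8, 0.9), (0.5, 0.5)]
state = fleet_distribution(vehicles)
print(state)  # a length-9 distribution summing to 1
```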
arXiv Detail & Related papers (2021-05-27T16:32:28Z)
- Model-based Multi-agent Policy Optimization with Adaptive Opponent-wise Rollouts [52.844741540236285]
This paper investigates model-based methods in multi-agent reinforcement learning (MARL).
We propose a novel decentralized model-based MARL method, named Adaptive Opponent-wise Rollout Policy (AORPO).
arXiv Detail & Related papers (2021-05-07T16:20:22Z)
- Equilibrium Inverse Reinforcement Learning for Ride-hailing Vehicle Network [1.599072005190786]
We formulate the problem of passenger-vehicle matching in a sparsely connected graph.
We propose an algorithm to derive an equilibrium policy in a multi-agent environment.
arXiv Detail & Related papers (2021-02-13T03:18:44Z)
- Learning Vehicle Routing Problems using Policy Optimisation [4.093722933440819]
State-of-the-art approaches learn a policy using reinforcement learning, and the learnt policy acts as a pseudo solver.
These approaches have demonstrated good performance in some cases, but given the large search space typical of routing problems, they can converge too quickly to a poor policy.
We propose entropy regularised reinforcement learning (ERRL) that supports exploration by providing more policies.
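Entropy regularisation in its generic form adds an entropy bonus to the policy-gradient loss, penalising overly confident policies so exploration continues longer. The sketch below shows that generic objective, not the exact ERRL formulation; the logits and advantage value are toy inputs:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy_regularised_loss(logits, action, advantage, beta=0.01):
    """Policy-gradient loss with an entropy bonus:

        loss = -log pi(a|s) * A  -  beta * H(pi)

    A larger beta rewards higher-entropy (more exploratory) policies.
    """
    probs = softmax(logits)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return -math.log(probs[action]) * advantage - beta * entropy

# toy example: three actions, action 1 taken, positive advantage
loss = entropy_regularised_loss([1.0, 2.0, 0.5], action=1, advantage=0.8)
print(loss)
```

Because the policy's entropy is always positive, raising `beta` strictly lowers the loss for the same policy, which is what biases optimisation toward keeping multiple actions plausible.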
arXiv Detail & Related papers (2020-12-24T14:18:56Z)
- Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.