Towards More Efficient Shared Autonomous Mobility: A Learning-Based
Fleet Repositioning Approach
- URL: http://arxiv.org/abs/2210.08659v3
- Date: Thu, 8 Feb 2024 17:19:53 GMT
- Authors: Monika Filipovska, Michael Hyland, Haimanti Bala
- Abstract summary: This paper formulates SAMS fleet repositioning as a Markov Decision Process and presents a reinforcement learning-based repositioning (RLR) approach called integrated system-agent repositioning (ISR).
The ISR learns to respond to evolving demand patterns without explicit demand forecasting and to cooperate with optimization-based passenger-to-vehicle assignment.
Results demonstrate the RLR approaches' substantial reductions in passenger wait times, over 50%, relative to a benchmark joint optimization (JO) approach.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Shared-use autonomous mobility services (SAMS) present new opportunities for
improving accessible and demand-responsive mobility. A fundamental challenge
that SAMS face is appropriate positioning of idle fleet vehicles to meet future
demand - a problem that strongly impacts service quality and efficiency. This
paper formulates SAMS fleet repositioning as a Markov Decision Process and
presents a reinforcement learning-based repositioning (RLR) approach called
integrated system-agent repositioning (ISR). The ISR learns a scalable fleet
repositioning strategy in an integrated manner: learning to respond to evolving
demand patterns without explicit demand forecasting and to cooperate with
optimization-based passenger-to-vehicle assignment. Numerical experiments are
conducted using New York City taxi data and an agent-based simulation tool. The
ISR is compared to an alternative RLR approach named externally guided
repositioning (EGR) and a benchmark joint optimization (JO) for
passenger-to-vehicle assignment and repositioning. The results demonstrate the
RLR approaches' substantial reductions in passenger wait times, over 50%,
relative to the JO approach. The ISR's ability to bypass demand forecasting is
also demonstrated as it maintains comparable performance to EGR in terms of
average metrics. The results also demonstrate the model's transferability to
evolving conditions, including unseen demand patterns, extended operational
periods, and changes in the assignment strategy.
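The MDP formulation described in the abstract can be sketched minimally as follows. This is an illustrative assumption, not the paper's exact specification: the zone count, demand model, and reward (negative unserved demand as a proxy for passenger wait time) are all placeholders.

```python
# Hypothetical sketch of a fleet-repositioning MDP. Zone count, demand
# model, and reward definition are illustrative assumptions.
import random
from dataclasses import dataclass, field

N_ZONES = 4

@dataclass
class RepositioningMDP:
    """State: idle-vehicle counts plus recent request counts per zone.
    Action: one target zone per idle vehicle.
    Reward: negative unserved demand, a proxy for passenger wait time."""
    idle: list = field(default_factory=lambda: [2, 0, 1, 0])
    demand: list = field(default_factory=lambda: [0, 3, 0, 1])

    def step(self, action):
        # Move each idle vehicle to its commanded zone.
        moved = [0] * N_ZONES
        for target in action:
            moved[target] += 1
        self.idle = moved
        # Penalize demand that no repositioned vehicle can serve.
        unserved = sum(max(d - s, 0) for d, s in zip(self.demand, self.idle))
        reward = -unserved
        # New requests arrive for the next decision epoch.
        self.demand = [random.randint(0, 3) for _ in range(N_ZONES)]
        return self.idle + self.demand, reward

mdp = RepositioningMDP()
state, reward = mdp.step([1, 1, 3])  # send the three idle vehicles to zones 1, 1, 3
```

An RL agent in this setting would map the state vector to repositioning targets, learning from the reward signal rather than from an explicit demand forecast.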
Related papers
- MetaTrading: An Immersion-Aware Model Trading Framework for Vehicular Metaverse Services [94.61039892220037]
We present a novel immersion-aware model trading framework that incentivizes metaverse users (MUs) to contribute learning models for augmented reality (AR) services in the vehicular metaverse.
Considering dynamic network conditions and privacy concerns, we formulate the reward decisions of metaverse service providers (MSPs) as a multi-agent Markov decision process.
Experimental results demonstrate that the proposed framework can effectively provide higher-value models for object detection and classification in AR services on real AR-related vehicle datasets.
arXiv Detail & Related papers (2024-10-25T16:20:46Z)
- Physics Enhanced Residual Policy Learning (PERPL) for safety cruising in mixed traffic platooning under actuator and communication delay [8.172286651098027]
Linear control models have gained extensive application in vehicle control due to their simplicity, ease of use, and support for stability analysis.
Reinforcement learning (RL) models, on the other hand, offer adaptability but suffer from a lack of interpretability and generalization capabilities.
This paper aims to develop a family of RL-based controllers enhanced by physics-informed policies.
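The residual idea behind such physics-enhanced controllers can be sketched as a fixed linear feedback law plus a bounded learned correction. The gains and clipping bound below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of a physics-enhanced residual policy: a linear
# feedback baseline plus a clipped learned correction. Gains are assumed.
K_GAP, K_SPEED = 0.5, 0.3   # proportional feedback gains (illustrative)

def linear_controller(gap_error, speed_error):
    """Physics-based baseline: proportional feedback on spacing and speed."""
    return K_GAP * gap_error + K_SPEED * speed_error

def residual_policy(gap_error, speed_error, residual):
    """Final acceleration command = baseline + bounded RL residual."""
    residual = max(-1.0, min(1.0, residual))  # keep the learned term bounded
    return linear_controller(gap_error, speed_error) + residual
```

Bounding the residual preserves the stability properties of the linear baseline while letting the learned term adapt to delay and nonlinearity.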
arXiv Detail & Related papers (2024-09-23T23:02:34Z)
- A methodological framework for Resilience as a Service (RaaS) in multimodal urban transportation networks [0.0]
This study aims to explore the management of public transport disruptions through resilience as a service strategies.
It develops an optimization model to effectively allocate resources and minimize the cost for operators and passengers.
The proposed model is applied to a case study in the Île-de-France region (Paris and its suburbs).
arXiv Detail & Related papers (2024-08-30T12:22:34Z)
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various CF events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
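The IDM component is a standard car-following formula; a minimal sketch follows, with typical textbook parameter values rather than the paper's calibrated ones.

```python
# Sketch of the Intelligent Driver Model (IDM). Parameter defaults are
# common textbook values, not the paper's calibration.
import math

def idm_acceleration(v, dv, s, v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """v: own speed [m/s], dv: approach rate (v - v_lead), s: gap to leader [m]."""
    # Desired dynamic gap: jam distance + time headway + braking term.
    s_star = s0 + v * T + v * dv / (2 * math.sqrt(a_max * b))
    # Free-flow term minus interaction term.
    return a_max * (1 - (v / v0) ** delta - (s_star / s) ** 2)
```

Because every parameter (desired speed, headway, comfortable deceleration) has a physical meaning, the IDM term gives the learned controller the interpretability the summary mentions.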
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- i-Rebalance: Personalized Vehicle Repositioning for Supply Demand Balance [11.720716530010323]
We propose i-Rebalance, a personalized vehicle repositioning technique based on deep reinforcement learning (DRL).
i-Rebalance estimates drivers' decisions on accepting reposition recommendations through an on-field user study involving 99 real drivers.
Evaluation of real-world trajectory data shows that i-Rebalance improves driver acceptance rate by 38.07% and total driver income by 9.97%.
arXiv Detail & Related papers (2024-01-09T08:51:56Z)
- Safe Model-Based Multi-Agent Mean-Field Reinforcement Learning [48.667697255912614]
Mean-field reinforcement learning addresses the policy of a representative agent interacting with the infinite population of identical agents.
We propose Safe-M$3$-UCRL, the first model-based mean-field reinforcement learning algorithm that attains safe policies even in the case of unknown transitions.
Our algorithm effectively meets the demand in critical areas while ensuring service accessibility in regions with low demand.
arXiv Detail & Related papers (2023-06-29T15:57:07Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- A Modular and Transferable Reinforcement Learning Framework for the Fleet Rebalancing Problem [2.299872239734834]
We propose a modular framework for fleet rebalancing based on model-free reinforcement learning (RL).
We formulate RL state and action spaces as distributions over a grid of the operating area, making the framework scalable.
Numerical experiments, using real-world trip and network data, demonstrate that this approach has several distinct advantages over baseline methods.
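The distribution-over-a-grid representation can be sketched as follows; the grid shape and counts are illustrative assumptions. Normalizing per-cell counts makes the representation independent of fleet size, which is what makes the framework scalable.

```python
# Hypothetical sketch of a grid-distribution state representation:
# per-cell counts normalized to a probability distribution, so the state
# has the same scale for any fleet size. Counts are illustrative.
import numpy as np

def to_distribution(counts):
    """Normalize per-cell vehicle (or demand) counts into a distribution."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    if total == 0:
        return np.full_like(counts, 1.0 / counts.size)  # uniform fallback
    return counts / total

vehicle_grid = [[3, 0], [1, 4]]            # idle vehicles per grid cell
state = to_distribution(vehicle_grid)      # sums to 1 regardless of fleet size
```

An action in this scheme would be another distribution over the same grid, specifying what fraction of the idle fleet to direct toward each cell.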
arXiv Detail & Related papers (2021-05-27T16:32:28Z)
- Model-based Multi-agent Policy Optimization with Adaptive Opponent-wise Rollouts [52.844741540236285]
This paper investigates model-based methods in multi-agent reinforcement learning (MARL).
We propose a novel decentralized model-based MARL method, named Adaptive Opponent-wise Rollout Policy (AORPO).
arXiv Detail & Related papers (2021-05-07T16:20:22Z)
- Equilibrium Inverse Reinforcement Learning for Ride-hailing Vehicle Network [1.599072005190786]
We formulate the problem of passenger-vehicle matching in a sparsely connected graph.
We propose an algorithm to derive an equilibrium policy in a multi-agent environment.
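Matching passengers to vehicles on a sparse bipartite graph can be sketched with a standard augmenting-path algorithm (Kuhn's); this is a generic maximum-matching baseline, not the paper's equilibrium method, and the connectivity below is illustrative.

```python
# Minimal sketch of passenger-vehicle matching on a sparse bipartite graph
# using Kuhn's augmenting-path algorithm. The edge list is illustrative.
def max_matching(adj, n_vehicles):
    """adj[p] lists the vehicles reachable from passenger p.
    Returns the size of a maximum passenger-vehicle matching."""
    match = [-1] * n_vehicles  # vehicle index -> matched passenger (or -1)

    def try_assign(p, seen):
        for v in adj[p]:
            if v not in seen:
                seen.add(v)
                # Take a free vehicle, or displace its passenger if that
                # passenger can be re-routed along an augmenting path.
                if match[v] == -1 or try_assign(match[v], seen):
                    match[v] = p
                    return True
        return False

    return sum(try_assign(p, set()) for p in range(len(adj)))

# Passengers 0-2 can only reach nearby vehicles (sparse connectivity).
matched = max_matching([[0], [0, 1], [1]], n_vehicles=2)
```

Sparsity matters here: each passenger's candidate set is small, so the augmenting-path search stays cheap even for large fleets.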
arXiv Detail & Related papers (2021-02-13T03:18:44Z)
- Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.