A methodological framework for Resilience as a Service (RaaS) in multimodal urban transportation networks
- URL: http://arxiv.org/abs/2408.17233v1
- Date: Fri, 30 Aug 2024 12:22:34 GMT
- Title: A methodological framework for Resilience as a Service (RaaS) in multimodal urban transportation networks
- Authors: Sara Jaber, Mostafa Ameli, S. M. Hassan Mahdavi, Neila Bhouri
- Abstract summary: This study explores the management of public transport disruptions through resilience-as-a-service (RaaS) strategies.
It develops an optimization model to effectively allocate resources and minimize the cost for operators and passengers.
The proposed model is applied to a case study in the Île-de-France region (Paris and its suburbs).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Public transportation systems are experiencing an increase in commuter traffic. This increase underscores the need for resilience strategies to manage unexpected service disruptions, ensuring rapid and effective responses that minimize adverse effects on stakeholders and enhance the system's ability to maintain essential functions and recover quickly. This study aims to explore the management of public transport disruptions through resilience as a service (RaaS) strategies, developing an optimization model to effectively allocate resources and minimize the cost for operators and passengers. The proposed model includes multiple transportation options, such as buses, taxis, and automated vans, and evaluates them as bridging alternatives to rail-disrupted services based on factors such as their availability, capacity, speed, and proximity to the disrupted station. This ensures that the most suitable vehicles are deployed to maintain service continuity. Applied to a case study in the Île-de-France region (Paris and its suburbs), complemented by a microscopic simulation, the model is compared to existing solutions such as bus bridging and reserve fleets. The results highlight the model's performance in minimizing costs and enhancing stakeholder satisfaction, optimizing transport management during disruptions.
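As a rough illustration of the allocation idea, the sketch below scores candidate bridging vehicles by cost per seat plus a dispatch-delay penalty and greedily covers the stranded demand. This is not the paper's actual optimization model: the vehicle attributes, the scoring rule, and the greedy selection are all invented assumptions.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    kind: str             # "bus", "taxi", or "automated van"
    capacity: int         # passengers per bridging trip
    speed_kmh: float      # average operating speed
    distance_km: float    # distance to the disrupted station
    cost_per_trip: float  # operator cost of one bridging trip

def dispatch_delay_h(v: Vehicle) -> float:
    """Hours until the vehicle reaches the disrupted station."""
    return v.distance_km / v.speed_kmh

def bridge_disruption(fleet: list[Vehicle], stranded: int) -> list[Vehicle]:
    """Greedily pick vehicles until the stranded demand is covered.

    Ranks by cost per seat plus a delay penalty -- a crude stand-in for
    the paper's optimization over operator and passenger costs.
    """
    ranked = sorted(fleet, key=lambda v: v.cost_per_trip / v.capacity + dispatch_delay_h(v))
    chosen, covered = [], 0
    for v in ranked:
        if covered >= stranded:
            break
        chosen.append(v)
        covered += v.capacity
    return chosen

fleet = [
    Vehicle("bus", 60, 25.0, 4.0, 90.0),
    Vehicle("taxi", 4, 35.0, 1.0, 15.0),
    Vehicle("automated van", 10, 30.0, 2.0, 25.0),
]
print([v.kind for v in bridge_disruption(fleet, stranded=70)])  # ['bus', 'automated van']
```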
Related papers
- GPT-Augmented Reinforcement Learning with Intelligent Control for Vehicle Dispatching [82.19172267487998]
This paper introduces GARLIC, a framework of GPT-Augmented Reinforcement Learning with Intelligent Control for vehicle dispatching.
arXiv Detail & Related papers (2024-08-19T08:23:38Z)
- A Learning-based Incentive Mechanism for Mobile AIGC Service in Decentralized Internet of Vehicles [49.86094523878003]
We propose a decentralized incentive mechanism for mobile AIGC service allocation.
We employ multi-agent deep reinforcement learning to balance the supply of AIGC services on roadside units (RSUs) with user demand for services in the IoV context.
arXiv Detail & Related papers (2024-03-29T12:46:07Z)
- Forecasting and Mitigating Disruptions in Public Bus Transit Services [7.948662269574215]
Public transportation systems often suffer from unexpected fluctuations in demand and disruptions, such as mechanical failures and medical emergencies.
These fluctuations and disruptions lead to delays and overcrowding, which are detrimental to the passengers' experience and to the overall performance of the transit service.
To proactively mitigate such events, many transit agencies station substitute (reserve) vehicles throughout their service areas, which they can dispatch to augment or replace vehicles on routes that suffer overcrowding or disruption.
However, determining the optimal stationing locations is challenging because disruptions occur at random and the candidate locations span an entire city; a toy greedy placement sketch follows.
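As a rough analogue of that stationing problem (not the paper's method), one could greedily place reserve vehicles at candidate depots to maximize the expected number of disruptions reachable within a response-time budget. The depot names, disruption probabilities, and travel times below are all invented for illustration.

```python
# Hypothetical greedy placement of reserve vehicles: maximize the expected
# number of disruptions reachable within a response-time budget.
disruption_prob = {"route_A": 0.30, "route_B": 0.15, "route_C": 0.25}  # per day
# travel_min[depot][route]: minutes from a candidate depot to each route
travel_min = {
    "depot_1": {"route_A": 8, "route_B": 22, "route_C": 30},
    "depot_2": {"route_A": 25, "route_B": 9, "route_C": 12},
    "depot_3": {"route_A": 14, "route_B": 18, "route_C": 10},
}

def expected_coverage(depots: set[str], budget_min: int = 15) -> float:
    """Expected daily disruptions reachable within budget_min from some depot."""
    return sum(
        p for route, p in disruption_prob.items()
        if any(travel_min[d][route] <= budget_min for d in depots)
    )

def greedy_placement(n_vehicles: int) -> set[str]:
    chosen: set[str] = set()
    for _ in range(n_vehicles):
        remaining = set(travel_min) - chosen
        if not remaining:
            break
        # add the depot with the largest marginal gain in expected coverage
        best = max(remaining, key=lambda d: expected_coverage(chosen | {d}))
        chosen.add(best)
    return chosen

print(greedy_placement(2))  # -> depot_2 and depot_3 (set print order may vary)
```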
arXiv Detail & Related papers (2024-03-06T22:06:21Z)
- TranDRL: A Transformer-Driven Deep Reinforcement Learning Enabled Prescriptive Maintenance Framework [58.474610046294856]
Industrial systems demand reliable predictive maintenance strategies to enhance operational efficiency and reduce downtime.
This paper introduces an integrated framework that leverages Transformer-based neural networks and deep reinforcement learning (DRL) algorithms to optimize system maintenance actions.
arXiv Detail & Related papers (2023-09-29T02:27:54Z)
- Vehicle Dispatching and Routing of On-Demand Intercity Ride-Pooling Services: A Multi-Agent Hierarchical Reinforcement Learning Approach [4.44413304473005]
Intercity ride-pooling services exhibit considerable potential for upgrading traditional intercity bus services.
Online operations suffer from inherent complexity due to the coupling of vehicle resource allocation among cities with pooled-ride vehicle routing.
This study proposes a two-level framework designed to facilitate online fleet management.
arXiv Detail & Related papers (2023-07-13T13:31:01Z)
- Safe Model-Based Multi-Agent Mean-Field Reinforcement Learning [48.667697255912614]
Mean-field reinforcement learning addresses the policy of a representative agent interacting with an infinite population of identical agents.
We propose Safe-M$3$-UCRL, the first model-based mean-field reinforcement learning algorithm that attains safe policies even in the case of unknown transitions.
Our algorithm effectively meets the demand in critical areas while ensuring service accessibility in regions with low demand.
arXiv Detail & Related papers (2023-06-29T15:57:07Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- A Modular and Transferable Reinforcement Learning Framework for the Fleet Rebalancing Problem [2.299872239734834]
We propose a modular framework for fleet rebalancing based on model-free reinforcement learning (RL).
We formulate RL state and action spaces as distributions over a grid of the operating area, making the framework scalable (see the encoding sketch below).
Numerical experiments, using real-world trip and network data, demonstrate that this approach has several distinct advantages over baseline methods.
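As a hedged sketch of that distribution-over-grid encoding (the grid size, area bounds, and normalization choice are assumptions, not details from the paper): vehicle positions become a normalized histogram over a coarse grid, so the state size is independent of fleet size.

```python
import numpy as np

def fleet_state(positions: np.ndarray, bounds: tuple, grid: int = 8) -> np.ndarray:
    """Encode vehicle positions as a normalized distribution over a grid.

    positions: (N, 2) array of (x, y) coordinates.
    bounds: (xmin, xmax, ymin, ymax) of the operating area.
    Returns a (grid, grid) array summing to 1, independent of fleet size N.
    """
    xmin, xmax, ymin, ymax = bounds
    hist, _, _ = np.histogram2d(
        positions[:, 0], positions[:, 1],
        bins=grid, range=[[xmin, xmax], [ymin, ymax]],
    )
    return hist / max(hist.sum(), 1.0)

# 100 vehicles scattered over a 10 km x 10 km area
rng = np.random.default_rng(0)
state = fleet_state(rng.uniform(0, 10, size=(100, 2)), bounds=(0, 10, 0, 10))
print(state.shape, state.sum())  # (8, 8) 1.0
```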
arXiv Detail & Related papers (2021-05-27T16:32:28Z)
- Real-time and Large-scale Fleet Allocation of Autonomous Taxis: A Case Study in New York Manhattan Island [14.501650948647324]
Traditional models fail to efficiently allocate the available fleet to deal with the imbalance between supply (autonomous taxis) and demand (trips).
We employ a Constrained Multi-agent Markov Decision Process (CMMDP) to model fleet allocation decisions.
We also leverage a Column Generation algorithm to guarantee efficiency and optimality at large scale; a toy constrained-allocation sketch follows.
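For illustration only, here is a loose analogue of constrained fleet allocation as a small linear program. This is neither the paper's CMMDP formulation nor its Column Generation procedure, and all data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: expected trips served per taxi assigned to each zone,
# a per-zone cap, and a total fleet budget.
trips_per_taxi = np.array([3.0, 5.0, 2.0, 4.0])  # zones 0..3
zone_cap = np.array([40, 25, 50, 30])            # max taxis per zone
fleet_size = 100

n = len(trips_per_taxi)
# linprog minimizes, so negate the objective to maximize served trips.
res = linprog(
    c=-trips_per_taxi,
    A_ub=np.ones((1, n)), b_ub=[fleet_size],  # total fleet constraint
    bounds=[(0, cap) for cap in zone_cap],    # per-zone caps
    method="highs",
)
print(res.x, -res.fun)  # taxis per zone, total expected trips served
```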
arXiv Detail & Related papers (2020-09-06T16:00:15Z)
- FlexPool: A Distributed Model-Free Deep Reinforcement Learning Algorithm for Joint Passengers & Goods Transportation [36.989179280016586]
This paper considers combining passenger transportation with goods delivery to improve vehicle-based transportation.
We propose FlexPool, a distributed model-free deep reinforcement learning algorithm that jointly serves passenger and goods workloads.
We show that FlexPool achieves 30% higher fleet utilization and 35% higher fuel efficiency than model-free baselines (a toy joint-admission sketch follows the entry).
arXiv Detail & Related papers (2020-07-27T17:25:58Z)
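To make the joint passenger-and-goods idea concrete, here is a toy admission rule for a vehicle with both seats and parcel slots. FlexPool itself learns such decisions with deep reinforcement learning; this fixed heuristic and its parameters are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PoolVehicle:
    seats_free: int = 3
    parcel_slots_free: int = 4
    route: list[str] = field(default_factory=list)

def accept(v: PoolVehicle, kind: str, detour_min: float, max_detour: float = 6.0) -> bool:
    """Toy joint admission rule: take a passenger or parcel if capacity
    remains and the added detour is small."""
    if detour_min > max_detour:
        return False
    if kind == "passenger" and v.seats_free > 0:
        v.seats_free -= 1
    elif kind == "parcel" and v.parcel_slots_free > 0:
        v.parcel_slots_free -= 1
    else:
        return False
    v.route.append(kind)
    return True

v = PoolVehicle()
for req, detour in [("passenger", 3), ("parcel", 2), ("passenger", 9), ("parcel", 4)]:
    print(req, detour, accept(v, req, detour))
```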