Multiagent Reinforcement Learning for Autonomous Routing and Pickup
Problem with Adaptation to Variable Demand
- URL: http://arxiv.org/abs/2211.14983v2
- Date: Tue, 21 Mar 2023 15:18:48 GMT
- Title: Multiagent Reinforcement Learning for Autonomous Routing and Pickup
Problem with Adaptation to Variable Demand
- Authors: Daniel Garces, Sushmita Bhattacharya, Stephanie Gil, Dimitri Bertsekas
- Abstract summary: We derive a learning framework to generate routing/pickup policies for a fleet of autonomous vehicles tasked with servicing stochastically appearing requests on a city map.
We focus on policies that give rise to coordination amongst the vehicles, thereby reducing wait times for servicing requests.
We propose a mechanism for switching the originally trained offline approximation when the current demand is outside the original validity region.
- Score: 1.8505047763172104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We derive a learning framework to generate routing/pickup policies for a
fleet of autonomous vehicles tasked with servicing stochastically appearing
requests on a city map. We focus on policies that 1) give rise to coordination
amongst the vehicles, thereby reducing wait times for servicing requests, 2)
are non-myopic, and consider a-priori potential future requests, 3) can adapt
to changes in the underlying demand distribution. Specifically, we are
interested in policies that are adaptive to fluctuations of actual demand
conditions in urban environments, such as on-peak vs. off-peak hours. We
achieve this through a combination of (i) an online play algorithm that
improves the performance of an offline-trained policy, and (ii) an offline
approximation scheme that allows for adapting to changes in the underlying
demand model. In particular, we achieve adaptivity of our learned policy to
different demand distributions by quantifying a region of validity using the
q-valid radius of a Wasserstein Ambiguity Set. We propose a mechanism for
switching the originally trained offline approximation when the current demand
is outside the original validity region. In this case, we propose to use an
offline architecture, trained on a historical demand model that is closer to
the current demand in terms of Wasserstein distance. We learn routing and
pickup policies over real taxicab requests in San Francisco with high
variability between on-peak and off-peak hours, demonstrating the ability of
our method to adapt to real fluctuation in demand distributions. Our numerical
results demonstrate that our method outperforms alternative rollout-based
reinforcement learning schemes, as well as other classical methods from
operations research.
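To make the adaptation mechanism concrete, the following is a minimal Python sketch of the deployment-time switching step. It assumes demand is summarized as a histogram over discrete pickup regions, that every offline approximation is stored together with the historical demand histogram it was trained on and a precomputed q-valid radius, and that a travel-cost matrix between regions is available; all names are illustrative and this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_1(a, b, cost):
    """Discrete 1-Wasserstein distance between demand histograms a and b.

    a, b: probability vectors over n pickup regions (each sums to 1).
    cost: (n, n) matrix of transport costs between regions.
    Solves the optimal-transport linear program with SciPy's HiGHS solver.
    """
    n = len(a)
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row sums of the transport plan equal a
        A_eq[n + i, i::n] = 1.0            # column sums of the transport plan equal b
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

def select_offline_approximation(current_hist, models, cost, active_idx):
    """Decide which offline-trained approximation the online play should use.

    models: list of (historical_hist, offline_approximation, q_valid_radius),
    one entry per historical demand model trained offline. If the current
    empirical demand still lies inside the active model's validity region
    (Wasserstein distance <= its q-valid radius), keep it; otherwise switch
    to the historical model closest to the current demand.
    """
    hist, _, radius = models[active_idx]
    if wasserstein_1(current_hist, hist, cost) <= radius:
        return active_idx
    dists = [wasserstein_1(current_hist, h, cost) for h, _, _ in models]
    return int(np.argmin(dists))
```

The q-valid radii themselves come from the offline analysis of the Wasserstein ambiguity sets; the sketch only shows how they gate the switch at deployment time.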
Related papers
- Projected Off-Policy Q-Learning (POP-QL) for Stabilizing Offline
Reinforcement Learning [57.83919813698673]
Projected Off-Policy Q-Learning (POP-QL) is a novel actor-critic algorithm that simultaneously reweights off-policy samples and constrains the policy to prevent divergence and reduce value-approximation error.
In our experiments, POP-QL not only shows competitive performance on standard benchmarks, but also out-performs competing methods in tasks where the data-collection policy is significantly sub-optimal.
arXiv Detail & Related papers (2023-11-25T00:30:58Z)
- Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online
Reinforcement Learning [71.02384943570372]
Family Offline-to-Online RL (FamO2O) is a framework that empowers existing algorithms to determine state-adaptive improvement-constraint balances.
FamO2O offers a statistically significant improvement over various existing methods, achieving state-of-the-art performance on the D4RL benchmark.
arXiv Detail & Related papers (2023-10-27T08:30:54Z) - Action-Quantized Offline Reinforcement Learning for Robotic Skill
Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe for converting static behavior datasets into policies that can perform better than the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
- An Online Approach to Solve the Dynamic Vehicle Routing Problem with
Stochastic Trip Requests for Paratransit Services [5.649212162857776]
We propose a fully online approach to solve the dynamic vehicle routing problem (DVRP).
It is difficult to batch paratransit requests together as they are temporally sparse.
We use Monte Carlo tree search to evaluate actions for any given state.
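As a rough illustration of rollout-based action evaluation, the sketch below scores each candidate action by flat Monte Carlo rollouts from the current state; it is a simplified stand-in for the tree search used in the paper, and `simulate`, `base_policy`, and all parameters are assumptions, not taken from the paper.

```python
import numpy as np

def evaluate_actions(state, candidate_actions, simulate, base_policy,
                     n_rollouts=32, horizon=20, rng=None):
    """Score candidate actions by Monte Carlo rollouts and return the best one.

    simulate(state, action, rng) -> (next_state, reward) is an assumed
    stochastic simulator of trip requests; base_policy(state) chooses the
    actions taken during the remainder of each rollout.
    """
    if rng is None:
        rng = np.random.default_rng()
    scores = []
    for action in candidate_actions:
        returns = []
        for _ in range(n_rollouts):
            s, r = simulate(state, action, rng)   # apply the candidate action
            total = r
            for _ in range(horizon - 1):          # follow the base policy afterwards
                s, r = simulate(s, base_policy(s), rng)
                total += r
            returns.append(total)
        scores.append(np.mean(returns))
    return candidate_actions[int(np.argmax(scores))]
```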
arXiv Detail & Related papers (2022-03-28T22:15:52Z)
- H-TD2: Hybrid Temporal Difference Learning for Adaptive Urban Taxi
Dispatch [9.35511513240868]
H-TD2 is a model-free, adaptive decision-making algorithm to coordinate a large fleet of automated taxis in a dynamic urban environment.
We derive a regret bound and design the trigger condition between the two behaviors to explicitly control the trade-off between computational complexity and the individual taxi policy's bounded sub-optimality.
Unlike recent reinforcement learning dispatch methods, this policy estimation is adaptive and robust to out-of-training domain events.
arXiv Detail & Related papers (2021-05-05T15:42:31Z)
- AdaPool: A Diurnal-Adaptive Fleet Management Framework using Model-Free
Deep Reinforcement Learning and Change Point Detection [34.77250498401055]
This paper introduces an adaptive, model-free deep reinforcement learning approach that can recognize and adapt to the diurnal patterns in the ride-sharing environment with car-pooling.
In addition to the adaptation logic in dispatching, this paper also proposes a dynamic, demand-aware vehicle-passenger matching and route planning framework.
arXiv Detail & Related papers (2021-04-01T02:14:01Z)
- MUSBO: Model-based Uncertainty Regularized and Sample Efficient Batch
Optimization for Deployment Constrained Reinforcement Learning [108.79676336281211]
Continuously deploying new policies for data collection and online learning is either not cost-effective or impractical.
We propose a new algorithmic learning framework called Model-based Uncertainty regularized and Sample Efficient Batch Optimization.
Our framework discovers novel and high quality samples for each deployment to enable efficient data collection.
arXiv Detail & Related papers (2021-02-23T01:30:55Z)
- Equilibrium Inverse Reinforcement Learning for Ride-hailing Vehicle
Network [1.599072005190786]
We formulate the problem of passenger-vehicle matching in a sparsely connected graph.
We propose an algorithm to derive an equilibrium policy in a multi-agent environment.
arXiv Detail & Related papers (2021-02-13T03:18:44Z)
- Non-Stationary Off-Policy Optimization [50.41335279896062]
We study the novel problem of off-policy optimization in piecewise-stationary contextual bandits.
In the offline learning phase, we partition logged data into categorical latent states and learn a near-optimal sub-policy for each state.
In the online deployment phase, we adaptively switch between the learned sub-policies based on their performance.
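A minimal sketch of that deployment-time switch, assuming each learned sub-policy logs the rewards it earns online: the selector below deploys the sub-policy with the best recent average and occasionally explores another one so estimates stay fresh after a change point (names and parameters are illustrative, not from the paper).

```python
import numpy as np

def pick_subpolicy(recent_rewards, window=200, explore_eps=0.05, rng=None):
    """recent_rewards: dict mapping sub-policy index -> list of online rewards.

    Returns the index of the sub-policy to deploy next: usually the one with
    the highest average reward over its last `window` interactions, with a
    small exploration probability; never-deployed sub-policies score +inf so
    each gets tried at least once.
    """
    if rng is None:
        rng = np.random.default_rng()
    ids = list(recent_rewards)
    if rng.random() < explore_eps:
        return int(rng.choice(ids))
    means = [np.mean(recent_rewards[i][-window:]) if recent_rewards[i] else np.inf
             for i in ids]
    return ids[int(np.argmax(means))]
```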
arXiv Detail & Related papers (2020-06-15T09:16:09Z)
- AI-based Resource Allocation: Reinforcement Learning for Adaptive
Auto-scaling in Serverless Environments [0.0]
Serverless computing has emerged as a compelling new paradigm of cloud computing models in recent years.
A common approach among both commercial and open source serverless computing platforms is workload-based auto-scaling.
In this paper we investigate the applicability of a reinforcement learning approach to request-based auto-scaling in a serverless framework.
arXiv Detail & Related papers (2020-05-29T06:18:39Z)
- MOPO: Model-based Offline Policy Optimization [183.6449600580806]
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data.
We show that an existing model-based RL algorithm already produces significant gains in the offline setting.
We propose to modify existing model-based RL methods by training them on rewards artificially penalized by the uncertainty of the dynamics (sketched below).
arXiv Detail & Related papers (2020-05-27T08:46:41Z)
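The reward penalty above is a one-line correction once a dynamics-uncertainty estimate is available; the sketch below uses the max predicted-std norm across an ensemble of learned dynamics models as that estimate, which is one common heuristic rather than necessarily the paper's exact estimator.

```python
import numpy as np

def penalized_reward(model_reward, ensemble_next_state_std, lam=1.0):
    """Reward used when training the policy on model-generated rollouts.

    ensemble_next_state_std: array of shape (n_ensemble, state_dim) with each
    dynamics model's predicted standard deviation of the next state. The
    uncertainty u(s, a) is the largest predicted-std norm across the ensemble,
    and the policy is trained on r - lam * u instead of the raw model reward.
    """
    u = np.max(np.linalg.norm(ensemble_next_state_std, axis=-1))
    return model_reward - lam * u
```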
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.