Reinforcement Learning in the Wild: Scalable RL Dispatching Algorithm
Deployed in Ridehailing Marketplace
- URL: http://arxiv.org/abs/2202.05118v1
- Date: Thu, 10 Feb 2022 16:07:17 GMT
- Title: Reinforcement Learning in the Wild: Scalable RL Dispatching Algorithm
Deployed in Ridehailing Marketplace
- Authors: Soheil Sadeghi Eshkevari, Xiaocheng Tang, Zhiwei Qin, Jinhan Mei,
Cheng Zhang, Qianying Meng, Jia Xu
- Abstract summary: This study proposes a real-time dispatching algorithm based on reinforcement learning.
It is deployed online in multiple cities under DiDi's operation for A/B testing and is launched in one of the major international markets.
The deployed algorithm shows over 1.3% improvement in total driver income from A/B testing.
- Score: 12.298997392937876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, a real-time dispatching algorithm based on reinforcement
learning is proposed and, for the first time, deployed at large scale.
Current dispatching methods in ridehailing platforms are predominantly based on
myopic or rule-based non-myopic approaches. Reinforcement learning enables
dispatching policies that are informed of historical data and able to employ
the learned information to optimize returns of expected future trajectories.
Previous studies in this field yielded promising results, yet have left room
for further improvements in terms of performance gain, self-dependency,
transferability, and scalable deployment mechanisms. The present study proposes
a standalone RL-based dispatching solution that is equipped with multiple
mechanisms to ensure robust and efficient on-policy learning and inference
while being adaptable for full-scale deployment. A new form of value updating
based on temporal difference is proposed that is more adapted to the inherent
uncertainty of the problem. For the driver-order assignment, a customized
utility function is proposed that, when tuned to the statistics of the
market, results in remarkable performance improvement and interpretability. In
addition, to reduce the risk of cancellation after driver assignment, an
adaptive graph pruning strategy based on the multi-armed bandit problem is
introduced. The method is evaluated using offline simulation with real data and
yields notable performance improvement. In addition, the algorithm is deployed
online in multiple cities under DiDi's operation for A/B testing and is
launched in one of the major international markets as the primary mode of
dispatch. The deployed algorithm shows over 1.3% improvement in total driver
income from A/B testing. In addition, by causal inference analysis, as much as
5.3% improvement in major performance metrics is detected after full-scale
deployment.
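The abstract refers to a temporal-difference based value update adapted to the uncertainty of dispatching, but the specific variant is not spelled out in this summary. As a point of reference, the following is a minimal, hypothetical sketch of a plain tabular TD(0) update over (grid cell, time bucket) states, i.e. the baseline form such an update builds on; all names and parameters are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a tabular TD(0) value update over
# (grid_cell, time_bucket) states; the paper's uncertainty-adapted variant
# is not reproduced here, only the baseline temporal-difference form.
from collections import defaultdict

class SpatioTemporalValueTable:
    def __init__(self, alpha=0.05, gamma=0.95):
        self.v = defaultdict(float)  # V(state), state = (grid_cell, time_bucket)
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # per-step discount factor

    def td_update(self, state, reward, next_state, terminal=False):
        """One-step TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
        target = reward if terminal else reward + self.gamma * self.v[next_state]
        delta = target - self.v[state]       # TD error
        self.v[state] += self.alpha * delta
        return delta
```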
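For the driver-order assignment, the abstract describes a customized utility function tuned to market statistics without giving its form. The sketch below shows one common way such a utility could enter a bipartite matching step, mixing the immediate trip price with a discounted value advantage from a learned value function; the weights, field names, and utility form are assumptions, not the paper's tuned function.

```python
# Hypothetical sketch of driver-order assignment as bipartite matching; the
# edge utility mixing price and value advantage is an illustrative assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_drivers_to_orders(drivers, orders, value_fn,
                            gamma=0.95, w_price=1.0, w_value=1.0):
    utility = np.zeros((len(drivers), len(orders)))
    for i, d in enumerate(drivers):
        for j, o in enumerate(orders):
            # Value gained by ending at the order's destination state versus
            # remaining in the driver's current state.
            advantage = ((gamma ** o["trip_steps"]) * value_fn(o["dest_state"])
                         - value_fn(d["state"]))
            utility[i, j] = w_price * o["price"] + w_value * advantage
    rows, cols = linear_sum_assignment(-utility)  # maximize total utility
    return list(zip(rows.tolist(), cols.tolist()))
```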
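The adaptive graph pruning for cancellation risk is framed as a multi-armed bandit, but the arm set and reward are not specified here. Below is a generic UCB1 sketch in which the arms are candidate pickup-distance limits used to prune driver-order edges, rewarded by the realized completion (non-cancellation) rate of each dispatch window; all specifics are assumptions.

```python
# Hypothetical UCB1 bandit over candidate pickup-distance limits used to prune
# driver-order edges; the arm set and reward definition are assumptions.
import math

class PruningRadiusBandit:
    def __init__(self, radii_km=(1.0, 2.0, 3.0, 5.0)):
        self.arms = list(radii_km)
        self.counts = [0] * len(self.arms)
        self.means = [0.0] * len(self.arms)

    def select_arm(self):
        """Pick a radius index: play untried arms first, then follow UCB1 scores."""
        for k, n in enumerate(self.counts):
            if n == 0:
                return k
        total = sum(self.counts)
        scores = [m + math.sqrt(2.0 * math.log(total) / n)
                  for m, n in zip(self.means, self.counts)]
        return max(range(len(self.arms)), key=scores.__getitem__)

    def update(self, k, completion_rate):
        """Reward = fraction of assignments in the window that were not cancelled."""
        self.counts[k] += 1
        self.means[k] += (completion_rate - self.means[k]) / self.counts[k]
```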
Related papers
- Statistically Efficient Variance Reduction with Double Policy Estimation
for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning [53.97273491846883]
We propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation.
We validate our method in multiple tasks of OpenAI Gym with D4RL benchmarks.
arXiv Detail & Related papers (2023-08-28T20:46:07Z)
- Distillation Policy Optimization [5.439020425819001]
We introduce an actor-critic learning framework that harmonizes two data sources for both evaluation and control.
This framework incorporates variance reduction mechanisms, including a unified advantage estimator (UAE) and a residual baseline.
Our results showcase substantial enhancements in sample efficiency for on-policy algorithms, effectively bridging the gap to the off-policy approaches.
arXiv Detail & Related papers (2023-02-01T15:59:57Z)
- A Meta Reinforcement Learning Approach for Predictive Autoscaling in the Cloud [10.970391043991363]
We propose an end-to-end predictive meta model-based RL algorithm, aiming to optimally allocate resource to maintain a stable CPU utilization level.
Our algorithm not only ensures the predictability and accuracy of the scaling strategy, but also enables the scaling decisions to adapt to the changing workloads with high sample efficiency.
arXiv Detail & Related papers (2022-05-31T13:54:04Z)
- A Deep Value-network Based Approach for Multi-Driver Order Dispatching [55.36656442934531]
We propose a deep reinforcement learning based solution for order dispatching.
We conduct large scale online A/B tests on DiDi's ride-dispatching platform.
Results show that CVNet consistently outperforms other recently proposed dispatching methods.
arXiv Detail & Related papers (2021-06-08T16:27:04Z)
- MUSBO: Model-based Uncertainty Regularized and Sample Efficient Batch Optimization for Deployment Constrained Reinforcement Learning [108.79676336281211]
Continuous deployment of new policies for data collection and online learning is either cost ineffective or impractical.
We propose a new algorithmic learning framework called Model-based Uncertainty regularized and Sample Efficient Batch Optimization.
Our framework discovers novel and high quality samples for each deployment to enable efficient data collection.
arXiv Detail & Related papers (2021-02-23T01:30:55Z)
- Reinforcement Learning for Datacenter Congestion Control [50.225885814524304]
Successful congestion control algorithms can dramatically improve latency and overall network throughput.
To date, no such learning-based algorithms have shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
arXiv Detail & Related papers (2021-02-18T13:49:28Z)
- META-Learning Eligibility Traces for More Sample Efficient Temporal Difference Learning [2.0559497209595823]
We propose a meta-learning method for adjusting the eligibility trace parameter, in a state-dependent manner.
The adaptation is achieved with the help of auxiliary learners that learn distributional information about the update targets online.
We prove that, under some assumptions, the proposed method improves the overall quality of the update targets, by minimizing the overall target error.
arXiv Detail & Related papers (2020-06-16T03:41:07Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Optimizing for the Future in Non-Stationary MDPs [52.373873622008944]
We present a policy gradient algorithm that maximizes a forecast of future performance.
We show that our algorithm, called Prognosticator, is more robust to non-stationarity than two online adaptation techniques.
arXiv Detail & Related papers (2020-05-17T03:41:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.