DeliverAI: Reinforcement Learning Based Distributed Path-Sharing Network
for Food Deliveries
- URL: http://arxiv.org/abs/2311.02017v2
- Date: Sun, 11 Feb 2024 06:29:38 GMT
- Title: DeliverAI: Reinforcement Learning Based Distributed Path-Sharing Network
for Food Deliveries
- Authors: Ashman Mehra, Snehanshu Saha, Vaskar Raychoudhury, Archana Mathur
- Abstract summary: Existing food delivery methods are sub-optimal because each delivery is individually optimized to go directly from the producer to the consumer via the shortest time path.
We propose DeliverAI - a reinforcement learning-based path-sharing algorithm.
Our results show that DeliverAI can reduce the delivery fleet size by 12%, the distance traveled by 13%, and achieve 50% higher fleet utilization compared to the baselines.
- Score: 1.474723404975345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Delivery of items from the producer to the consumer has experienced
significant growth over the past decade and has been greatly fueled by the
recent pandemic. Amazon Fresh, Shopify, UberEats, InstaCart, and DoorDash are
growing rapidly and share the same business model of consumer-item or
food delivery. Existing food delivery methods are sub-optimal because each
delivery is individually optimized to go directly from the producer to the
consumer via the shortest time path. We observe a significant scope for
reducing the costs associated with completing deliveries under the current
model. We model our food delivery problem as a multi-objective optimization,
where both consumer satisfaction and delivery costs need to be optimized.
Taking inspiration from the success of ride-sharing in the taxi industry, we
propose DeliverAI - a reinforcement learning-based path-sharing algorithm.
Unlike previous attempts at path-sharing, DeliverAI can provide real-time,
time-efficient decision-making using a reinforcement-learning-enabled agent
system. Our novel agent interaction scheme leverages path-sharing among
deliveries to reduce the total distance traveled while keeping the delivery
completion time under check. We test our methodology rigorously on a
simulation setup using real data from the city of Chicago. Our results show
that DeliverAI can reduce the delivery fleet size by 12%, the distance
traveled by 13%, and achieve 50% higher fleet utilization compared to the
baselines.
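The multi-objective trade-off described in the abstract (delivery cost versus consumer satisfaction) is commonly handled in reinforcement learning by scalarizing the objectives into a single reward. The sketch below is a minimal, hypothetical illustration of that idea with tabular Q-learning on a toy grid city; the environment, weights, and reward values are assumptions for exposition, not DeliverAI's actual design:

```python
import random

# Toy grid city: a courier moves from the producer at (0, 0) to the
# consumer at (4, 4). The reward scalarizes two objectives, distance
# traveled (delivery cost) and elapsed time (a consumer-satisfaction
# proxy), with illustrative weights.
SIZE = 5
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # four grid moves
W_DIST, W_TIME = 0.7, 0.3  # scalarization weights (assumed)

def step(state, action):
    """Apply a move, clipping at the grid border."""
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))
    # Every move costs one unit of distance and one unit of time.
    reward = -(W_DIST + W_TIME)
    done = nxt == GOAL
    if done:
        reward += 10.0  # delivery-completion bonus
    return nxt, reward, done

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning over the scalarized reward."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s, done, steps = (0, 0), False, 0
        while not done and steps < 100:
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            nxt, r, done = step(s, ACTIONS[a])
            best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (r + gamma * best_next - q)
            s = nxt
            steps += 1
    return Q

def greedy_path(Q):
    """Roll out the learned greedy policy from the producer."""
    s, path, steps = (0, 0), [(0, 0)], 0
    while s != GOAL and steps < 50:
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        s, _, _ = step(s, ACTIONS[a])
        path.append(s)
        steps += 1
    return path
```

Adjusting `W_DIST` and `W_TIME` reweights cost against timeliness; in a richer model the two step penalties would diverge (e.g. a shared leg adds little distance but some time), which is where the trade-off becomes non-trivial.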
Related papers
- Deep Reinforcement Learning for Traveling Purchaser Problems [63.37136587778153]
The traveling purchaser problem (TPP) is an important optimization problem with broad applications.
We propose a novel approach based on deep reinforcement learning (DRL), which addresses route construction and purchase planning separately.
By introducing a meta-learning strategy, the policy network can be trained stably on large-sized TPP instances.
arXiv Detail & Related papers (2024-04-03T05:32:10Z)
- On-Time Delivery in Crowdshipping Systems: An Agent-Based Approach Using Streaming Data [0.7865191493201839]
We present an agent-based approach to on-time parcel delivery with crowds.
Our system performs data stream processing on the couriers' smartphone sensor data to predict delivery delays.
Our experiments show that through accurate delay predictions and purposeful task transfers many delays can be prevented.
arXiv Detail & Related papers (2024-01-22T16:45:15Z)
- Fair collaborative vehicle routing: A deep multi-agent reinforcement learning approach [49.00137468773683]
Collaborative vehicle routing occurs when carriers collaborate by sharing their transportation requests and performing requests on behalf of each other.
Traditional game theoretic solution concepts are expensive to calculate as the characteristic function scales exponentially with the number of agents.
We propose to model this problem as a coalitional bargaining game solved using deep multi-agent reinforcement learning.
arXiv Detail & Related papers (2023-10-26T15:42:29Z)
- Approaching sales forecasting using recurrent neural networks and transformers [57.43518732385863]
We develop three alternatives to tackle the problem of forecasting customer sales at the day/store/item level using deep learning techniques.
Our empirical results show how good performance can be achieved by using a simple sequence to sequence architecture with minimal data preprocessing effort.
The proposed solution achieves a RMSLE of around 0.54, which is competitive with other more specific solutions to the problem proposed in the Kaggle competition.
arXiv Detail & Related papers (2022-04-16T12:03:52Z)
- A Deep Reinforcement Learning Approach for Constrained Online Logistics Route Assignment [4.367543599338385]
Properly assigning a candidate logistics route to each shipping parcel is crucial for the logistics industry.
This online route-assignment problem can be viewed as a constrained online decision-making problem.
We develop a model-free DRL approach named PPO-RA, in which Proximal Policy Optimization (PPO) is improved with dedicated techniques to address the challenges of route assignment (RA).
arXiv Detail & Related papers (2021-09-08T07:27:39Z)
- A Deep Value-network Based Approach for Multi-Driver Order Dispatching [55.36656442934531]
We propose a deep reinforcement learning based solution for order dispatching.
We conduct large scale online A/B tests on DiDi's ride-dispatching platform.
Results show that CVNet consistently outperforms other recently proposed dispatching methods.
arXiv Detail & Related papers (2021-06-08T16:27:04Z)
- Learning to Optimize Industry-Scale Dynamic Pickup and Delivery Problems [17.076557377480444]
The Dynamic Pickup and Delivery Problem (DPDP) is aimed at dynamically scheduling vehicles among multiple sites in order to minimize the cost when delivery orders are not known a priori.
We propose a data-driven approach, Spatial-Temporal Aided Double Deep Graph Network (ST-DDGN), to solve industry-scale DPDP.
Our method is entirely data-driven and thus adaptive, i.e., the relational representation of adjacent vehicles can be learned and corrected from data periodically by ST-DDGN.
arXiv Detail & Related papers (2021-05-27T01:16:00Z)
- Value Function is All You Need: A Unified Learning Framework for Ride Hailing Platforms [57.21078336887961]
Large ride-hailing platforms, such as DiDi, Uber and Lyft, connect tens of thousands of vehicles in a city to millions of ride demands throughout the day.
We propose a unified value-based dynamic learning framework (V1D3) for tackling both tasks.
arXiv Detail & Related papers (2021-05-18T19:22:24Z)
- Mathematical simulation of package delivery optimization using a combination of carriers [0.0]
The authors analyze and propose a solution to the problem of optimizing the cost of long-distance package delivery using a combination of paths served by supplier fleets, worldwide carriers, and local carriers.
The experiments are based on data from United States companies using a wide range of carriers for delivery services.
arXiv Detail & Related papers (2020-11-02T18:44:04Z)
- Real-time and Large-scale Fleet Allocation of Autonomous Taxis: A Case Study in New York Manhattan Island [14.501650948647324]
Traditional models fail to efficiently allocate the available fleet to deal with the imbalance of supply (autonomous taxis) and demand (trips).
We employ a Constrained Multi-agent Markov Decision Processes (CMMDP) to model fleet allocation decisions.
We also leverage a Column Generation algorithm to guarantee efficiency and optimality at large scale.
arXiv Detail & Related papers (2020-09-06T16:00:15Z)
- Congestion-aware Evacuation Routing using Augmented Reality Devices [96.68280427555808]
We present a congestion-aware routing solution for indoor evacuation, which produces real-time individual-customized evacuation routes among multiple destinations.
A population density map, obtained on-the-fly by aggregating locations of evacuees from user-end Augmented Reality (AR) devices, is used to model the congestion distribution inside a building.
arXiv Detail & Related papers (2020-04-25T22:54:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.