Dynamic Demand Management for Parcel Lockers
- URL: http://arxiv.org/abs/2409.05061v2
- Date: Thu, 12 Sep 2024 08:19:32 GMT
- Title: Dynamic Demand Management for Parcel Lockers
- Authors: Daniela Sailer, Robert Klein, Claudius Steinhardt
- Abstract summary: We develop a solution framework that orchestrates algorithmic techniques rooted in Sequential Decision Analytics and Reinforcement Learning.
Our innovative approach to combining these techniques enables us to address the strong interrelations between the two decision types.
Our computational study shows that our method outperforms a myopic benchmark by 13.7% and an industry-inspired policy by 12.6%.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In pursuit of a more sustainable and cost-efficient last mile, parcel lockers have gained a firm foothold in the parcel delivery landscape. To fully exploit their potential and simultaneously ensure customer satisfaction, successful management of the locker's limited capacity is crucial. This is challenging as future delivery requests and pickup times are stochastic from the provider's perspective. In response, we propose to dynamically control whether the locker is presented as an available delivery option to each incoming customer with the goal of maximizing the number of served requests weighted by their priority. Additionally, we take different compartment sizes into account, which entails a second type of decision as parcels scheduled for delivery must be allocated. We formalize the problem as an infinite-horizon sequential decision problem and find that exact methods are intractable due to the curses of dimensionality. In light of this, we develop a solution framework that orchestrates multiple algorithmic techniques rooted in Sequential Decision Analytics and Reinforcement Learning, namely cost function approximation and an offline trained parametric value function approximation together with a truncated online rollout. Our innovative approach to combining these techniques enables us to address the strong interrelations between the two decision types. As a general methodological contribution, we enhance the training of our value function approximation with a modified version of experience replay that enforces structure in the value function. Our computational study shows that our method outperforms a myopic benchmark by 13.7% and an industry-inspired policy by 12.6%.
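To make the two interleaved decisions concrete, the following is a minimal Python sketch of a depth-one version of the availability and allocation logic described above. It is our illustration under stated assumptions, not the authors' implementation: the compartment sizes, capacities, linear value function weights, and smallest-feasible-compartment rule are hypothetical stand-ins for the paper's cost function approximation and trained value function approximation.
```python
# Minimal sketch, assuming made-up sizes, capacities, and VFA weights;
# not the paper's implementation.
SIZES = [1, 2, 3]                        # compartment sizes (assumption)
CAPACITY = {1: 10, 2: 6, 3: 4}           # free compartments per size (assumption)

def value_approx(free, weights=(0.8, 1.0, 1.3)):
    """Linear parametric value function approximation over remaining capacity."""
    return sum(w * free[s] for w, s in zip(weights, SIZES))

def allocate(parcel_size, free):
    """Allocation in a cost-function-approximation style: smallest feasible size."""
    for s in SIZES:
        if s >= parcel_size and free[s] > 0:
            return s
    return None

def offer_locker(parcel_size, priority, free):
    """Present the locker iff the request's priority beats the approximate
    opportunity cost of the compartment it would consume (depth-one lookahead)."""
    s = allocate(parcel_size, free)
    if s is None:
        return False                     # no feasible compartment
    post = dict(free)                    # post-decision state after allocation
    post[s] -= 1
    return priority >= value_approx(free) - value_approx(post)

free = dict(CAPACITY)
print(offer_locker(parcel_size=2, priority=1.0, free=free))  # True: cost is 1.0
```
The paper's framework replaces this depth-one comparison with a truncated online rollout over an offline-trained VFA; the sketch only mirrors the decision structure.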
Related papers
- Dual-Agent Deep Reinforcement Learning for Dynamic Pricing and Replenishment [15.273192037219077]
We study the dynamic pricing and replenishment problems under inconsistent decision frequencies.
We integrate a decision tree-based machine learning approach, trained on comprehensive market data.
In this approach, two agents handle pricing and inventory and are updated on different time scales.
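As a rough, hypothetical illustration of two agents acting on different time scales (not the paper's architecture), the toy loop below updates a pricing rule every period while replenishment acts only every K periods; the demand model, update rules, and constants are invented for the sketch.
```python
# Toy two-time-scale loop; demand model, rules, and constants are assumptions.
import random
random.seed(0)

K = 7                                    # replenishment acts once per K pricing steps
price, stock = 10.0, 50
for t in range(28):
    demand = max(0, int(random.gauss(20 - price, 2)))  # toy price-sensitive demand
    sold = min(demand, stock)
    stock -= sold
    price += 0.1 if demand > sold else -0.1  # fast agent: raise price on stockout
    if t % K == K - 1:                       # slow agent: base-stock reorder
        stock = max(stock, 50)
    print(t, round(price, 2), stock)
```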
arXiv Detail & Related papers (2024-10-28T15:12:04Z)
- Scalarisation-based risk concepts for robust multi-objective optimisation [4.12484724941528]
We study the multi-objective case of this problem.
We identify that the majority of all robust multi-objective algorithms rely on two key operations: robustification and scalarisation.
As these operations are not necessarily commutative, the order that they are performed in has an impact on the resulting solutions.
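The non-commutativity is easy to verify numerically. The toy check below (our example values: two objectives to maximize, two uncertainty scenarios) shows that robustifying before scalarising and scalarising before robustifying give different scores.
```python
# Toy check that robustification and scalarisation do not commute.
scenarios = [(1.0, 3.0), (3.0, 1.0)]    # objective vectors per scenario (made up)
w = (0.5, 0.5)                           # scalarisation weights

# robustify first (componentwise worst case), then scalarise
worst_per_obj = tuple(min(s[i] for s in scenarios) for i in range(2))
rob_then_scal = sum(wi * fi for wi, fi in zip(w, worst_per_obj))

# scalarise first (weighted sum per scenario), then robustify
scal_then_rob = min(sum(wi * fi for wi, fi in zip(w, s)) for s in scenarios)

print(rob_then_scal, scal_then_rob)      # 1.0 vs 2.0: the order matters
```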
arXiv Detail & Related papers (2024-05-16T16:11:00Z)
- Resilient Constrained Reinforcement Learning [87.4374430686956]
We study a class of constrained reinforcement learning (RL) problems in which multiple constraint specifications are not identified before training.
It is challenging to identify appropriate constraint specifications due to the undefined trade-off between the reward training objective and constraint satisfaction.
We propose a new constrained RL approach that searches for policy and constraint specifications together.
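As a hypothetical one-dimensional caricature of searching for a policy and a constraint specification together (not the paper's method), the sketch below jointly picks a policy parameter x and a constraint relaxation s, charging a fixed price per unit of relaxation; the reward, cost, penalty, and grids are all our assumptions.
```python
# Toy joint search over policy parameter x and constraint relaxation s.
reward = lambda x: -(x - 2.0) ** 2 + 4.0     # toy return of policy x
cost = lambda x: x                            # toy constraint cost; budget is 1 + s
PENALTY = 0.5                                 # price per unit of relaxation

best = max(
    ((x / 100.0, s / 100.0)
     for x in range(0, 301) for s in range(0, 301)
     if cost(x / 100.0) <= 1.0 + s / 100.0),
    key=lambda p: reward(p[0]) - PENALTY * p[1],
)
print(best)  # (1.75, 0.75): relax exactly until extra reward stops paying for it
```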
arXiv Detail & Related papers (2023-12-28T18:28:23Z)
- Achieving Fairness in Multi-Agent Markov Decision Processes Using Reinforcement Learning [30.605881670761853]
We propose a Reinforcement Learning approach to achieve fairness in finite-horizon episodic MDPs.
We show that such an approach achieves sub-linear regret in terms of the number of episodes.
arXiv Detail & Related papers (2023-06-01T03:43:53Z)
- Byzantine-Robust Online and Offline Distributed Reinforcement Learning [60.970950468309056]
We consider a distributed reinforcement learning setting where multiple agents explore the environment and communicate their experiences through a central server.
An $\alpha$-fraction of the agents are adversarial and can report arbitrary fake information.
We seek to identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents.
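One standard way to tolerate such adversaries, shown here purely as an illustration rather than the paper's algorithm, is to aggregate the agents' reports with a trimmed mean, which discards the most extreme values on both sides; the reports and trimming rule below are our own.
```python
# Illustrative trimmed-mean aggregation of agents' reported estimates.
def trimmed_mean(reports, alpha):
    k = int(alpha * len(reports))            # reports to discard per side
    kept = sorted(reports)[k:len(reports) - k]
    return sum(kept) / len(kept)

honest = [1.9, 2.0, 2.1, 2.0, 1.95, 2.05, 2.0, 1.98]
byzantine = [100.0, -100.0]                  # arbitrary fake reports
print(trimmed_mean(honest + byzantine, alpha=0.2))  # stays close to 2.0
```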
arXiv Detail & Related papers (2022-06-01T00:44:53Z)
- Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z)
- Online Learning with Knapsacks: the Best of Both Worlds [54.28273783164608]
We study online learning problems in which a decision maker wants to maximize their expected reward without violating a finite set of $m$ resource constraints.
Our framework allows the decision maker to handle general reward and cost functions flexibly.
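A common primal-dual template for this reward-versus-resources trade-off (a generic bandits-with-knapsacks sketch under our own toy arms and pacing rule, not the paper's algorithm) prices resource consumption with a dual variable and paces spending to the per-round budget.
```python
# Generic primal-dual sketch; arm means are assumed known to keep it short.
import random
random.seed(1)

arms = [(0.9, 0.8), (0.5, 0.2)]   # (mean reward, mean resource cost) per arm
T, B = 1000, 300                  # horizon and total resource budget
rho = B / T                       # per-round budget
lam, spent, total_reward = 0.0, 0.0, 0.0
for t in range(T):
    if spent >= B:
        break                     # budget exhausted
    i = max(range(len(arms)), key=lambda a: arms[a][0] - lam * arms[a][1])
    total_reward += 1.0 if random.random() < arms[i][0] else 0.0
    c = 1.0 if random.random() < arms[i][1] else 0.0
    spent += c
    lam = max(0.0, lam + 0.05 * (c - rho))   # dual update paces consumption
print(total_reward, spent)
```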
arXiv Detail & Related papers (2022-02-28T12:10:48Z)
- Off-line approximate dynamic programming for the vehicle routing problem with stochastic customers and demands via decentralized decision-making [0.0]
This paper studies a variant of the vehicle routing problem (VRP) where both customer locations and demands are uncertain.
The objective is to maximize the served demands while fulfilling vehicle capacities and time restrictions.
We develop a Q-learning algorithm featuring state-of-the-art acceleration techniques such as Replay Memory and Double Q Network.
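For reference, here is a minimal tabular rendition of the two named techniques, a replay memory and Double Q-learning's decoupled action selection and evaluation, on a 5-state chain that stands in for the routing environment; the environment and hyperparameters are our assumptions, not the paper's setup.
```python
# Tabular Double Q-learning with a replay memory on a toy 5-state chain.
import random
random.seed(0)

N_S, N_A, GAMMA, ALPHA, EPS = 5, 2, 0.9, 0.1, 0.1
QA = [[0.0] * N_A for _ in range(N_S)]
QB = [[0.0] * N_A for _ in range(N_S)]
replay = []                              # replay memory of (s, a, r, s') tuples

def step(s, a):                          # action 1 moves right; reward at the end
    s2 = min(s + 1, N_S - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_S - 1 else 0.0)

s = 0
for t in range(5000):
    a = random.randrange(N_A) if random.random() < EPS else \
        max(range(N_A), key=lambda x: QA[s][x] + QB[s][x])
    s2, r = step(s, a)
    replay.append((s, a, r, s2))
    for bs, ba, br, bs2 in random.sample(replay, min(8, len(replay))):
        if random.random() < 0.5:        # double Q: one table selects, the other evaluates
            a_star = max(range(N_A), key=lambda x: QA[bs2][x])
            QA[bs][ba] += ALPHA * (br + GAMMA * QB[bs2][a_star] - QA[bs][ba])
        else:
            a_star = max(range(N_A), key=lambda x: QB[bs2][x])
            QB[bs][ba] += ALPHA * (br + GAMMA * QA[bs2][a_star] - QB[bs][ba])
    s = 0 if s2 == N_S - 1 else s2       # reset the episode at the goal
print(QA[0])
```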
arXiv Detail & Related papers (2021-09-21T14:28:09Z)
- Outcome-Driven Reinforcement Learning via Variational Inference [95.82770132618862]
We discuss a new perspective on reinforcement learning, recasting it as the problem of inferring actions that achieve desired outcomes, rather than a problem of maximizing rewards.
To solve the resulting outcome-directed inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function.
We empirically demonstrate that this method eliminates the need to design reward functions and leads to effective goal-directed behaviors.
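The core trick can be shown in a few lines: treat the reward as the log-likelihood that a state achieves the desired outcome under some outcome model. The Gaussian model and numbers below are our assumptions, not the paper's derivation.
```python
# Toy outcome-driven reward: log p(outcome = goal | state) under an assumed
# Gaussian outcome model; no reward function is designed by hand.
import math

SIGMA = 0.5                              # assumed outcome noise scale
def outcome_reward(state, goal):
    return (-0.5 * ((state - goal) / SIGMA) ** 2
            - math.log(SIGMA * math.sqrt(2 * math.pi)))

for s in [0.0, 0.5, 1.0]:
    print(s, round(outcome_reward(s, goal=1.0), 3))   # reward peaks at the goal
```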
arXiv Detail & Related papers (2021-04-20T18:16:21Z)
- Efficient Empowerment Estimation for Unsupervised Stabilization [75.32013242448151]
The empowerment principle enables unsupervised stabilization of dynamical systems at upright positions.
We propose an alternative solution based on a trainable representation of a dynamical system as a Gaussian channel.
We show that our method has a lower sample complexity, is more stable in training, possesses the essential properties of the empowerment function, and allows estimation of empowerment from images.
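For a one-dimensional linear-Gaussian channel, the quantity being estimated has a classical closed form (the Gaussian channel's mutual information), which the snippet below evaluates; the gain and variances are example values of ours.
```python
# Closed-form mutual information of a 1-D Gaussian channel y = a*x + noise,
# with x ~ N(0, sx2) and noise ~ N(0, sn2); example numbers are ours.
import math

a, sx2, sn2 = 2.0, 1.0, 0.25             # gain, action variance, noise variance
empowerment = 0.5 * math.log(1.0 + a * a * sx2 / sn2)   # in nats
print(round(empowerment, 3))             # more gain / less noise -> more empowerment
```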
arXiv Detail & Related papers (2020-07-14T21:10:16Z)
- Solving the Order Batching and Sequencing Problem using Deep Reinforcement Learning [2.4565068569913384]
We present a Deep Reinforcement Learning (DRL) approach for deciding how and when orders should be batched and picked in a warehouse to minimize the number of tardy orders.
In particular, the technique facilitates making decisions on whether an order should be picked individually (pick-by-order) or picked in a batch with other orders (pick-by-batch) and if so with which other orders.
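Stripped of the learning, the underlying decision can be caricatured as below: batch an order only if waiting for batch partners still meets its due time. The timing model and thresholds are our invention, not the learned DRL policy.
```python
# Toy pick-by-order vs pick-by-batch rule; the timing model is an assumption.
def decide(order_due, now, batch_wait, pick_time):
    finish_batched = now + batch_wait + pick_time   # wait for batch partners first
    if finish_batched <= order_due:
        return "pick-by-batch"    # batching is safe and saves picker travel
    return "pick-by-order"        # batching would make the order tardy

print(decide(order_due=30, now=0, batch_wait=10, pick_time=15))  # pick-by-batch
print(decide(order_due=20, now=0, batch_wait=10, pick_time=15))  # pick-by-order
```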
arXiv Detail & Related papers (2020-06-16T20:40:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.