Playing hide and seek: tackling in-store picking operations while
improving customer experience
- URL: http://arxiv.org/abs/2301.02142v1
- Date: Thu, 5 Jan 2023 16:35:17 GMT
- Title: Playing hide and seek: tackling in-store picking operations while
improving customer experience
- Authors: Fábio Neves-Moreira and Pedro Amorim
- Abstract summary: We formalize a new problem called the Dynamic In-store Picker Routing Problem (diPRP), in which a picker tries to pick online orders while minimizing customer encounters.
Our work suggests that retailers should be able to scale the in-store picking of online orders without jeopardizing the experience of offline customers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The evolution of the retail business presents new challenges and raises
pivotal questions on how to reinvent stores and supply chains to meet the
growing demand of the online channel. One of the recent measures adopted by
omnichannel retailers is to address the growth of online sales using in-store
picking, which allows serving online orders using existing assets. However, it
comes with the downside of harming the offline customer experience. To achieve
picking policies adapted to the dynamic customer flows of a retail store, we
formalize a new problem called Dynamic In-store Picker Routing Problem (diPRP).
In this problem - diPRP - a picker tries to pick online orders while
minimizing customer encounters. We model the problem as a Markov Decision
Process (MDP) and solve it using a hybrid solution approach comprising
mathematical programming and reinforcement learning components. Computational
experiments on synthetic instances suggest that the algorithm converges to
efficient policies. Furthermore, we apply our approach in the context of a
large European retailer to assess the results of the proposed policies
regarding the number of orders picked and customers encountered. Our work
suggests that retailers should be able to scale the in-store picking of online
orders without jeopardizing the experience of offline customers. The policies
learned using the proposed solution approach reduced the number of customer
encounters by more than 50% when compared to policies solely focused on picking
orders. Thus, to pursue omnichannel strategies that adequately trade-off
operational efficiency and customer experience, retailers cannot rely on actual
simplistic picking strategies, such as choosing the shortest possible route.
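The abstract models in-store picking as a Markov Decision Process with a reward that trades off picking efficiency against customer encounters. The paper's actual formulation is not given here; the following is a minimal illustrative sketch of what such an MDP transition could look like on a toy grid store, where the state is the picker's position plus the set of items still to pick, and all names, grid dimensions, and reward values are assumptions for illustration.

```python
GRID = 5  # hypothetical store size (GRID x GRID aisle cells)

def step(pos, items, customers, action):
    """Apply a move action; return (new_pos, remaining_items, reward).

    pos: (x, y) picker position; items: set of cells with items to pick;
    customers: set of cells currently occupied by offline customers.
    """
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    # Move within the grid boundaries.
    new_pos = (min(max(pos[0] + dx, 0), GRID - 1),
               min(max(pos[1] + dy, 0), GRID - 1))
    reward = -1.0                 # travel cost per move
    if new_pos in customers:
        reward -= 10.0            # penalty for a customer encounter
    if new_pos in items:
        reward += 5.0             # bonus for picking an online-order item
    remaining = items - {new_pos} # the item at this cell, if any, is picked
    return new_pos, remaining, reward
```

A policy that maximizes cumulative reward under such a transition would implicitly route around customer-dense aisles, which is the trade-off the diPRP formalizes.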
Related papers
- A Primal-Dual Online Learning Approach for Dynamic Pricing of Sequentially Displayed Complementary Items under Sale Constraints [54.46126953873298]
We address the problem of dynamically pricing complementary items that are sequentially displayed to customers.
Coherent pricing policies for complementary items are essential because optimizing the pricing of each item individually is ineffective.
We empirically evaluate our approach using synthetic settings randomly generated from real-world data, and compare its performance in terms of constraints violation and regret.
arXiv Detail & Related papers (2024-07-08T09:55:31Z) - Actions Speak What You Want: Provably Sample-Efficient Reinforcement
Learning of the Quantal Stackelberg Equilibrium from Strategic Feedbacks [94.07688076435818]
We study reinforcement learning for learning a Quantal Stackelberg Equilibrium (QSE) in an episodic Markov game with a leader-follower structure.
Our algorithms are based on (i) learning the quantal response model via maximum likelihood estimation and (ii) model-free or model-based RL for solving the leader's decision making problem.
arXiv Detail & Related papers (2023-07-26T10:24:17Z) - Learning to Price Supply Chain Contracts against a Learning Retailer [3.7814216736076434]
We study the supply chain contract design problem faced by a data-driven supplier.
Both the supplier and the retailer are uncertain about the market demand.
We show that our pricing policies lead to sublinear regret bounds in all these cases.
arXiv Detail & Related papers (2022-11-02T04:00:47Z) - No-Regret Learning in Two-Echelon Supply Chain with Unknown Demand
Distribution [48.27759561064771]
We consider the two-echelon supply chain model introduced in [Cachon and Zipkin, 1999] under two different settings.
We design algorithms that achieve favorable guarantees for both regret and convergence to the optimal inventory decision in both settings.
Our algorithms are based on Online Gradient Descent and Online Newton Step, together with several new ingredients specifically designed for our problem.
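The summary above names Online Gradient Descent as one of the building blocks. The paper's problem-specific ingredients are not reproduced here; the sketch below shows only the standard OGD loop on a one-dimensional decision interval, with a decaying step size and projection, as a hedged illustration of the base algorithm (all parameter names and values are illustrative).

```python
def ogd(grads, x0, lr, lo, hi):
    """Run Online Gradient Descent over a sequence of per-round gradient
    callbacks; return the list of iterates after each round."""
    x, iterates = x0, []
    for t, grad in enumerate(grads, start=1):
        x = x - (lr / t ** 0.5) * grad(x)  # decaying step size lr / sqrt(t)
        x = min(max(x, lo), hi)            # project back onto [lo, hi]
        iterates.append(x)
    return iterates
```

On a repeated convex loss such as (x - 1)^2, the iterates drift toward the minimizer; regret guarantees of the kind cited above bound the gap to the best fixed decision in hindsight.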
arXiv Detail & Related papers (2022-10-23T08:45:39Z) - MNL-Bandits under Inventory and Limited Switches Constraints [38.960764902819434]
We develop an efficient UCB-like algorithm to optimize the assortments while learning customers' choices from data.
We prove that our algorithm can achieve a sub-linear regret bound $\tilde{O}\left(T^{1-\alpha/2}\right)$ if $O(T^{\alpha})$ switches are allowed.
arXiv Detail & Related papers (2022-04-22T16:02:27Z) - Approaching sales forecasting using recurrent neural networks and
transformers [57.43518732385863]
We develop three alternatives to tackle the problem of forecasting the customer sales at day/store/item level using deep learning techniques.
Our empirical results show how good performance can be achieved by using a simple sequence to sequence architecture with minimal data preprocessing effort.
The proposed solution achieves an RMSLE of around 0.54, which is competitive with other, more problem-specific solutions proposed in the Kaggle competition.
arXiv Detail & Related papers (2022-04-16T12:03:52Z) - Characterization of Frequent Online Shoppers using Statistical Learning
with Sparsity [54.26540039514418]
This work reports a method to learn the shopping preferences of frequent shoppers to an online gift store by combining ideas from retail analytics and statistical learning with sparsity.
arXiv Detail & Related papers (2021-11-11T05:36:39Z) - OPAM: Online Purchasing-behavior Analysis using Machine learning [0.8121462458089141]
We present a customer purchasing behavior analysis system using supervised, unsupervised and semi-supervised learning methods.
The proposed system analyzes session and user-journey level purchasing behaviors to identify customer categories/clusters.
arXiv Detail & Related papers (2021-02-02T17:29:52Z) - Universal Trading for Order Execution with Oracle Policy Distillation [99.57416828489568]
We propose a novel universal trading policy optimization framework to bridge the gap between the noisy yet imperfect market states and the optimal action sequences for order execution.
We show that our framework can better guide the learning of the common policy towards practically optimal execution by an oracle teacher with perfect information.
arXiv Detail & Related papers (2021-01-28T05:52:18Z) - Solving the Order Batching and Sequencing Problem using Deep
Reinforcement Learning [2.4565068569913384]
We present a Deep Reinforcement Learning (DRL) approach for deciding how and when orders should be batched and picked in a warehouse to minimize the number of tardy orders.
In particular, the technique facilitates making decisions on whether an order should be picked individually (pick-by-order) or picked in a batch with other orders (pick-by-batch) and if so with which other orders.
arXiv Detail & Related papers (2020-06-16T20:40:41Z) - Interpretable Personalization via Policy Learning with Linear Decision
Boundaries [14.817218449140338]
Effective personalization of goods and services has become a core business for companies to improve revenues and maintain a competitive edge.
This paper studies the personalization problem through the lens of policy learning.
We propose a class of policies with linear decision boundaries and propose learning algorithms using tools from causal inference.
arXiv Detail & Related papers (2020-03-17T05:48:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.