Multi-Purchase Behavior: Modeling, Estimation and Optimization
- URL: http://arxiv.org/abs/2006.08055v2
- Date: Sat, 5 Aug 2023 18:46:16 GMT
- Title: Multi-Purchase Behavior: Modeling, Estimation and Optimization
- Authors: Theja Tulabandhula, Deeksha Sinha, Saketh Reddy Karra, Prasoon Patidar
- Abstract summary: We present a parsimonious multi-purchase family of choice models called the Bundle-MVL-K family.
We develop a binary search based iterative strategy that efficiently computes optimized recommendations for this model.
- Score: 0.9337154228221861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of modeling purchase of multiple products and utilizing
it to display optimized recommendations for online retailers and e-commerce
platforms.
We present a parsimonious multi-purchase family of choice models called the
Bundle-MVL-K family, and develop a binary search based iterative strategy that
efficiently computes optimized recommendations for this model. We establish the
hardness of computing optimal recommendation sets, and derive several
structural properties of the optimal solution that aid in speeding up
computation. This is one of the first attempts at operationalizing
the multi-purchase class of choice models. We show one of the first quantitative
links between modeling multiple purchase behavior and revenue gains. The
efficacy of our modeling and optimization techniques compared to competing
solutions is shown using several real world datasets on multiple metrics such
as model fitness, expected revenue gains and run-time reductions. For example,
the expected revenue benefit of taking multiple purchases into account is
observed to be $\sim5\%$ in relative terms for the Ta Feng and UCI shopping
datasets, when compared to the MNL model for instances with $\sim 1500$
products. Additionally, across $6$ real world datasets, the test log-likelihood
fits of our models are on average $17\%$ better in relative terms. Our work
contributes to the study of multi-purchase decisions, analyzing consumer demand
and the retailer's optimization problem. The simplicity of our models and the
iterative nature of our optimization technique allow practitioners to meet
stringent computational constraints while increasing their revenues in
practical recommendation applications at scale, especially in e-commerce
platforms and other marketplaces.
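The abstract does not spell out the binary-search strategy for the Bundle-MVL-K model. As a rough, hypothetical illustration of how binary search can drive assortment optimization, the sketch below solves the simpler cardinality-constrained single-purchase MNL problem; the function name, the price/attraction-weight inputs, and the feasibility test are standard devices from that literature, not taken from this paper:

```python
def optimal_assortment(prices, weights, K, tol=1e-6):
    """Binary search for the best expected MNL revenue with at most K items.

    Under MNL, expected revenue of assortment S is
        R(S) = sum_{i in S} p_i * v_i / (1 + sum_{i in S} v_i),
    and R(S) >= z for some |S| <= K iff the top-K items by v_i*(p_i - z)
    have positive margins summing to at least z.
    """
    def best_margin(z):
        # Largest achievable sum of v_i * (p_i - z) over at most K items.
        margins = sorted((v * (p - z) for p, v in zip(prices, weights)),
                         reverse=True)
        return sum(m for m in margins[:K] if m > 0)

    lo, hi = 0.0, max(prices)          # optimal revenue lies in [0, max price]
    while hi - lo > tol:
        z = (lo + hi) / 2
        if best_margin(z) >= z:        # revenue z is achievable
            lo = z
        else:
            hi = z

    # Recover an assortment achieving revenue ~ lo.
    z = lo
    ranked = sorted(range(len(prices)),
                    key=lambda i: weights[i] * (prices[i] - z), reverse=True)
    S = [i for i in ranked[:K] if weights[i] * (prices[i] - z) > 0]
    return S, lo
```

Each feasibility check is a sort plus a top-K sum, so the whole search runs in O(n log n log(1/tol)) time; the paper's iterative strategy targets the harder multi-purchase objective, but this shows the shape of the approach.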
Related papers
- Decoding-Time Language Model Alignment with Multiple Objectives [88.64776769490732]
Existing methods primarily focus on optimizing LMs for a single reward function, limiting their adaptability to varied objectives.
Here, we propose $\textbf{multi-objective decoding (MOD)}$, a decoding-time algorithm that outputs the next token from a linear combination of predictions.
We show why existing approaches can be sub-optimal even in natural settings and obtain optimality guarantees for our method.
arXiv Detail & Related papers (2024-06-27T02:46:30Z) - Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z) - Modeling Choice via Self-Attention [8.394221523847325]
We show that our attention-based choice model is a low-rank generalization of the Halo Multinomial Logit (Halo-MNL) model.
We also establish the first realistic-scale benchmark for choice estimation on real data, conducting an evaluation of existing models.
arXiv Detail & Related papers (2023-11-11T11:13:07Z) - UniMatch: A Unified User-Item Matching Framework for the Multi-purpose
Merchant Marketing [27.459774494479227]
We present a unified user-item matching framework to simultaneously conduct item recommendation and user targeting with just one model.
Our framework results in significant performance gains in comparison with the state-of-the-art methods, with greatly reduced cost on computing resources and daily maintenance.
arXiv Detail & Related papers (2023-07-19T13:49:35Z) - Action-State Dependent Dynamic Model Selection [6.5268245109828005]
A reinforcement learning algorithm is used to approximate and estimate from the data the optimal solution to a dynamic programming problem.
A typical example is the one of switching between different portfolio models under rebalancing costs.
Using a set of macroeconomic variables and price data, an empirical application shows superior performance to choosing the best portfolio model with hindsight.
arXiv Detail & Related papers (2023-07-07T09:23:14Z) - On Optimal Caching and Model Multiplexing for Large Model Inference [66.50550915522551]
Large Language Models (LLMs) and other large foundation models have achieved noteworthy success, but their size exacerbates existing resource consumption and latency challenges.
We study two approaches for mitigating these challenges: employing a cache to store previous queries and learning a model multiplexer to choose from an ensemble of models for query processing.
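As a hypothetical sketch (not that paper's algorithm), the two ideas above can be combined as an LRU cache in front of a difficulty-gated choice between a small and a large model; the `difficulty` heuristic, the threshold, and the class name below are made-up placeholders:

```python
from collections import OrderedDict

class CachedMultiplexer:
    """LRU query cache plus a cost-aware small/large model multiplexer."""

    def __init__(self, small_model, large_model, difficulty,
                 capacity=128, threshold=0.5):
        self.small, self.large = small_model, large_model
        self.difficulty = difficulty        # heuristic score in [0, 1]
        self.cache = OrderedDict()          # insertion order = recency order
        self.capacity, self.threshold = capacity, threshold

    def answer(self, query):
        if query in self.cache:             # cache hit: reuse prior answer
            self.cache.move_to_end(query)
            return self.cache[query]
        # Route hard queries to the large model, easy ones to the small model.
        model = self.large if self.difficulty(query) > self.threshold else self.small
        result = model(query)
        self.cache[query] = result
        if len(self.cache) > self.capacity: # evict least-recently-used entry
            self.cache.popitem(last=False)
        return result
```

The cache saves repeated inference cost entirely, while the multiplexer trades answer quality against cost on cache misses.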
arXiv Detail & Related papers (2023-06-03T05:01:51Z) - Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
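That paper's merging scheme is more involved than plain averaging, but the core idea of combining models in parameter space can be illustrated by its simplest baseline, a uniform element-wise average of matching parameters; the dict-of-lists representation below is a stand-in for real weight tensors:

```python
def merge_in_parameter_space(param_sets):
    """Uniformly average parameter vectors that share the same keys.

    param_sets: list of dicts mapping parameter names to lists of floats,
    one dict per fine-tuned model with a common architecture.
    Returns a single merged dict of the same shape.
    """
    keys = param_sets[0].keys()
    assert all(ps.keys() == keys for ps in param_sets), \
        "models must share an architecture"
    n = len(param_sets)
    return {
        # Element-wise mean across models for each named parameter.
        k: [sum(vals) / n for vals in zip(*(ps[k] for ps in param_sets))]
        for k in keys
    }
```

No training data is touched, which is the "dataless" part; more refined schemes weight each parameter by its estimated importance rather than uniformly.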
arXiv Detail & Related papers (2022-12-19T20:46:43Z) - Conservative Objective Models for Effective Offline Model-Based
Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
Conservative objective models (COMs) are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z) - PreSizE: Predicting Size in E-Commerce using Transformers [76.33790223551074]
PreSizE is a novel deep learning framework which utilizes Transformers for accurate size prediction.
We demonstrate that PreSizE is capable of achieving superior prediction performance compared to previous state-of-the-art baselines.
As a proof of concept, we demonstrate that size predictions made by PreSizE can be effectively integrated into an existing production recommender system.
arXiv Detail & Related papers (2021-05-04T15:23:59Z) - Personalizing Performance Regression Models to Black-Box Optimization
Problems [0.755972004983746]
In this work, we propose a personalized regression approach for numerical optimization problems.
We also investigate the impact of selecting not a single regression model per problem, but personalized ensembles.
We test our approach on predicting the performance of numerical optimizations on the BBOB benchmark collection.
arXiv Detail & Related papers (2021-04-22T11:47:47Z) - Consumer Behaviour in Retail: Next Logical Purchase using Deep Neural
Network [0.0]
Accurate prediction of consumer purchase pattern enables better inventory planning and efficient personalized marketing strategies.
Neural network architectures like Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), Temporal Convolutional Networks (TCN) and TCN-LSTM show improvements over ML models like XGBoost and RandomForest.
arXiv Detail & Related papers (2020-10-14T11:00:00Z)