Multi-Purchase Behavior: Modeling, Estimation and Optimization
- URL: http://arxiv.org/abs/2006.08055v2
- Date: Sat, 5 Aug 2023 18:46:16 GMT
- Title: Multi-Purchase Behavior: Modeling, Estimation and Optimization
- Authors: Theja Tulabandhula, Deeksha Sinha, Saketh Reddy Karra, Prasoon Patidar
- Abstract summary: We present a parsimonious multi-purchase family of choice models called the Bundle-MVL-K family.
We develop a binary search based iterative strategy that efficiently computes optimized recommendations for this model.
- Score: 0.9337154228221861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of modeling purchase of multiple products and utilizing
it to display optimized recommendations for online retailers and e-commerce
platforms.
We present a parsimonious multi-purchase family of choice models called the
Bundle-MVL-K family, and develop a binary search based iterative strategy that
efficiently computes optimized recommendations for this model. We establish the
hardness of computing optimal recommendation sets, and derive several
structural properties of the optimal solution that aid in speeding up
computation. This is one of the first attempts at operationalizing the
multi-purchase class of choice models. We show one of the first quantitative
links between modeling multiple purchase behavior and revenue gains. The
efficacy of our modeling and optimization techniques compared to competing
solutions is shown using several real world datasets on multiple metrics such
as model fitness, expected revenue gains and run-time reductions. For example,
the expected revenue benefit of taking multiple purchases into account is
observed to be $\sim5\%$ in relative terms for the Ta Feng and UCI shopping
datasets, when compared to the MNL model for instances with $\sim 1500$
products. Additionally, across $6$ real world datasets, the test log-likelihood
fits of our models are on average $17\%$ better in relative terms. Our work
contributes to the study of multi-purchase decisions, analyzing consumer demand
and the retailer's optimization problem. The simplicity of our models and the
iterative nature of our optimization technique allows practitioners to meet
stringent computational constraints while increasing their revenues in
practical recommendation applications at scale, especially in e-commerce
platforms and other marketplaces.
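The "binary search based iterative strategy" mentioned in the abstract is in the spirit of classical assortment optimization under the multinomial logit (MNL) model, where a candidate revenue level is bisected and tested for achievability. The sketch below shows only that standard single-purchase MNL version; the Bundle-MVL-K specifics (bundle purchase probabilities and the exact feasibility check) are not given in the abstract, and all prices, weights, and function names here are illustrative assumptions.

```python
# Hypothetical sketch: binary search for the optimal offer set under a
# standard MNL choice model. This is NOT the paper's Bundle-MVL-K routine,
# only an illustration of the general binary-search idea it builds on.

def mnl_expected_revenue(offer_set, prices, weights, v0=1.0):
    """Expected revenue of offer_set under MNL with no-purchase weight v0."""
    denom = v0 + sum(weights[i] for i in offer_set)
    return sum(prices[i] * weights[i] for i in offer_set) / denom

def optimal_assortment_binary_search(prices, weights, v0=1.0, tol=1e-8):
    """Bisect on the revenue level z: revenue z is achievable iff
    sum_i weights[i] * max(prices[i] - z, 0) >= z * v0, and the maximizing
    offer set for a given z contains exactly the items priced above z."""
    lo, hi = 0.0, max(prices)
    while hi - lo > tol:
        z = (lo + hi) / 2.0
        surplus = sum(w * (p - z) for p, w in zip(prices, weights) if p > z)
        if surplus >= z * v0:
            lo = z  # revenue z is achievable; search higher
        else:
            hi = z  # revenue z is not achievable; search lower
    best_set = [i for i, p in enumerate(prices) if p >= lo]
    return best_set, mnl_expected_revenue(best_set, prices, weights, v0)

if __name__ == "__main__":
    prices = [10.0, 8.0, 5.0, 3.0]   # per-item revenues r_i (illustrative)
    weights = [0.2, 0.5, 1.0, 1.5]   # MNL attraction weights v_i (illustrative)
    S, rev = optimal_assortment_binary_search(prices, weights)
    print("offer set:", S, "expected revenue:", round(rev, 4))
```

On the toy numbers above the search converges to the revenue-ordered set {0, 1, 2} with expected revenue of roughly 4.07; the multi-purchase setting studied in the paper replaces the inner feasibility check with one tailored to bundle purchases.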
Related papers
- $f$-PO: Generalizing Preference Optimization with $f$-divergence Minimization [91.43730624072226]
$f$-PO is a novel framework that generalizes and extends existing approaches.
We conduct experiments on state-of-the-art language models using benchmark datasets.
arXiv Detail & Related papers (2024-10-29T02:11:45Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z) - MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z) - Modeling Choice via Self-Attention [8.394221523847325]
We show that our attention-based choice model is a low-rank generalization of the Halo Multinomial Logit (Halo-MNL) model.
We also establish the first realistic-scale benchmark for choice estimation on real data, conducting an evaluation of existing models.
arXiv Detail & Related papers (2023-11-11T11:13:07Z) - UniMatch: A Unified User-Item Matching Framework for the Multi-purpose
Merchant Marketing [27.459774494479227]
We present a unified user-item matching framework to simultaneously conduct item recommendation and user targeting with just one model.
Our framework results in significant performance gains in comparison with the state-of-the-art methods, with greatly reduced cost on computing resources and daily maintenance.
arXiv Detail & Related papers (2023-07-19T13:49:35Z) - Action-State Dependent Dynamic Model Selection [6.5268245109828005]
A Reinforcement learning algorithm is used to approximate and estimate from the data the optimal solution to a dynamic programming problem.
A typical example is the one of switching between different portfolio models under rebalancing costs.
Using a set of macroeconomic variables and price data, an empirical application shows superior performance to choosing the best portfolio model with hindsight.
arXiv Detail & Related papers (2023-07-07T09:23:14Z) - On Optimal Caching and Model Multiplexing for Large Model Inference [66.50550915522551]
Large Language Models (LLMs) and other large foundation models have achieved noteworthy success, but their size exacerbates existing resource consumption and latency challenges.
We study two approaches for mitigating these challenges: employing a cache to store previous queries and learning a model multiplexer to choose from an ensemble of models for query processing.
arXiv Detail & Related papers (2023-06-03T05:01:51Z) - PreSizE: Predicting Size in E-Commerce using Transformers [76.33790223551074]
PreSizE is a novel deep learning framework which utilizes Transformers for accurate size prediction.
We demonstrate that PreSizE is capable of achieving superior prediction performance compared to previous state-of-the-art baselines.
As a proof of concept, we demonstrate that size predictions made by PreSizE can be effectively integrated into an existing production recommender system.
arXiv Detail & Related papers (2021-05-04T15:23:59Z) - Personalizing Performance Regression Models to Black-Box Optimization
Problems [0.755972004983746]
In this work, we propose a personalized regression approach for numerical optimization problems.
We also investigate the impact of selecting not a single regression model per problem, but personalized ensembles.
We test our approach on predicting the performance of numerical optimizations on the BBOB benchmark collection.
arXiv Detail & Related papers (2021-04-22T11:47:47Z) - Consumer Behaviour in Retail: Next Logical Purchase using Deep Neural
Network [0.0]
Accurate prediction of consumer purchase pattern enables better inventory planning and efficient personalized marketing strategies.
Neural network architectures like Multi Layer Perceptron, Long Short Term Memory (LSTM), Temporal Convolutional Networks (TCN) and TCN-LSTM show improvements over ML models like XGBoost and RandomForest.
arXiv Detail & Related papers (2020-10-14T11:00:00Z)