Model-Free Approximate Bayesian Learning for Large-Scale Conversion
Funnel Optimization
- URL: http://arxiv.org/abs/2401.06710v1
- Date: Fri, 12 Jan 2024 17:19:44 GMT
- Title: Model-Free Approximate Bayesian Learning for Large-Scale Conversion
Funnel Optimization
- Authors: Garud Iyengar and Raghav Singal
- Abstract summary: We study the problem of identifying the optimal sequential personalized interventions that maximize the adoption probability for a new product.
We model consumer behavior by a conversion funnel that captures the state of each consumer.
We propose a novel attribution-based decision-making algorithm for this problem that we call model-free approximate Bayesian learning.
- Score: 10.560764660131891
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The flexibility of choosing the ad action as a function of the consumer state
is critical for modern-day marketing campaigns. We study the problem of
identifying the optimal sequential personalized interventions that maximize the
adoption probability for a new product. We model consumer behavior by a
conversion funnel that captures the state of each consumer (e.g., interaction
history with the firm) and allows the consumer behavior to vary as a function
of both her state and the firm's sequential interventions. We show our model
captures consumer behavior with very high accuracy (out-of-sample AUC of over
0.95) in a real-world email marketing dataset. However, it results in a very
large-scale learning problem, where the firm must learn the state-specific
effects of various interventions from consumer interactions. We propose a novel
attribution-based decision-making algorithm for this problem that we call
model-free approximate Bayesian learning. Our algorithm inherits the
interpretability and scalability of Thompson sampling for bandits and maintains
an approximate belief over the value of each state-specific intervention. The
belief is updated as the algorithm interacts with the consumers. Despite being
an approximation to the Bayes update, we prove the asymptotic optimality of our
algorithm and analyze its convergence rate. We show that our algorithm
significantly outperforms traditional approaches on extensive simulations
calibrated to a real-world email marketing dataset.
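The paper's attribution-based update is its own contribution, but the belief-maintenance idea it inherits from Thompson sampling can be sketched with a minimal bandit-style example. Everything below is illustrative: the number of states and interventions, the Beta beliefs per (state, action) pair, and the simulated conversion rates are all hypothetical, not the paper's model.

```python
import random

# Hypothetical setup: 3 funnel states, 2 interventions (e.g. email variants).
# For each (state, action) pair we keep a Beta(successes+1, failures+1) belief
# over that intervention's value, in the spirit of Thompson sampling.
STATES, ACTIONS = 3, 2
alpha = [[1.0] * ACTIONS for _ in range(STATES)]  # pseudo-successes
beta = [[1.0] * ACTIONS for _ in range(STATES)]   # pseudo-failures

def choose(state):
    """Sample one value per action from the current belief; play the argmax."""
    draws = [random.betavariate(alpha[state][a], beta[state][a])
             for a in range(ACTIONS)]
    return max(range(ACTIONS), key=lambda a: draws[a])

def update(state, action, converted):
    """Approximate Bayes update of the belief after observing one consumer."""
    if converted:
        alpha[state][action] += 1.0
    else:
        beta[state][action] += 1.0

# Simulated environment: unknown conversion rate per (state, action) pair.
true_p = [[0.05, 0.15], [0.10, 0.30], [0.20, 0.25]]
random.seed(0)
for _ in range(5000):
    s = random.randrange(STATES)
    a = choose(s)
    update(s, a, random.random() < true_p[s][a])

# The belief should concentrate on the better intervention in each state.
best = [max(range(ACTIONS), key=lambda a: alpha[s][a] / (alpha[s][a] + beta[s][a]))
        for s in range(STATES)]
```

Sampling from the belief (rather than acting greedily on its mean) is what keeps exploration alive while the state-specific effects are still uncertain.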
Related papers
- Modeling the Telemarketing Process using Genetic Algorithms and Extreme
Boosting: Feature Selection and Cost-Sensitive Analytical Approach [0.06906005491572399]
This research aims at leveraging the power of telemarketing data in modeling the willingness of clients to make a term deposit.
Real-world data from a Portuguese bank and national socio-economic metrics are used to model the telemarketing decision-making process.
arXiv Detail & Related papers (2023-10-30T08:46:55Z)
- Choice Models and Permutation Invariance: Demand Estimation in Differentiated Products Markets [5.8429701619765755]
We demonstrate how non-parametric estimators like neural nets can easily approximate choice functions.
Our proposed functionals can flexibly capture underlying consumer behavior in a completely data-driven fashion.
Our empirical analysis confirms that the estimator generates realistic and comparable own- and cross-price elasticities.
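As a reference point for the elasticity claim above, the parametric benchmark such estimators are compared against, the multinomial logit, has closed-form own- and cross-price elasticities. The sketch below is that textbook benchmark, not the paper's neural estimator; the intercepts, price coefficient, and prices are illustrative.

```python
import math

# Multinomial logit: utility u_j = a_j - b * p_j,
# choice probability s_j = exp(u_j) / sum_k exp(u_k).
# Own-price elasticity:    d log s_j / d log p_j = -b * p_j * (1 - s_j)
# Cross-price elasticity:  d log s_j / d log p_k =  b * p_k * s_k  (k != j)
a = [1.0, 0.5, 0.0]   # product intercepts (illustrative)
b = 2.0               # price sensitivity (illustrative)
p = [1.0, 0.8, 0.6]   # prices

u = [a[j] - b * p[j] for j in range(3)]
z = sum(math.exp(uj) for uj in u)
s = [math.exp(uj) / z for uj in u]

own = [-b * p[j] * (1 - s[j]) for j in range(3)]
cross = [[b * p[k] * s[k] for k in range(3)] for j in range(3)]
```

Note the logit's well-known restriction: the cross elasticity with respect to product k is the same for every other product (IIA), which is exactly the rigidity flexible non-parametric estimators aim to relax.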
arXiv Detail & Related papers (2023-07-13T23:24:05Z)
- Improved Bayes Risk Can Yield Reduced Social Welfare Under Competition [99.7047087527422]
In this work, we demonstrate that competition can fundamentally alter the behavior of machine learning scaling trends.
We find many settings where improving data representation quality decreases the overall predictive accuracy across users.
At a conceptual level, our work suggests that favorable scaling trends for individual model-providers need not translate to downstream improvements in social welfare.
arXiv Detail & Related papers (2023-06-26T13:06:34Z)
- Unified Embedding Based Personalized Retrieval in Etsy Search [0.206242362470764]
We propose learning a unified embedding model incorporating graph, transformer and term-based embeddings end to end.
Our personalized retrieval model significantly improves the overall search experience, as measured by a 5.58% increase in search purchase rate and a 2.63% increase in site-wide conversion rate.
arXiv Detail & Related papers (2023-06-07T23:24:50Z)
- Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the PR-divergences.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z)
- Federated Variational Inference: Towards Improved Personalization and Generalization [2.37589914835055]
We study personalization and generalization in stateless cross-device federated learning setups.
We first propose a hierarchical generative model and formalize it using Bayesian Inference.
We then approximate this process using Variational Inference to train our model efficiently.
We evaluate our model on FEMNIST and CIFAR-100 image classification and show that FedVI beats the state-of-the-art on both tasks.
arXiv Detail & Related papers (2023-05-23T04:28:07Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
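The mechanics behind fitting a Gaussian mixture and then using its likelihood for novel-sample detection can be sketched on toy data. This is not FedGMM itself, just a plain single-machine EM fit in one dimension with two components; the data, initialization, and threshold logic are all illustrative.

```python
import math
import random

# Toy data: two well-separated Gaussian clusters.
random.seed(1)
data = ([random.gauss(0.0, 1.0) for _ in range(200)] +
        [random.gauss(5.0, 1.0) for _ in range(200)])

def pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# EM for a 2-component mixture (illustrative initialization).
mu, sigma, w = [-1.0, 6.0], [1.0, 1.0], [0.5, 0.5]
for _ in range(50):
    # E-step: responsibility of each component for each point.
    r = []
    for x in data:
        num = [w[k] * pdf(x, mu[k], sigma[k]) for k in range(2)]
        tot = sum(num)
        r.append([n / tot for n in num])
    # M-step: re-estimate weights, means, and variances.
    for k in range(2):
        nk = sum(ri[k] for ri in r)
        w[k] = nk / len(data)
        mu[k] = sum(ri[k] * x for ri, x in zip(r, data)) / nk
        var = sum(ri[k] * (x - mu[k]) ** 2 for ri, x in zip(r, data)) / nk
        sigma[k] = math.sqrt(max(var, 1e-6))

def loglik(x):
    """Mixture log-likelihood: low values flag candidate novel samples."""
    return math.log(sum(w[k] * pdf(x, mu[k], sigma[k]) for k in range(2)))
```

A point far from both fitted components receives a much lower `loglik` than in-distribution points, which is the uncertainty-quantification hook a mixture model provides for free.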
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
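The two ingredients being combined above can be sketched with a confidence-based rule. This is an illustrative reduction, not ASPEST: the predicted probabilities, the abstention threshold, and the query budget are all hypothetical.

```python
# Hypothetical model outputs: predicted probability of class 1 for 7 points.
probs = [0.95, 0.52, 0.88, 0.49, 0.71, 0.99, 0.60]

THRESHOLD = 0.7  # abstain when the model is not confident either way

def confidence(p):
    return max(p, 1 - p)  # distance from the 0.5 decision boundary

# Selective prediction: predict only on confident points, abstain otherwise.
decisions = ["abstain" if confidence(p) < THRESHOLD else int(p >= 0.5)
             for p in probs]

# Active learning: spend the labeling budget on the least confident points.
k = 2  # query budget
query_order = sorted(range(len(probs)), key=lambda i: confidence(probs[i]))
to_label = query_order[:k]
```

The paradigm in the paper couples these two loops: the same uncertainty signal that triggers abstention also steers which shifted-domain samples are worth a human label.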
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Meta-Wrapper: Differentiable Wrapping Operator for User Interest Selection in CTR Prediction [97.99938802797377]
Click-through rate (CTR) prediction, whose goal is to predict the probability of the user to click on an item, has become increasingly significant in recommender systems.
Recent deep learning models with the ability to automatically extract the user interest from his/her behaviors have achieved great success.
We propose a novel approach under the framework of the wrapper method, which is named Meta-Wrapper.
arXiv Detail & Related papers (2022-06-28T03:28:15Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- Consumer Behaviour in Retail: Next Logical Purchase using Deep Neural Network [0.0]
Accurate prediction of consumer purchase pattern enables better inventory planning and efficient personalized marketing strategies.
Neural network architectures like Multi-Layer Perceptron, Long Short-Term Memory (LSTM), Temporal Convolutional Networks (TCN) and TCN-LSTM show improvements over ML models like XGBoost and RandomForest.
arXiv Detail & Related papers (2020-10-14T11:00:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.