Unleash the Power of Context: Enhancing Large-Scale Recommender Systems
with Context-Based Prediction Models
- URL: http://arxiv.org/abs/2308.01231v1
- Date: Tue, 25 Jul 2023 07:57:12 GMT
- Title: Unleash the Power of Context: Enhancing Large-Scale Recommender Systems
with Context-Based Prediction Models
- Authors: Jan Hartman, Assaf Klein, Davorin Kopič, Natalia Silberstein
- Abstract summary: A Context-Based Prediction Model determines the probability of a user's action solely by relying on user and contextual features.
We have identified numerous valuable applications for this modeling approach, including training an auxiliary context-based model to estimate click probability.
- Score: 2.3267858167388775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we introduce the notion of Context-Based Prediction Models. A
Context-Based Prediction Model determines the probability of a user's action
(such as a click or a conversion) solely by relying on user and contextual
features, without considering any specific features of the item itself. We have
identified numerous valuable applications for this modeling approach, including
training an auxiliary context-based model to estimate click probability and
incorporating its prediction as a feature in CTR prediction models. Our
experiments indicate that this enhancement brings significant improvements in
offline and online business metrics while having minimal impact on the cost of
serving. Overall, our work offers a simple and scalable, yet powerful approach
for enhancing the performance of large-scale commercial recommender systems,
with broad implications for the field of personalized recommendations.
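The core idea in the abstract can be sketched in a few lines: train an auxiliary model on user/context features only, then feed its predicted click probability to the main CTR model as one extra input feature. The sketch below uses synthetic data and scikit-learn logistic regression purely for illustration; the feature split, model choice, and dimensions are assumptions, not the paper's actual architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Synthetic data: user/contextual features (e.g. time of day, device)
# and item-specific features. All names and sizes are illustrative.
X_context = rng.normal(size=(n, 4))
X_item = rng.normal(size=(n, 6))
logits = X_context @ rng.normal(size=4) + X_item @ rng.normal(size=6)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# 1) Auxiliary context-based model: estimates click probability from
#    user and contextual features alone, ignoring the item entirely.
context_model = LogisticRegression(max_iter=1000).fit(X_context, y)
p_context = context_model.predict_proba(X_context)[:, 1]

# 2) Main CTR model: context + item features, plus the auxiliary
#    model's prediction appended as an additional feature.
X_main = np.hstack([X_context, X_item, p_context.reshape(-1, 1)])
ctr_model = LogisticRegression(max_iter=1000).fit(X_main, y)
p_ctr = ctr_model.predict_proba(X_main)[:, 1]
```

In a production system the auxiliary model would be trained and served separately, which is why the paper reports minimal impact on serving cost: the context-based prediction adds only one scalar feature to the main model's input.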
Related papers
- Optimizing Sequential Recommendation Models with Scaling Laws and Approximate Entropy [104.48511402784763]
The Performance Law for sequential recommendation (SR) models aims to theoretically investigate and model the relationship between model performance and data quality.
We propose Approximate Entropy (ApEn) to assess data quality, presenting a more nuanced approach compared to traditional data quantity metrics.
arXiv Detail & Related papers (2024-11-30T10:56:30Z)
- Customer Lifetime Value Prediction with Uncertainty Estimation Using Monte Carlo Dropout [3.187236205541292]
We propose a novel approach that enhances the architecture of purely neural network models by incorporating the Monte Carlo Dropout (MCD) framework.
We benchmarked the proposed method using data from one of the most downloaded mobile games in the world.
Our approach provides a confidence metric as an extra dimension for performance evaluation across various neural network models.
arXiv Detail & Related papers (2024-11-24T18:14:44Z) - A Collaborative Ensemble Framework for CTR Prediction [73.59868761656317]
We propose a novel framework, Collaborative Ensemble Training Network (CETNet), to leverage multiple distinct models.
Unlike naive model scaling, our approach emphasizes diversity and collaboration through collaborative learning.
We validate our framework on three public datasets and a large-scale industrial dataset from Meta.
arXiv Detail & Related papers (2024-11-20T20:38:56Z) - A Utility-Mining-Driven Active Learning Approach for Analyzing Clickstream Sequences [21.38368444137596]
This study introduces the High-Utility Sequential Pattern Mining using SHAP values (HUSPM-SHAP) model.
Our findings demonstrate the model's capability to refine e-commerce data processing, steering towards more streamlined, cost-effective prediction modeling.
arXiv Detail & Related papers (2024-10-09T10:44:02Z) - Prediction-Oriented Bayesian Active Learning [51.426960808684655]
Expected predictive information gain (EPIG) is an acquisition function that measures information gain in the space of predictions rather than parameters.
EPIG leads to stronger predictive performance compared with BALD across a range of datasets and models.
arXiv Detail & Related papers (2023-04-17T10:59:57Z)
- Prototypical Fine-tuning: Towards Robust Performance Under Varying Data Sizes [47.880781811936345]
We propose a novel framework for fine-tuning pretrained language models (LMs).
Our prototypical fine-tuning approach can automatically adjust the model capacity according to the number of data points and the model's inherent attributes.
arXiv Detail & Related papers (2022-11-24T14:38:08Z)
- Off-policy evaluation for learning-to-rank via interpolating the item-position model and the position-based model [83.83064559894989]
A critical need for industrial recommender systems is the ability to evaluate recommendation policies offline, before deploying them to production.
We develop a new estimator that mitigates the problems of the two most popular off-policy estimators for rankings.
In particular, the new estimator, called INTERPOL, addresses the bias of a potentially misspecified position-based model.
arXiv Detail & Related papers (2022-10-15T17:22:30Z)
- Preference Enhanced Social Influence Modeling for Network-Aware Cascade Prediction [59.221668173521884]
We propose a novel framework to promote cascade size prediction by enhancing the user preference modeling.
Our end-to-end method makes modeling of the user activation process in information diffusion more adaptive and accurate.
arXiv Detail & Related papers (2022-04-18T09:25:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.