Continual Learning for CTR Prediction: A Hybrid Approach
- URL: http://arxiv.org/abs/2201.06886v1
- Date: Tue, 18 Jan 2022 11:30:57 GMT
- Title: Continual Learning for CTR Prediction: A Hybrid Approach
- Authors: Ke Hu, Yi Qi, Jianqiang Huang, Jia Cheng, Jun Lei
- Abstract summary: We propose COLF, a hybrid COntinual Learning Framework for CTR prediction.
COLF has a memory-based modular architecture that is designed to adapt, learn and give predictions continuously.
Empirical evaluations on click logs collected from a major shopping app in China demonstrate our method's superiority over existing methods.
- Score: 37.668467137218286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Click-through rate (CTR) prediction is a core task in cost-per-click (CPC)
advertising systems and has been studied extensively by machine learning
practitioners. While many existing methods have been successfully deployed in
practice, most of them are built upon the i.i.d. (independent and identically
distributed) assumption, ignoring that the click data used for training and
inference is collected through time and is intrinsically non-stationary and
drifting. This mismatch will inevitably lead to sub-optimal performance. To
address this problem, we formulate CTR prediction as a continual learning task
and propose COLF, a hybrid COntinual Learning Framework for CTR prediction,
which has a memory-based modular architecture that is designed to adapt, learn
and give predictions continuously when faced with non-stationary drifting click
data streams. Paired with a memory population method that explicitly controls
the discrepancy between the memory and the target data, COLF gains positive
knowledge from its historical experience and makes improved CTR predictions.
Empirical evaluations on click logs collected from a major shopping app in China
demonstrate our method's superiority over existing methods. Additionally, we
have deployed our method online and observed significant CTR and revenue
improvement, which further demonstrates our method's efficacy.
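The abstract does not spell out COLF's architecture, but the core idea it describes (a replay memory whose population is explicitly controlled to match the target data, mixed into training on a non-stationary click stream) can be sketched roughly as below. This is a hypothetical illustration under stated assumptions, not the paper's algorithm: the logistic model, the nearest-to-mean memory population rule, and the names `ReplayMemory` and `train_continual` are all inventions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ReplayMemory:
    """Fixed-size memory. As a crude stand-in for a discrepancy-controlled
    population rule, it keeps the examples whose feature vectors lie closest
    to the mean of the most recent (target) batch."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.X, self.y = [], []

    def populate(self, X, y, target_mean):
        for xi, yi in zip(X, y):
            self.X.append(xi)
            self.y.append(yi)
        if len(self.X) > self.capacity:
            # prune: retain samples nearest the target distribution's mean
            d = [np.linalg.norm(xi - target_mean) for xi in self.X]
            keep = np.argsort(d)[: self.capacity]
            self.X = [self.X[i] for i in keep]
            self.y = [self.y[i] for i in keep]

    def sample(self, n):
        idx = rng.choice(len(self.X), size=min(n, len(self.X)), replace=False)
        return np.array([self.X[i] for i in idx]), np.array([self.y[i] for i in idx])

def train_continual(stream, dim, capacity=64, lr=0.5, replay_n=32):
    """Continually update a logistic CTR model over a drifting stream,
    mixing replayed past samples into each incoming batch."""
    w = np.zeros(dim)
    mem = ReplayMemory(capacity)
    for X, y in stream:  # each element is e.g. one day's click data
        target_mean = X.mean(axis=0)
        if mem.X:  # rehearse past data alongside the current batch
            Xr, yr = mem.sample(replay_n)
            Xb, yb = np.vstack([X, Xr]), np.concatenate([y, yr])
        else:
            Xb, yb = X, y
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - yb) / len(yb)  # one logistic-loss SGD step
        mem.populate(X, y, target_mean)
    return w
```

The design point worth noting is the population rule: plain reservoir sampling would preserve the historical distribution, whereas selecting memory samples close to the current target distribution biases rehearsal toward what is currently useful, which is the discrepancy-control intuition the abstract gestures at.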
Related papers
- Enhancing CTR Prediction through Sequential Recommendation Pre-training: Introducing the SRP4CTR Framework [13.574487867743773]
We propose a Sequential Recommendation Pre-training framework for Click-Through Rate (CTR) prediction (SRP4CTR)
We discuss the impact of introducing pre-trained models on inference costs. Subsequently, we introduce a pre-trained method to encode sequence side information concurrently.
We develop a querying transformer technique to facilitate the knowledge transfer from the pre-trained model to industrial CTR models.
arXiv Detail & Related papers (2024-07-29T02:49:11Z) - Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC)
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - MAP: A Model-agnostic Pretraining Framework for Click-through Rate Prediction [39.48740397029264]
We propose a Model-agnostic pretraining (MAP) framework that applies feature corruption and recovery on multi-field categorical data.
We derive two practical algorithms: masked feature prediction (MFP) and replaced feature detection (RFD).
arXiv Detail & Related papers (2023-08-03T12:55:55Z) - Unified Off-Policy Learning to Rank: a Reinforcement Learning Perspective [61.4025671743675]
Off-policy learning to rank methods often make strong assumptions about how users generate the click data.
We show that offline reinforcement learning can adapt to various click models without complex debiasing techniques and prior knowledge of the model.
Results on various large-scale datasets demonstrate that CUOLR consistently outperforms the state-of-the-art off-policy learning to rank algorithms.
arXiv Detail & Related papers (2023-06-13T03:46:22Z) - DELTA: Dynamic Embedding Learning with Truncated Conscious Attention for CTR Prediction [61.68415731896613]
Click-Through Rate (CTR) prediction is a pivotal task in product and content recommendation.
We propose a model that enables Dynamic Embedding Learning with Truncated Conscious Attention for CTR prediction.
arXiv Detail & Related papers (2023-05-03T12:34:45Z) - Always Strengthen Your Strengths: A Drift-Aware Incremental Learning Framework for CTR Prediction [4.909628097144909]
Click-through rate (CTR) prediction is of great importance in recommendation systems and online advertising platforms.
Streaming data has the characteristic that the underlying distribution drifts over time and may recur.
We design a novel drift-aware incremental learning framework based on ensemble learning to address catastrophic forgetting in CTR prediction.
arXiv Detail & Related papers (2023-04-17T05:45:18Z) - Real-Time Evaluation in Online Continual Learning: A New Hope [104.53052316526546]
We evaluate current Continual Learning (CL) methods with respect to their computational costs.
A simple baseline outperforms state-of-the-art CL methods under this evaluation.
This surprisingly suggests that the majority of existing CL literature is tailored to a specific class of streams that is not practical.
arXiv Detail & Related papers (2023-02-02T12:21:10Z) - A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
arXiv Detail & Related papers (2022-10-10T08:27:28Z) - Rethinking Position Bias Modeling with Knowledge Distillation for CTR Prediction [8.414183573280779]
This work proposes a knowledge distillation framework to alleviate the impact of position bias and leverage position information to improve CTR prediction.
The proposed method has been deployed in real-world online ads systems, serving main traffic on one of the world's largest e-commerce platforms.
arXiv Detail & Related papers (2022-04-01T07:58:38Z) - Efficient Click-Through Rate Prediction for Developing Countries via Tabular Learning [2.916402752324148]
Click-Through Rate (CTR) prediction models are difficult to deploy due to limited computing resources.
In this paper, we show that tabular learning models are more efficient and effective in CTR prediction.
arXiv Detail & Related papers (2021-04-15T16:07:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.