Efficient Click-Through Rate Prediction for Developing Countries via
Tabular Learning
- URL: http://arxiv.org/abs/2104.07553v1
- Date: Thu, 15 Apr 2021 16:07:25 GMT
- Title: Efficient Click-Through Rate Prediction for Developing Countries via
Tabular Learning
- Authors: Joonyoung Yi, Buru Chang
- Abstract summary: Existing Click-Through Rate (CTR) prediction models are difficult to deploy due to limited computing resources.
In this paper, we show that tabular learning models are more efficient and effective for CTR prediction.
- Score: 2.916402752324148
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the rapid growth of online advertisement in developing countries,
existing highly over-parameterized Click-Through Rate (CTR) prediction models
are difficult to deploy due to limited computing resources. In this
paper, by bridging the relationship between the CTR prediction task and tabular
learning, we show that tabular learning models are more efficient and
effective for CTR prediction than over-parameterized CTR prediction models.
Extensive experiments on eight public CTR prediction datasets show that tabular
learning models outperform twelve state-of-the-art CTR prediction models.
Furthermore, compared to over-parameterized CTR prediction models, tabular
learning models can be trained quickly without expensive computing resources
such as high-performance GPUs. Finally, through an A/B test on a real
online application, we show that tabular learning models improve not only
offline performance but also the CTR of real users.
Related papers
- Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study [61.64685376882383]
Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models.
This paper investigates the robustness of existing CLTR models in complex and diverse situations.
We find that the DLA models and IPS-DCM show better robustness under various simulation settings than IPS-PBM and PRS with offline propensity estimation.
arXiv Detail & Related papers (2024-04-04T10:54:38Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- DELTA: Dynamic Embedding Learning with Truncated Conscious Attention for CTR Prediction [61.68415731896613]
Click-Through Rate (CTR) prediction is a pivotal task in product and content recommendation.
We propose a model that enables Dynamic Embedding Learning with Truncated Conscious Attention for CTR prediction.
arXiv Detail & Related papers (2023-05-03T12:34:45Z)
- Directed Acyclic Graph Factorization Machines for CTR Prediction via Knowledge Distillation [65.62538699160085]
We propose a Directed Acyclic Graph Factorization Machine (KD-DAGFM) that learns high-order feature interactions from existing complex interaction models for CTR prediction via Knowledge Distillation.
KD-DAGFM achieves the best performance with less than 21.5% of the FLOPs of the state-of-the-art method in both online and offline experiments.
arXiv Detail & Related papers (2022-11-21T03:09:42Z)
- Continual Learning for CTR Prediction: A Hybrid Approach [37.668467137218286]
We propose COLF, a hybrid COntinual Learning Framework for CTR prediction.
COLF has a memory-based modular architecture that is designed to adapt, learn and give predictions continuously.
Empirical evaluations on click logs collected from a major shopping app in China demonstrate our method's superiority over existing methods.
arXiv Detail & Related papers (2022-01-18T11:30:57Z)
- Looking at CTR Prediction Again: Is Attention All You Need? [4.873362301533825]
Click-through rate (CTR) prediction is a critical problem in web search, recommendation systems and online advertisement display.
We use the discrete choice model from economics to redefine the CTR prediction problem, and propose a general neural network framework built on the self-attention mechanism.
It is found that most existing CTR prediction models align with our proposed general framework.
arXiv Detail & Related papers (2021-05-12T10:27:14Z)
- Ensemble Knowledge Distillation for CTR Prediction [46.92149090885551]
We propose a new model training strategy based on knowledge distillation (KD).
KD is a teacher-student learning framework that transfers knowledge learned by a teacher model to a student model.
We propose several novel techniques to facilitate ensembled CTR prediction, including teacher gating and early stopping by distillation loss; a generic sketch of such a distillation loss appears after this list.
arXiv Detail & Related papers (2020-11-08T23:37:58Z)
- BARS-CTR: Open Benchmarking for Click-Through Rate Prediction [30.000261789268063]
Click-through rate (CTR) prediction is a critical task for many applications.
In recent years, CTR prediction has been widely studied in both academia and industry.
However, there is still a lack of standardized benchmarks and uniform evaluation protocols for CTR prediction research.
arXiv Detail & Related papers (2020-09-12T13:34:22Z)
- Iterative Boosting Deep Neural Networks for Predicting Click-Through Rate [15.90144113403866]
The click-through rate (CTR) reflects the ratio of clicks on a specific item to its total number of views.
XdBoost is an iterative three-stage neural network model influenced by the traditional machine learning boosting mechanism.
arXiv Detail & Related papers (2020-07-26T09:41:16Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt a post-hoc method to tackle the interpretability issue of deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret RNN-based DLKT models.
Experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
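As a companion to the knowledge-distillation entries above (Ensemble Knowledge Distillation and KD-DAGFM), the following is a generic, hedged sketch of a distillation loss for a binary CTR student. The function name, the 0.5 weighting, and the random tensors are illustrative assumptions, not code from either paper.

```python
# Hedged sketch (not from the cited papers): a generic distillation loss for a
# binary CTR student model. The student is trained against both the true click
# labels and the soft click probabilities produced by a (possibly ensembled) teacher.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs, labels, alpha=0.5):
    """Weighted sum of a hard-label loss and a soft-label (teacher) loss."""
    hard = F.binary_cross_entropy_with_logits(student_logits, labels.float())
    soft = F.binary_cross_entropy_with_logits(student_logits, teacher_probs)
    return alpha * hard + (1.0 - alpha) * soft

# Toy usage with random tensors standing in for real model outputs.
student_logits = torch.randn(8, requires_grad=True)
teacher_probs = torch.sigmoid(torch.randn(8))   # e.g. an averaged ensemble output
labels = torch.randint(0, 2, (8,))
loss = distillation_loss(student_logits, teacher_probs, labels)
loss.backward()  # gradients flow only into the student's logits
```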
This list is automatically generated from the titles and abstracts of the papers on this site.