Confidence Ranking for CTR Prediction
- URL: http://arxiv.org/abs/2307.01206v1
- Date: Wed, 28 Jun 2023 07:31:00 GMT
- Title: Confidence Ranking for CTR Prediction
- Authors: Jian Zhu, Congcong Liu, Pei Wang, Xiwei Zhao, Zhangang Lin, Jingping Shao
- Abstract summary: We propose a novel framework, named Confidence Ranking, which designs the optimization objective as a ranking function.
Our experiments show that models trained with the confidence ranking loss outperform all baselines on CTR prediction tasks over public and industrial datasets.
This framework has been deployed in the advertisement system of JD.com to serve the main traffic in the fine-rank stage.
- Score: 11.071444869776725
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Model evolution and constant availability of data are two common phenomena in
large-scale real-world machine learning applications, e.g. ads and
recommendation systems. To adapt, real-world systems typically retrain on all
available data and learn online from recently arrived data, updating their
models periodically with the goal of better serving performance. In this paper,
we propose a novel framework, named Confidence Ranking, which designs the
optimization objective as a ranking function over the outputs of two different
models. The confidence ranking loss permits direct optimization of the logit
outputs under different convex surrogate functions of metrics, e.g. AUC and
accuracy, depending on the target task and dataset. Our experiments show that
models trained with the confidence ranking loss outperform all baselines on
CTR prediction tasks over public and industrial
datasets. This framework has been deployed in the advertisement system of
JD.com to serve the main traffic in the fine-rank stage.
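Reading the abstract literally, the loss ranks the logits of the model being trained against those of a fixed reference model (for example, the previously deployed one) through a convex surrogate. The following is a minimal PyTorch-style sketch of that reading; the function name, the detached reference, and the surrogate choices are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def confidence_ranking_loss(logits_new, logits_ref, labels, surrogate="logistic"):
    """Hypothetical confidence-ranking loss sketch.

    Encourages the new model's logit to be more confident than a frozen
    reference model's logit in the direction of the true label: larger on
    positives, smaller on negatives. `labels` is a float tensor in {0, 1}.
    """
    # Signed margin: positive when the new model outranks the reference
    # toward the ground-truth label.
    sign = 2.0 * labels - 1.0                      # {0, 1} -> {-1, +1}
    margin = sign * (logits_new - logits_ref.detach())

    if surrogate == "logistic":                    # smooth, AUC-style surrogate
        return F.softplus(-margin).mean()
    elif surrogate == "hinge":                     # accuracy-style surrogate
        return F.relu(1.0 - margin).mean()
    raise ValueError(f"unknown surrogate: {surrogate}")

# Typical use (illustrative weighting): cross-entropy plus the ranking term.
# loss = F.binary_cross_entropy_with_logits(logits_new, labels) \
#        + 0.5 * confidence_ranking_loss(logits_new, logits_ref, labels)
```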
Related papers
- A Unified Knowledge-Distillation and Semi-Supervised Learning Framework to Improve Industrial Ads Delivery Systems [19.0143243243314]
Industrial ads ranking systems conventionally rely on labeled impression data, which leads to challenges such as overfitting, slower incremental gain from model scaling, and biases due to discrepancies between training and serving data.
We propose a Unified framework for Knowledge-Distillation and Semi-supervised Learning (UKD) for ads ranking, enabling models to be trained on significantly larger and more diverse datasets.
arXiv Detail & Related papers (2025-02-05T23:14:07Z)
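The entry above names knowledge distillation plus semi-supervised learning over ads data, but not the exact objective. Purely as a hedged illustration, a teacher CTR model could supply soft targets on unlabeled impressions while labeled impressions keep the usual log loss; all names and hyperparameters below are assumed.

```python
import torch
import torch.nn.functional as F

def ukd_style_loss(student_logits_lab, labels,
                   student_logits_unlab, teacher_logits_unlab,
                   alpha=0.5, temperature=2.0):
    """Sketch of a combined supervised + distillation objective.

    Labeled impressions: standard log loss against clicks. Unlabeled
    impressions: match the frozen teacher's tempered click probabilities.
    `alpha` and `temperature` are illustrative, not from the paper.
    """
    sup = F.binary_cross_entropy_with_logits(student_logits_lab, labels)
    teacher_p = torch.sigmoid(teacher_logits_unlab.detach() / temperature)
    kd = F.binary_cross_entropy_with_logits(
        student_logits_unlab / temperature, teacher_p)
    return sup + alpha * kd
```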
- The Efficiency vs. Accuracy Trade-off: Optimizing RAG-Enhanced LLM Recommender Systems Using Multi-Head Early Exit [46.37267466656765]
This paper presents an optimization framework that combines Retrieval-Augmented Generation (RAG) with an innovative multi-head early exit architecture.
Our experiments demonstrate how this architecture effectively decreases inference time without sacrificing the accuracy needed for reliable recommendation delivery.
arXiv Detail & Related papers (2025-01-04T03:26:46Z)
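The early-exit idea in the entry above can be illustrated generically: attach a lightweight scoring head after each block and stop at the first head that is sufficiently confident. The layout, dimensions, and threshold below are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

class EarlyExitRanker(nn.Module):
    """Illustrative multi-head early-exit scorer (hypothetical layout).

    Each block is followed by a lightweight exit head; inference stops at
    the first head whose confidence clears `threshold` either way.
    Training would instead attach a loss to every head.
    """
    def __init__(self, dim=128, n_blocks=4, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
             for _ in range(n_blocks)])
        self.heads = nn.ModuleList([nn.Linear(dim, 1) for _ in range(n_blocks)])
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            p = torch.sigmoid(head(x))
            # Exit early when the head is confident in either direction.
            if torch.all((p > self.threshold) | (p < 1 - self.threshold)):
                return p
        return p  # deepest head as fallback
```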
- Tackling Data Heterogeneity in Federated Time Series Forecasting [61.021413959988216]
Time series forecasting plays a critical role in various real-world applications, including energy consumption prediction, disease transmission monitoring, and weather forecasting.
Most existing methods rely on a centralized training paradigm, in which large amounts of data are collected from distributed devices and sent to a central cloud server.
We propose a novel framework, Fed-TREND, to address data heterogeneity by generating informative synthetic data as auxiliary knowledge carriers.
arXiv Detail & Related papers (2024-11-24T04:56:45Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
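The OWA aggregation referenced above is a standard construction: apply fixed non-increasing weights to the sorted utilities, which emphasizes the worst-off entries. A minimal differentiable version (sorting is piecewise linear, so autograd handles it) might look like the sketch below; the weights are illustrative, and the paper's constrained-optimization layer is not reproduced here.

```python
import torch

def owa(utilities, weights):
    """Ordered Weighted Average: weights applied to *sorted* utilities.

    With non-increasing weights over ascending utilities, OWA emphasizes
    the worst-off entries, interpolating between min and mean.
    """
    sorted_u, _ = torch.sort(utilities)        # ascending: worst first
    return (weights * sorted_u).sum()

# Example: fairness-leaning weights that decay over the sorted utilities.
u = torch.tensor([0.9, 0.2, 0.5], requires_grad=True)
w = torch.tensor([0.5, 0.3, 0.2])              # sums to 1, non-increasing
loss = -owa(u, w)                              # maximize the fair aggregate
loss.backward()
```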
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- COPR: Consistency-Oriented Pre-Ranking for Online Advertising [27.28920707332434]
We introduce a consistency-oriented pre-ranking framework for online advertising.
It employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize consistency of ECPM-ranked results.
When deployed in the Taobao display advertising system, it achieves improvements of up to +12.3% CTR and +5.6% RPM.
arXiv Detail & Related papers (2023-06-06T09:08:40Z)
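The COPR summary above names a chunk-based sampling module and a rank alignment module without detail. Purely as an illustration of rank alignment, the sketch below penalizes pairs within a sampled chunk whose pre-ranking order disagrees with the downstream ranking model's ECPM order; every name here is an assumption, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def rank_alignment_loss(pre_scores, rank_ecpm):
    """Illustrative pairwise consistency loss between two stages.

    For every pair (i, j) in a chunk where the ranking stage puts i above
    j (higher ECPM), push the pre-ranking stage to score i above j too.
    """
    diff_pre = pre_scores.unsqueeze(1) - pre_scores.unsqueeze(0)    # [n, n]
    order = (rank_ecpm.unsqueeze(1) > rank_ecpm.unsqueeze(0)).float()
    # Logistic loss on misordered pairs, averaged over ordered pairs.
    return (order * F.softplus(-diff_pre)).sum() / order.sum().clamp(min=1.0)
```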
- A Tale of Two Cities: Data and Configuration Variances in Robust Deep Learning [27.498927971861068]
Deep neural networks (DNNs) are widely used in many industries such as image recognition, supply chain, medical diagnosis, and autonomous driving.
Prior work has shown that the high accuracy of a DNN model does not imply high robustness, because the input data and external environment are constantly changing.
arXiv Detail & Related papers (2022-11-18T03:32:53Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
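ATC as summarized above is simple enough to sketch end to end: calibrate a confidence threshold on labeled source data so that the fraction of source points above it matches source accuracy, then report the above-threshold fraction on unlabeled target data as the accuracy estimate. Function names are illustrative.

```python
import numpy as np

def fit_atc_threshold(src_conf, src_correct):
    """Pick threshold t so that mean(src_conf > t) equals source accuracy."""
    acc = src_correct.mean()
    # The (1 - acc)-quantile of source confidences leaves an `acc`
    # fraction of examples above the threshold.
    return np.quantile(src_conf, 1.0 - acc)

def atc_estimate(tgt_conf, threshold):
    """Predicted target accuracy: fraction of confidences above threshold."""
    return (tgt_conf > threshold).mean()

# src_conf / tgt_conf: per-example scores such as max softmax probability.
t = fit_atc_threshold(np.array([0.9, 0.8, 0.6, 0.4]), np.array([1, 1, 1, 0]))
print(atc_estimate(np.array([0.95, 0.7, 0.5]), t))   # estimated accuracy
```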
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence of each query sample in order to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
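As a hedged sketch of the transductive refinement described above: rather than averaging only the most confident queries, each unlabeled query can contribute to every class prototype with a confidence weight. In the paper that confidence is meta-learned; the fixed temperature below merely stands in for it, and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def refine_prototypes(prototypes, query_embs, temperature=10.0):
    """Confidence-weighted transductive prototype update (illustrative).

    Soft-assign each unlabeled query to the classes by embedding distance,
    then fold the queries back into the prototypes with those weights.
    `temperature` plays the role of the paper's meta-learned confidence.
    """
    # [n_query, n_class] soft assignment from negative squared distances.
    d2 = torch.cdist(query_embs, prototypes).pow(2)
    conf = F.softmax(-temperature * d2, dim=1)
    # Weighted mean of the original prototypes and confident queries.
    num = prototypes + conf.t() @ query_embs        # [n_class, dim]
    den = 1.0 + conf.sum(dim=0, keepdim=True).t()   # [n_class, 1]
    return num / den
```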
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.