COPR: Consistency-Oriented Pre-Ranking for Online Advertising
- URL: http://arxiv.org/abs/2306.03516v1
- Date: Tue, 6 Jun 2023 09:08:40 GMT
- Title: COPR: Consistency-Oriented Pre-Ranking for Online Advertising
- Authors: Zhishan Zhao, Jingyue Gao, Yu Zhang, Shuguang Han, Siyuan Lou,
Xiang-Rong Sheng, Zhe Wang, Han Zhu, Yuning Jiang, Jian Xu, Bo Zheng
- Abstract summary: We introduce a consistency-oriented pre-ranking framework for online advertising.
It employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize consistency of ECPM-ranked results.
When deployed in Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM.
- Score: 27.28920707332434
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cascading architecture has been widely adopted in large-scale advertising
systems to balance efficiency and effectiveness. In this architecture, the
pre-ranking model is expected to be a lightweight approximation of the ranking
model, which handles more candidates with strict latency requirements. Due to
the gap in model capacity, the pre-ranking and ranking models usually generate
inconsistent ranked results, thus hurting the overall system effectiveness. The
paradigm of score alignment is proposed to regularize their raw scores to be
consistent. However, it suffers from inevitable alignment errors and error
amplification by bids when applied in online advertising. To this end, we
introduce a consistency-oriented pre-ranking framework for online advertising,
which employs a chunk-based sampling module and a plug-and-play rank alignment
module to explicitly optimize consistency of ECPM-ranked results. A $\Delta
NDCG$-based weighting mechanism is adopted to better distinguish the importance
of inter-chunk samples in optimization. Both online and offline experiments
have validated the superiority of our framework. When deployed in Taobao
display advertising system, it achieves an improvement of up to +12.3\% CTR and
+5.6\% RPM.
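The rank-alignment idea in the abstract can be sketched as a ΔNDCG-weighted pairwise loss over the ranking model's ECPM order. The snippet below is a minimal illustration under stated assumptions, not COPR's exact formulation: the function names (`dcg_gain`, `delta_ndcg`, `alignment_loss`) and the logistic pairwise form are hypothetical, and the paper's chunk-based sampling is omitted.

```python
import math

def dcg_gain(rel, rank):
    # Contribution of an item with relevance `rel` at 1-based position `rank`.
    return (2 ** rel - 1) / math.log2(rank + 1)

def delta_ndcg(rel_i, rel_j, rank_i, rank_j, ideal_dcg):
    # |Delta NDCG| incurred by swapping items i and j in the ranked list.
    before = dcg_gain(rel_i, rank_i) + dcg_gain(rel_j, rank_j)
    after = dcg_gain(rel_i, rank_j) + dcg_gain(rel_j, rank_i)
    return abs(before - after) / ideal_dcg

def alignment_loss(pre_scores, ecpm_ranks, rels):
    # Pairwise logistic loss, weighted by |Delta NDCG|, pushing the
    # pre-ranking scores to reproduce the ranking model's ECPM order.
    ideal = sum(dcg_gain(r, k + 1) for k, r in enumerate(sorted(rels, reverse=True)))
    loss = 0.0
    n = len(pre_scores)
    for i in range(n):
        for j in range(n):
            if ecpm_ranks[i] < ecpm_ranks[j]:  # item i outranks item j under ECPM
                w = delta_ndcg(rels[i], rels[j], ecpm_ranks[i], ecpm_ranks[j], ideal)
                loss += w * math.log1p(math.exp(-(pre_scores[i] - pre_scores[j])))
    return loss
```

Under this sketch, pre-ranking scores that agree with the ECPM order incur a smaller loss than scores that invert it, and the ΔNDCG weight makes mistakes near the top of the list cost more than mistakes near the bottom.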
Related papers
- Optimizing E-commerce Search: Toward a Generalizable and Rank-Consistent Pre-Ranking Model [13.573766789458118]
In large e-commerce platforms, the pre-ranking phase is crucial for filtering out the bulk of products in advance for the downstream ranking module.
We propose a novel method: a Generalizable and RAnk-ConsistEnt Pre-Ranking Model (GRACE), which achieves: 1) ranking consistency, by introducing multiple binary classification tasks that predict whether a product is within the top-k results estimated by the ranking model, which facilitates adding learning objectives to common point-wise ranking models; 2) generalizability, through contrastive learning of representations for all products by pre-training on a subset of ranking product embeddings.
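GRACE's first ingredient, top-k membership prediction, can be illustrated with a toy helper: each auxiliary task asks whether the ranking model placed the product inside its top-k. Everything below (function names, the choice of k values, plain summed binary cross-entropy) is an assumption for illustration, not the paper's implementation.

```python
import math

def topk_labels(rank_position, ks=(10, 50, 100)):
    # Binary auxiliary labels: is the product inside the ranking model's top-k?
    return [1.0 if rank_position <= k else 0.0 for k in ks]

def rank_consistency_loss(probs, rank_position, ks=(10, 50, 100)):
    # Sum of binary cross-entropy terms, one per top-k classification task,
    # added on top of a standard point-wise pre-ranking objective.
    eps = 1e-7
    return -sum(y * math.log(p + eps) + (1.0 - y) * math.log(1.0 - p + eps)
                for y, p in zip(topk_labels(rank_position, ks), probs))
```

A product ranked 30th by the ranking model, for example, would get labels (0, 1, 1) for k = 10, 50, 100, and the pre-ranking model is penalized when its predicted membership probabilities disagree with those labels.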
arXiv Detail & Related papers (2024-05-09T07:55:52Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
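An Ordered Weighted Average itself is simple to compute; the paper's contribution is backpropagating through constrained optimization of such objectives. A minimal sketch of plain OWA aggregation follows, with the function name and weight values chosen purely for illustration:

```python
def owa(utilities, weights):
    # Ordered Weighted Average: weights are applied to utilities sorted in
    # descending order. Placing larger weights on the smallest utilities makes
    # the aggregate favor balanced (fair) outcomes over lopsided ones.
    assert len(utilities) == len(weights)
    return sum(w * u for w, u in zip(weights, sorted(utilities, reverse=True)))
```

With weights increasing toward the worst-off positions, an allocation of (3, 3, 3) scores higher than (9, 0, 0) even though both sum to 9, which is the fairness-promoting behavior the paper exploits.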
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- There is No Silver Bullet: Benchmarking Methods in Predictive Combinatorial Optimization [59.27851754647913]
Predictive combinatorial optimization precisely models many real-world applications, including energy cost-aware scheduling and budget allocation in advertising.
There has been no systematic benchmark of the predict-then-optimize (PtO) and predict-and-optimize (PnO) approaches, including their specific design choices at the module level.
Our study shows that PnO approaches outperform PtO on 7 out of 8 benchmarks, but no silver bullet is found for the specific design choices of PnO.
arXiv Detail & Related papers (2023-11-13T13:19:34Z)
- Rethinking Large-scale Pre-ranking System: Entire-chain Cross-domain Models [0.0]
Existing pre-ranking approaches mainly suffer from the sample selection bias (SSB) problem because they ignore entire-chain data dependence.
We propose Entire-chain Cross-domain Models (ECM), which leverage samples from all cascaded stages to effectively alleviate the SSB problem.
We also propose a fine-grained neural structure named ECMM to further improve the pre-ranking accuracy.
arXiv Detail & Related papers (2023-10-12T05:14:42Z)
- Model-based Causal Bayesian Optimization [74.78486244786083]
We introduce the first algorithm for Causal Bayesian Optimization with Multiplicative Weights (CBO-MW).
We derive regret bounds for CBO-MW that naturally depend on graph-related quantities.
Our experiments include a realistic demonstration of how CBO-MW can be used to learn users' demand patterns in a shared mobility system.
arXiv Detail & Related papers (2023-07-31T13:02:36Z)
- Confidence Ranking for CTR Prediction [11.071444869776725]
We propose a novel framework, named Confidence Ranking, which designs the optimization objective as a ranking function.
Our experiments show that the introduction of confidence ranking loss can outperform all baselines on the CTR prediction tasks of public and industrial datasets.
This framework has been deployed in the advertisement system of JD.com to serve the main traffic in the fine-rank stage.
arXiv Detail & Related papers (2023-06-28T07:31:00Z)
- Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the PR-divergences.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z)
- SwiftPruner: Reinforced Evolutionary Pruning for Efficient Ad Relevance [19.930169700686672]
This work aims to design a new, low-latency BERT via structured pruning to empower real-time online inference for cold start ads relevance on a CPU platform.
In this paper, we propose SwiftPruner - an efficient framework that leverages evolution-based search to automatically find the best-performing layer-wise sparse BERT model.
arXiv Detail & Related papers (2022-08-30T03:05:56Z)
- Learning-To-Ensemble by Contextual Rank Aggregation in E-Commerce [8.067201256886733]
We propose a new Learning-To-Ensemble framework, RAEGO, which replaces the ensemble model with a contextual Rank Aggregator.
RAEGO has been deployed in our online system and has significantly improved revenue.
arXiv Detail & Related papers (2021-07-19T03:24:06Z)
- Higher Performance Visual Tracking with Dual-Modal Localization [106.91097443275035]
Visual Object Tracking (VOT) has synchronous needs for both robustness and accuracy.
We propose a dual-modal framework for target localization, consisting of robust localization that suppresses distractors via ONR and accurate localization that attends precisely to the target center via OFC.
arXiv Detail & Related papers (2021-03-18T08:47:56Z)
- CRACT: Cascaded Regression-Align-Classification for Robust Visual Tracking [97.84109669027225]
We introduce an improved proposal refinement module, Cascaded Regression-Align-Classification (CRAC).
CRAC yields new state-of-the-art performances on many benchmarks.
In experiments on seven benchmarks including OTB-2015, UAV123, NfS, VOT-2018, TrackingNet, GOT-10k and LaSOT, our CRACT exhibits very promising results in comparison with state-of-the-art competitors.
arXiv Detail & Related papers (2020-11-25T02:18:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.