Optimizing E-commerce Search: Toward a Generalizable and Rank-Consistent Pre-Ranking Model
- URL: http://arxiv.org/abs/2405.05606v3
- Date: Wed, 21 Aug 2024 06:20:34 GMT
- Title: Optimizing E-commerce Search: Toward a Generalizable and Rank-Consistent Pre-Ranking Model
- Authors: Enqiang Xu, Yiming Qiu, Junyang Bai, Ping Zhang, Dadong Miao, Songlin Wang, Guoyu Tang, Lin Liu, Mingming Li, et al.
- Abstract summary: In large e-commerce platforms, the pre-ranking phase is crucial for filtering out the bulk of products in advance for the downstream ranking module.
We propose a novel method: a Generalizable and RAnk-ConsistEnt Pre-Ranking Model (GRACE), which achieves: 1) ranking consistency, by introducing multiple binary classification tasks that predict whether a product is within the top-k results as estimated by the ranking model, which facilitates the addition of learning objectives on common point-wise ranking models; and 2) generalizability, through contrastive learning of representations for all products by pre-training on a subset of ranking product embeddings.
- Score: 13.573766789458118
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In large e-commerce platforms, search systems are typically composed of a series of modules, including recall, pre-ranking, and ranking phases. The pre-ranking phase, serving as a lightweight module, is crucial for filtering out the bulk of products in advance for the downstream ranking module. Industrial efforts on optimizing the pre-ranking model have predominantly focused on enhancing ranking consistency, model structure, and generalization towards long-tail items. Beyond these optimizations, meeting the system performance requirements presents a significant challenge. Contrasting with existing industry works, we propose a novel method: a Generalizable and RAnk-ConsistEnt Pre-Ranking Model (GRACE), which achieves: 1) ranking consistency, by introducing multiple binary classification tasks that predict whether a product is within the top-k results as estimated by the ranking model, which facilitates the addition of learning objectives on common point-wise ranking models; 2) generalizability, through contrastive learning of representations for all products by pre-training on a subset of ranking product embeddings; 3) ease of implementation in feature construction and online deployment. Our extensive experiments demonstrate significant improvements in both offline metrics and online A/B tests: a 0.75% increase in AUC and a 1.28% increase in CVR.
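The consistency mechanism is described concretely enough to sketch: alongside the usual point-wise objective, add one binary head per top-k threshold, each predicting whether the downstream ranking model would place the item in its top k. Below is a minimal PyTorch sketch of that idea under stated assumptions; the layer sizes, the threshold choices, and the `grace_style_loss` helper are illustrative, not the paper's implementation, and the contrastive pre-training component is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreRankerWithTopKHeads(nn.Module):
    """Point-wise pre-ranker plus auxiliary top-k consistency heads
    (illustrative sketch; not the GRACE architecture itself)."""

    def __init__(self, feat_dim: int, k_list=(10, 50, 100)):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.ctr_head = nn.Linear(64, 1)          # usual point-wise objective
        self.topk_heads = nn.ModuleList(          # one binary head per k
            nn.Linear(64, 1) for _ in k_list
        )
        self.k_list = k_list

    def forward(self, x):
        h = self.backbone(x)
        ctr_logit = self.ctr_head(h).squeeze(-1)
        topk_logits = [head(h).squeeze(-1) for head in self.topk_heads]
        return ctr_logit, topk_logits


def grace_style_loss(ctr_logit, topk_logits, clicks, rank_positions, k_list):
    """Point-wise BCE plus one consistency BCE per k: the label for head k
    is whether the ranking model placed the item within its top k."""
    loss = F.binary_cross_entropy_with_logits(ctr_logit, clicks)
    for logit, k in zip(topk_logits, k_list):
        in_topk = (rank_positions < k).float()
        loss = loss + F.binary_cross_entropy_with_logits(logit, in_topk)
    return loss
```

In this reading, `rank_positions` would be logged from the downstream ranking model's ordering at training time, so no extra online computation is needed at serving.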
Related papers
- Towards More Relevant Product Search Ranking Via Large Language Models: An Empirical Study [14.826942979030356]
Large Language Models (LLMs) are used for both label and feature generation in model training.
We introduce different sigmoid transformations on the LLM outputs to polarize relevance scores in labeling.
Our work sheds light on advanced strategies for integrating LLMs into e-commerce product search ranking model training.
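As a rough illustration of what a polarizing transformation could look like, a steep sigmoid pushes mid-range LLM relevance scores toward 0 or 1; the `midpoint` and `steepness` values here are assumptions, not the paper's.

```python
import numpy as np

def polarize(scores, midpoint=0.5, steepness=10.0):
    """Push raw LLM relevance scores in [0, 1] toward the extremes.

    A steep sigmoid centered at `midpoint` maps ambiguous mid-range scores
    toward 0 or 1; both parameters are illustrative assumptions.
    """
    scores = np.asarray(scores, dtype=float)
    return 1.0 / (1.0 + np.exp(-steepness * (scores - midpoint)))

print(polarize([0.2, 0.45, 0.55, 0.9]))  # -> approx. [0.05, 0.38, 0.62, 0.98]
```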
arXiv Detail & Related papers (2024-09-26T01:38:05Z)
- Generative Pre-trained Ranking Model with Over-parameterization at Web-Scale (Extended Abstract) [73.57710917145212]
Learning to rank is widely employed in web searches to prioritize pertinent webpages based on input queries.
We propose a Generative Semi-Supervised Pre-trained (GS2P) model to address these challenges.
We conduct extensive offline experiments on both a publicly available dataset and a real-world dataset collected from a large-scale search engine.
arXiv Detail & Related papers (2024-09-25T03:39:14Z)
- RankTower: A Synergistic Framework for Enhancing Two-Tower Pre-Ranking Model [0.0]
In large-scale ranking systems, cascading architectures have been widely adopted to balance efficiency and effectiveness.
The pre-ranking model in particular must stay accurate while adhering to strict online latency constraints.
We propose a novel neural network architecture called RankTower, which is designed to efficiently capture user-item interactions.
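For background, a minimal sketch of the plain two-tower pattern that models in this family build on: user and item are encoded independently, so the score reduces to a dot product and item vectors can be precomputed and served from an index. The layers below are illustrative; RankTower's own synergistic components are not shown.

```python
import torch
import torch.nn as nn

class TwoTower(nn.Module):
    """Generic two-tower pre-ranker: user and item encode independently,
    the score is a dot product. Baseline pattern only, not RankTower."""

    def __init__(self, user_dim: int, item_dim: int, emb_dim: int = 64):
        super().__init__()
        self.user_tower = nn.Sequential(
            nn.Linear(user_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim)
        )
        self.item_tower = nn.Sequential(
            nn.Linear(item_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim)
        )

    def forward(self, user_x, item_x):
        u = self.user_tower(user_x)   # [B, emb_dim]
        v = self.item_tower(item_x)   # [B, emb_dim], cacheable offline
        return (u * v).sum(-1)        # dot-product score per pair
```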
arXiv Detail & Related papers (2024-07-17T08:07:37Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
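A minimal sketch of the OWA objective itself (the paper's contribution, backpropagating through constrained optimization of OWA objectives, goes further than this): sort the per-group utilities and dot them with a fixed non-increasing weight vector. Sorting is piecewise linear, so gradients flow back to the unsorted utilities almost everywhere.

```python
import torch

def owa(utilities: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Ordered Weighted Average of per-group utilities.

    Utilities are sorted ascending and dotted with a non-increasing weight
    vector, so the worst-off groups receive the largest weights (the
    fairness-promoting case)."""
    sorted_u, _ = torch.sort(utilities)  # ascending
    return torch.dot(sorted_u, weights)

u = torch.tensor([0.9, 0.2, 0.5], requires_grad=True)
w = torch.tensor([0.5, 0.3, 0.2])       # non-increasing -> fair OWA
owa(u, w).backward()
print(u.grad)                            # largest gradient on the 0.2 entry
```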
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Benchmarking PtO and PnO Methods in the Predictive Combinatorial Optimization Regime [59.27851754647913]
Predictive combinatorial optimization precisely models many real-world applications, including energy cost-aware scheduling and budget allocation in advertising.
We develop a modular framework to benchmark 11 existing predict-then-optimize (PtO) and predict-and-optimize (PnO) methods on 8 problems, including a new industrial dataset for advertising.
Our study shows that PnO approaches beat PtO on 7 of the 8 benchmarks, but no single PnO design choice emerges as a silver bullet.
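To make the PtO/PnO distinction concrete, here is a toy sketch on "pick the best of n items": PtO would fit the predictor to prediction error alone, while PnO trains it through a differentiable surrogate of the decision loss. Everything here is illustrative; the benchmark covers far richer combinatorial problems.

```python
import torch

torch.manual_seed(0)
n, d = 4, 3
w_true = torch.randn(d)
feats = torch.randn(512, n, d)
values = feats @ w_true                 # true item values, [512, n]

w = torch.zeros(d, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(200):
    pred = feats @ w
    # PtO would minimize prediction error alone:
    #   loss = ((pred - values) ** 2).mean()
    # PnO instead minimizes decision regret, using a softmax surrogate
    # for the non-differentiable argmax selection:
    choice = torch.softmax(pred * 10, dim=-1)
    regret = (values.max(-1).values - (choice * values).sum(-1)).mean()
    opt.zero_grad()
    regret.backward()
    opt.step()
print(regret.item())                    # final decision regret (lower is better)
```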
arXiv Detail & Related papers (2023-11-13T13:19:34Z)
- COPR: Consistency-Oriented Pre-Ranking for Online Advertising [27.28920707332434]
We introduce a consistency-oriented pre-ranking framework for online advertising.
It employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize the consistency of ECPM-ranked results.
When deployed in the Taobao display advertising system, it achieves improvements of up to +12.3% CTR and +5.6% RPM.
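One plausible reading of rank alignment, offered here as an assumption rather than COPR's published module, is a pairwise consistency loss inside each sampled chunk:

```python
import torch
import torch.nn.functional as F

def rank_alignment_loss(pre_scores: torch.Tensor, ecpm: torch.Tensor) -> torch.Tensor:
    """Pairwise consistency loss (illustrative reading, not COPR itself):
    for every pair of ads in one sampled chunk, ask the pre-ranking scores
    to order them the same way the downstream ECPM ranking does.

    pre_scores, ecpm: [chunk_size] tensors for one chunk.
    """
    diff_pre = pre_scores.unsqueeze(0) - pre_scores.unsqueeze(1)  # s_j - s_i
    target = (ecpm.unsqueeze(0) > ecpm.unsqueeze(1)).float()      # 1 if j beats i
    mask = ~torch.eye(len(ecpm), dtype=torch.bool)                # skip i == j
    return F.binary_cross_entropy_with_logits(diff_pre[mask], target[mask])
```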
arXiv Detail & Related papers (2023-06-06T09:08:40Z)
- PEAR: Personalized Re-ranking with Contextualized Transformer for Recommendation [48.17295872384401]
We present a personalized re-ranking model (dubbed PEAR) based on a contextualized transformer.
PEAR makes several major improvements over existing methods.
We also augment the training of PEAR with a list-level classification task to assess users' satisfaction with the whole ranking list.
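A hedged sketch of what such a list-level auxiliary task could look like: pool the contextualized item representations and emit one satisfaction logit per list. The pooling choice, layer sizes, and label definition (e.g., whether the list produced any click) are assumptions, not PEAR's published details.

```python
import torch
import torch.nn as nn

class ListSatisfactionHead(nn.Module):
    """Auxiliary list-level task: encode the list with a transformer, pool,
    and classify whether the user is satisfied with the list as a whole."""

    def __init__(self, d_model: int = 64):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=1,
        )
        self.cls = nn.Linear(d_model, 1)

    def forward(self, item_reprs):                    # [B, list_len, d_model]
        h = self.encoder(item_reprs)                  # contextualized items
        return self.cls(h.mean(dim=1)).squeeze(-1)    # one logit per list
```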
arXiv Detail & Related papers (2022-03-23T08:29:46Z)
- Learning-To-Ensemble by Contextual Rank Aggregation in E-Commerce [8.067201256886733]
We propose a new Learning-To-Ensemble framework, RA-EGO, which replaces the ensemble model with a contextual Rank Aggregator.
RA-EGO has been deployed in our online system and has significantly improved revenue.
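For intuition about rank aggregation, a classical weighted Borda count makes a serviceable stand-in; RA-EGO learns a contextual aggregator rather than fixed weights, so treat this purely as background.

```python
import numpy as np

def weighted_borda(rankings, weights):
    """Aggregate several base rankers' orderings with a weighted Borda
    count (a classical aggregator, not RA-EGO's learned one).

    rankings: list of arrays, each a permutation of item ids (best first).
    weights:  one weight per base ranker.
    """
    n = len(rankings[0])
    score = np.zeros(n)
    for rank, w in zip(rankings, weights):
        for pos, item in enumerate(rank):
            score[item] += w * (n - 1 - pos)  # higher position, more points
    return np.argsort(-score)                 # aggregated order, best first

print(weighted_borda([np.array([0, 1, 2]), np.array([2, 0, 1])], [0.7, 0.3]))
# -> [0 1 2]: item 0 wins on the combined weighted positions
```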
arXiv Detail & Related papers (2021-07-19T03:24:06Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve model generalization in few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We achieve new state-of-the-art results in both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
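The additive structure that makes ranking GAMs intrinsically interpretable is easy to sketch: the document score is a sum of independent per-feature sub-models, so each feature's contribution can be read off directly. Sub-model sizes below are illustrative.

```python
import torch
import torch.nn as nn

class RankingGAM(nn.Module):
    """Minimal ranking GAM: the score is a sum of per-feature sub-models,
    so each feature's contribution to a document's score is inspectable."""

    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.shape_fns = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )

    def forward(self, x):                              # x: [B, n_features]
        contribs = [f(x[:, i : i + 1]) for i, f in enumerate(self.shape_fns)]
        return torch.cat(contribs, dim=1).sum(dim=1)   # additive score
```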
arXiv Detail & Related papers (2020-05-06T01:51:30Z)