Learning-To-Ensemble by Contextual Rank Aggregation in E-Commerce
- URL: http://arxiv.org/abs/2107.08598v1
- Date: Mon, 19 Jul 2021 03:24:06 GMT
- Title: Learning-To-Ensemble by Contextual Rank Aggregation in E-Commerce
- Authors: Xuesi Wang, Guangda Huzhang, Qianying Lin, Qing Da, Dan Shen
- Abstract summary: We propose a new Learning-To-Ensemble framework RA-EGO, which replaces the ensemble model with a contextual Rank Aggregator.
RA-EGO has been deployed in our online system and has improved the revenue significantly.
- Score: 8.067201256886733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensemble models in E-commerce combine predictions from multiple sub-models
for ranking and revenue improvement. Industrial ensemble models are typically
deep neural networks, following the supervised learning paradigm to infer
conversion rate given inputs from sub-models. However, this process has the
following two problems. Firstly, the point-wise scoring approach disregards the
relationships between items and leads to homogeneous displayed results, while
diversified display benefits user experience and revenue. Secondly, the
learning paradigm focuses on the ranking metrics and does not directly optimize
the revenue. In our work, we propose a new Learning-To-Ensemble (LTE) framework
RA-EGO, which replaces the ensemble model with a contextual Rank Aggregator (RA)
and explores the best weights of sub-models by the Evaluator-Generator
Optimization (EGO). To achieve the best online performance, we propose a new
rank aggregation algorithm TournamentGreedy as a refinement of classic rank
aggregators, which also produces the best average weighted Kendall Tau Distance
(KTD) amongst all the considered algorithms with quadratic time complexity.
Under the assumption that the best output list should be Pareto Optimal on the
KTD metric for sub-models, we show that our RA algorithm has higher efficiency
and coverage in exploring the optimal weights. Combined with the idea of
Bayesian Optimization and gradient descent, we solve the online contextual
Black-Box Optimization task that finds the optimal weights for sub-models given
a chosen RA model. RA-EGO has been deployed in our online system and has
improved the revenue significantly.
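The abstract describes the core quantities of the framework in prose: each sub-model contributes a ranking and a weight, candidate lists are judged by their weighted Kendall Tau Distance (KTD) to the sub-model lists, and a rank aggregator builds the output list. The sketch below only illustrates those quantities on simple permutation inputs; the greedy aggregator shown is a generic pairwise-preference heuristic, not the paper's TournamentGreedy algorithm, and the function names are assumptions.

```python
# Minimal sketch (not the paper's TournamentGreedy): weighted Kendall Tau
# Distance against weighted sub-model rankings, plus a generic greedy
# aggregator driven by pairwise preferences.
from itertools import combinations

def kendall_tau_distance(perm_a, perm_b):
    """Number of item pairs ordered differently by the two permutations."""
    pos_a = {item: i for i, item in enumerate(perm_a)}
    pos_b = {item: i for i, item in enumerate(perm_b)}
    return sum(
        1
        for x, y in combinations(perm_a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )

def weighted_ktd(candidate, sub_model_perms, weights):
    """Weighted average KTD of a candidate list against all sub-model lists."""
    return sum(
        w * kendall_tau_distance(candidate, perm)
        for perm, w in zip(sub_model_perms, weights)
    ) / sum(weights)

def greedy_aggregate(items, sub_model_perms, weights):
    """Greedy aggregation: repeatedly pick the item that the weighted
    pairwise 'tournament' prefers over the most remaining items."""
    pos = [{item: i for i, item in enumerate(p)} for p in sub_model_perms]

    def prefers(x, y):  # weighted vote that x should precede y
        return sum(w for p, w in zip(pos, weights) if p[x] < p[y])

    remaining, result = set(items), []
    while remaining:
        best = max(
            remaining,
            key=lambda x: sum(prefers(x, y) for y in remaining if y != x),
        )
        result.append(best)
        remaining.remove(best)
    return result

if __name__ == "__main__":
    perms = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["a", "c", "b", "d"]]
    weights = [0.5, 0.3, 0.2]
    agg = greedy_aggregate(["a", "b", "c", "d"], perms, weights)
    print(agg, weighted_ktd(agg, perms, weights))
```

In the paper's setting, the sub-model weights fed to such an aggregator become the variables of the contextual black-box optimization that EGO solves online; that loop is not reproduced in this sketch.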
Related papers
- A Collaborative Ensemble Framework for CTR Prediction [73.59868761656317]
We propose a novel framework, Collaborative Ensemble Training Network (CETNet), to leverage multiple distinct models.
Unlike naive model scaling, our approach emphasizes diversity and collaboration through collaborative learning.
We validate our framework on three public datasets and a large-scale industrial dataset from Meta.
arXiv Detail & Related papers (2024-11-20T20:38:56Z)
- TSPRank: Bridging Pairwise and Listwise Methods with a Bilinear Travelling Salesman Model [19.7255072094322]
Travelling Salesman Problem Rank (TSPRank) is a hybrid pairwise-listwise ranking method.
TSPRank's robustness and superior performance across different domains highlight its potential as a versatile and effective LETOR solution.
arXiv Detail & Related papers (2024-11-18T21:10:14Z)
- LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z)
- Preference Learning Algorithms Do Not Learn Preference Rankings [62.335733662381884]
We study the conventional wisdom that preference learning trains models to assign higher likelihoods to more preferred outputs than less preferred outputs.
We find that most state-of-the-art preference-tuned models achieve a ranking accuracy of less than 60% on common preference datasets.
arXiv Detail & Related papers (2024-05-29T21:29:44Z)
- Optimizing E-commerce Search: Toward a Generalizable and Rank-Consistent Pre-Ranking Model [13.573766789458118]
In large e-commerce platforms, the pre-ranking phase is crucial for filtering out the bulk of products in advance for the downstream ranking module.
We propose a novel method, the Generalizable and RAnk-ConsistEnt Pre-Ranking Model (GRACE), which achieves: 1) ranking consistency, by introducing multiple binary classification tasks that predict whether a product is within the top-k results estimated by the ranking model, which makes it easy to add these learning objectives to common point-wise ranking models; and 2) generalizability, through contrastive learning of representations for all products, pre-trained on a subset of ranking product embeddings.
arXiv Detail & Related papers (2024-05-09T07:55:52Z)
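The GRACE entry above frames ranking consistency as binary classification tasks over top-k membership. A minimal sketch of how such labels could be derived from the downstream ranker's scores is given below; the function name, the choice of k values, and the label construction are illustrative assumptions, not details taken from that paper.

```python
# Hypothetical sketch: turn ranking-model scores into binary top-k membership
# labels, one task per k, as auxiliary objectives for a pre-ranking model.
import numpy as np

def topk_membership_labels(ranking_scores, ks=(10, 50, 100)):
    """ranking_scores: (num_products,) scores from the downstream ranker.
    Returns a dict {k: (num_products,) 0/1 labels} marking top-k membership."""
    order = np.argsort(-ranking_scores)          # best product first
    labels = {}
    for k in ks:
        y = np.zeros(len(ranking_scores), dtype=np.float32)
        y[order[:k]] = 1.0                       # 1 if product is in ranker's top-k
        labels[k] = y
    return labels

scores = np.random.rand(1000)
labels = topk_membership_labels(scores)
print({k: int(v.sum()) for k, v in labels.items()})  # {10: 10, 50: 50, 100: 100}
```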
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
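For the Ordered Weighted Average objective named in the fair-ranking entry above, the sketch below shows a plain OWA computation over per-group utilities with fairness-leaning weights. The weight schedule and utility values are assumptions for illustration; the paper's contribution, backpropagating through constrained optimization of this objective, is not reproduced here.

```python
# Generic Ordered Weighted Average (OWA) of per-group utilities. Weights
# decrease from the worst-off group to the best-off group, so the objective
# emphasizes fairness; the weight schedule here is an assumption.
import numpy as np

def owa(utilities, weights):
    """OWA = sum_i weights[i] * sorted(utilities, ascending)[i]."""
    u = np.sort(np.asarray(utilities, dtype=float))   # worst-off group first
    w = np.asarray(weights, dtype=float)
    assert u.shape == w.shape
    return float(np.dot(w, u))

# Example: three groups' exposure-based utilities with fairness-leaning weights.
group_utilities = [0.2, 0.9, 0.5]
fair_weights = [0.6, 0.3, 0.1]             # most weight on the worst-off group
print(owa(group_utilities, fair_weights))  # 0.6*0.2 + 0.3*0.5 + 0.1*0.9
```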
- Adaptive Neural Ranking Framework: Toward Maximized Business Goal for Cascade Ranking Systems [33.46891569350896]
Cascade ranking is widely used for large-scale top-k selection problems in online advertising and recommendation systems.
Previous works on learning-to-rank usually focus on letting the model learn the complete order or top-k order.
We name this method the Adaptive Neural Ranking Framework (abbreviated as ARF).
arXiv Detail & Related papers (2023-10-16T14:43:02Z)
- Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC).
DNCC yields a deep classification ensemble where the individual estimator is both accurate and negatively correlated.
arXiv Detail & Related papers (2022-12-14T07:35:20Z)
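The DNCC entry above relies on ensemble members being accurate yet negatively correlated. As background, the sketch below evaluates a classic negative-correlation penalty over member predictions; it is a generic illustration in the spirit of negative correlation learning, not the DNCC loss itself.

```python
# Generic negative-correlation penalty for an ensemble: members are rewarded
# for deviating from the ensemble mean in different directions, which
# discourages them from making the same errors.
import numpy as np

def negative_correlation_penalty(member_outputs):
    """member_outputs: (num_members, batch) predictions of each estimator.
    p_i = (f_i - f_bar) * sum_{j != i}(f_j - f_bar), which equals
    -(f_i - f_bar)^2 because the deviations sum to zero; adding the summed
    penalty (scaled) to the loss therefore rewards spread among members."""
    f = np.asarray(member_outputs, dtype=float)
    deviations = f - f.mean(axis=0, keepdims=True)
    return float(-(deviations ** 2).sum())

outputs = np.array([[0.9, 0.2, 0.4],
                    [0.7, 0.3, 0.5],
                    [0.1, 0.8, 0.6]])   # 3 members, batch of 3
lam = 0.1                               # assumed trade-off coefficient
print(lam * negative_correlation_penalty(outputs))  # more negative = more diverse
```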
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Adaptive Optimizers with Sparse Group Lasso for Neural Networks in CTR Prediction [19.71671771503269]
We develop a novel framework that adds sparse group lasso regularizers to a family of adaptive optimizers in deep learning.
We establish convergence guarantees in theoretically convex settings.
Our methods can achieve extremely high sparsity with significantly better or highly competitive performance.
arXiv Detail & Related papers (2021-07-30T05:33:43Z)
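As background for the sparse group lasso regularizer mentioned in the last entry, the sketch below evaluates the usual penalty: an L1 term plus group-wise, size-scaled L2 terms. The group layout and coefficients are assumptions for illustration; how the regularizer is folded into the adaptive optimizers is specific to that paper and not shown.

```python
# Sparse group lasso penalty:
#   lambda1 * ||w||_1 + lambda2 * sum_g sqrt(p_g) * ||w_g||_2
# Groups might correspond to the embedding slots of individual CTR features.
import numpy as np

def sparse_group_lasso(weights, groups, lambda1=1e-4, lambda2=1e-4):
    """weights: 1-D parameter vector; groups: list of index arrays, one per group."""
    w = np.asarray(weights, dtype=float)
    l1 = np.abs(w).sum()
    group_l2 = sum(np.sqrt(len(g)) * np.linalg.norm(w[g]) for g in groups)
    return lambda1 * l1 + lambda2 * group_l2

w = np.array([0.0, 0.0, 0.0, 0.5, -0.2, 0.1])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]   # two feature groups
print(sparse_group_lasso(w, groups))  # the first group can be zeroed out entirely
```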