Replace Scoring with Arrangement: A Contextual Set-to-Arrangement
Framework for Learning-to-Rank
- URL: http://arxiv.org/abs/2308.02860v2
- Date: Fri, 25 Aug 2023 07:59:36 GMT
- Title: Replace Scoring with Arrangement: A Contextual Set-to-Arrangement
Framework for Learning-to-Rank
- Authors: Jiarui Jin, Xianyu Chen, Weinan Zhang, Mengyue Yang, Yang Wang, Yali
Du, Yong Yu, Jun Wang
- Abstract summary: Learning-to-rank is a core technique in the top-N recommendation task, where an ideal ranker would be a mapping from an item set to an arrangement.
Most existing solutions fall into the paradigm of the probabilistic ranking principle (PRP), i.e., first score each item in the candidate set and then perform a sort operation to generate the top ranking list.
We propose Set-To-Arrangement Ranking (STARank), a new framework that directly generates permutations of the candidate items without the need for individual scoring and sorting operations.
- Score: 40.81502990315285
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Learning-to-rank is a core technique in the top-N recommendation task, where
an ideal ranker would be a mapping from an item set to an arrangement (a.k.a.
permutation). Most existing solutions fall into the paradigm of the probabilistic
ranking principle (PRP), i.e., first score each item in the candidate set and
then perform a sort operation to generate the top ranking list. However, these
approaches neglect the contextual dependence among candidate items during
individual scoring, and the sort operation is non-differentiable. To bypass the
above issues, we propose Set-To-Arrangement Ranking (STARank), a new framework
that directly generates the permutations of the candidate items without the need
for individual scoring and sorting operations, and that is end-to-end differentiable. As
a result, STARank can operate when only the ground-truth permutations are
accessible without requiring access to the ground-truth relevance scores for
items. For this purpose, STARank first reads the candidate items in the context
of the user's browsing history; the resulting representations are fed into a
Plackett-Luce module that arranges the given items into a list. To effectively
utilize the given ground-truth permutations for supervising STARank, we
leverage the internal consistency property of Plackett-Luce models to derive a
computationally efficient list-wise loss. Experimental comparisons against 9
state-of-the-art methods on 2 learning-to-rank benchmark datasets and 3
top-N real-world recommendation datasets demonstrate the superiority of STARank
in terms of conventional ranking metrics. Since these ranking metrics do not
consider the effects of the contextual dependence among the items in the
list, we design a new family of simulation-based ranking metrics, where
existing metrics can be regarded as special cases. STARank can consistently
achieve better performance in terms of PBM and UBM simulation-based metrics.
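To make the training recipe concrete, the following is a minimal sketch of a Plackett-Luce list-wise loss in PyTorch. It assumes contextual item utilities have already been produced by some encoder over the candidate set and the user's browsing history; the function name and tensor shapes are illustrative assumptions, not STARank's exact architecture. The loss is supervised by a ground-truth permutation alone and is differentiable end-to-end, with no sort operation.

```python
# Minimal sketch (not the paper's implementation): Plackett-Luce
# list-wise negative log-likelihood, supervised by a ground-truth
# permutation with no item-level relevance scores.
import torch

def plackett_luce_nll(scores: torch.Tensor, perm: torch.Tensor) -> torch.Tensor:
    """NLL of a ground-truth permutation under a Plackett-Luce model.

    scores: (n,) real-valued utilities, one per candidate item,
            assumed to come from a contextual encoder.
    perm:   (n,) ground-truth arrangement; perm[k] is the index of
            the item placed at rank k.
    """
    ordered = scores[perm]  # utilities in ranked order
    # log of the shrinking normalizer sum_{j >= k} exp(s_{pi_j}),
    # accumulated stably from the tail of the list in O(n).
    log_denom = torch.logcumsumexp(ordered.flip(0), dim=0).flip(0)
    return (log_denom - ordered).sum()

# Toy usage: four items; the ground truth ranks item 2 first, then 0, 3, 1.
scores = torch.tensor([1.2, -0.3, 2.1, 0.4], requires_grad=True)
loss = plackett_luce_nll(scores, torch.tensor([2, 0, 3, 1]))
loss.backward()  # gradients flow to the scores without any sort
```

Each factor drops the already-placed items from its normalizer, which is the standard sequential factorization of a Plackett-Luce permutation probability; the abstract does not spell out the internal-consistency derivation, so this should be read as the generic form.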
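On the evaluation side, PBM (the position-based model) and UBM (the user browsing model) are standard click simulators: under PBM a click at rank k requires examining the position and finding the item attractive, while under UBM examination also depends on the rank of the most recent click. A hedged sketch of a PBM-style metric follows; the examination decay is an illustrative assumption, not the paper's simulator.

```python
# Illustrative sketch of a PBM-style simulation metric: expected number
# of clicks on a ranked list under a position-based click model.
import math

def pbm_expected_clicks(ranked_relevance, exam=None):
    """Sum over ranks of exam(k) * attractiveness(item at rank k).

    ranked_relevance: click/attractiveness probabilities in ranked order.
    exam: examination probability per 0-based rank; the DCG-like default
          is an assumption chosen to show how a conventional metric can
          appear as the special case of one examination curve.
    """
    if exam is None:
        exam = lambda k: 1.0 / math.log2(k + 2)
    return sum(exam(k) * rel for k, rel in enumerate(ranked_relevance))

print(pbm_expected_clicks([0.9, 0.7, 0.2, 0.1]))  # higher is better
```

With the logarithmic examination curve and graded relevance in place of attractiveness, this expression coincides with DCG, which illustrates the abstract's claim that existing metrics can be regarded as special cases of the simulation-based family.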
Related papers
- Self-Calibrated Listwise Reranking with Large Language Models [137.6557607279876]
Large language models (LLMs) have been employed in reranking tasks through a sequence-to-sequence approach.
This reranking paradigm requires a sliding window strategy to iteratively handle larger candidate sets.
We propose a novel self-calibrated listwise reranking method, which aims to leverage LLMs to produce global relevance scores for ranking.
arXiv Detail & Related papers (2024-11-07T10:31:31Z) - SortNet: Learning To Rank By a Neural-Based Sorting Algorithm [5.485151775727742]
We present SortNet, an adaptive ranking algorithm which orders objects using a neural network as a comparator.
The proposed algorithm is evaluated on the LETOR dataset, showing promising performance in comparison with other state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-03T12:14:26Z) - Zero-Shot Listwise Document Reranking with a Large Language Model [58.64141622176841]
We propose Listwise Reranker with a Large Language Model (LRL), which achieves strong reranking effectiveness without using any task-specific training data.
Experiments on three TREC web search datasets demonstrate that LRL not only outperforms zero-shot pointwise methods when reranking first-stage retrieval results, but can also act as a final-stage reranker.
arXiv Detail & Related papers (2023-05-03T14:45:34Z) - PIER: Permutation-Level Interest-Based End-to-End Re-ranking Framework
in E-commerce [13.885695433738437]
Existing re-ranking methods directly take the initial ranking list as input and generate the optimal permutation through a well-designed context-wise model.
However, evaluating all candidate permutations brings unacceptable computational costs in practice.
This paper presents a novel end-to-end re-ranking framework named PIER to tackle the above challenges.
arXiv Detail & Related papers (2023-02-06T09:17:52Z) - Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
The benefits are twofold: it leads to the first domain adaptation generalization bound for ranking, which in turn provides theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z) - PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature (a sketch of such a temperature-controlled relaxation follows this list).
arXiv Detail & Related papers (2020-12-12T05:07:36Z) - SetRank: A Setwise Bayesian Approach for Collaborative Ranking from
Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender systems.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present the theoretical analysis of SetRank to show that the bound of excess risk can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
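The PiRank entry above refers to a temperature-controlled relaxation that recovers hard sorting in the zero-temperature limit. Below is a minimal sketch of a NeuralSort-style relaxed permutation matrix, the kind of relaxation PiRank builds its surrogates on; it is illustrative only and omits PiRank's metric-specific surrogate losses.

```python
# Sketch of a NeuralSort-style relaxed (soft) permutation matrix.
# Rows approach one-hot vectors as tau -> 0, recovering the hard
# descending sort; for tau > 0 the operator is differentiable.
import torch

def soft_sort_matrix(s: torch.Tensor, tau: float) -> torch.Tensor:
    n = s.size(0)
    # row sums of the pairwise-distance matrix: sum_k |s_j - s_k|
    abs_diff = (s.unsqueeze(0) - s.unsqueeze(1)).abs().sum(dim=1)
    i = torch.arange(1, n + 1, dtype=s.dtype)
    logits = ((n + 1 - 2 * i).unsqueeze(1) * s.unsqueeze(0)
              - abs_diff.unsqueeze(0)) / tau
    return torch.softmax(logits, dim=1)  # row r: soft choice for rank r

s = torch.tensor([0.1, 2.0, -1.0, 0.7])
print(soft_sort_matrix(s, tau=1.0))   # soft and differentiable
print(soft_sort_matrix(s, tau=0.01))  # close to the hard permutation
```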