PEAR: Personalized Re-ranking with Contextualized Transformer for
Recommendation
- URL: http://arxiv.org/abs/2203.12267v1
- Date: Wed, 23 Mar 2022 08:29:46 GMT
- Title: PEAR: Personalized Re-ranking with Contextualized Transformer for
Recommendation
- Authors: Yi Li, Jieming Zhu, Weiwen Liu, Liangcai Su, Guohao Cai, Qi Zhang,
Ruiming Tang, Xi Xiao, Xiuqiang He
- Abstract summary: We present a personalized re-ranking model (dubbed PEAR) based on a contextualized transformer.
PEAR makes several major improvements over existing methods.
We also augment the training of PEAR with a list-level classification task to assess users' satisfaction with the whole ranking list.
- Score: 48.17295872384401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of recommender systems is to provide ordered item lists to users
that best match their interests. As a critical task in the recommendation
pipeline, re-ranking has received increasing attention in recent years. In
contrast to conventional ranking models that score each item individually,
re-ranking aims to explicitly model the mutual influences among items to
further refine the ordering of items given an initial ranking list. In this
paper, we present a personalized re-ranking model (dubbed PEAR) based on a
contextualized transformer. PEAR makes several major improvements over
existing methods. Specifically, PEAR not only captures feature-level and
item-level interactions, but also models item contexts from both the initial
ranking list and the historical clicked item list. In addition to item-level
ranking score prediction, we also augment the training of PEAR with a
list-level classification task to assess users' satisfaction with the whole
ranking list. Experimental results on both public and production datasets have
shown the superior effectiveness of PEAR compared to the previous re-ranking
models.
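Read as an architecture, the abstract describes a transformer that contextualizes each candidate against both the initial ranking list and the historical clicked items, with an item-level scoring head plus a list-level classification head. The PyTorch sketch below is not the authors' implementation: the dimensions, the single cross-attention block for history context, and the CLS-style token carrying the list-level prediction are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PEARSketch(nn.Module):
    """Contextualized-transformer re-ranker sketch: encodes the initial
    ranking list together with the user's clicked-item history and
    predicts item-level scores plus a list-level satisfaction logit."""

    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.list_encoder = nn.TransformerEncoder(layer, n_layers)
        # Candidates attend to historical clicks for personalization.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        self.item_head = nn.Linear(d_model, 1)  # item-level ranking score
        self.list_head = nn.Linear(d_model, 1)  # list-level satisfaction
        self.cls = nn.Parameter(torch.randn(1, 1, d_model))

    def forward(self, cand_emb, hist_emb):
        # cand_emb: (B, N, d) initial list; hist_emb: (B, H, d) clicks
        ctx, _ = self.cross_attn(cand_emb, hist_emb, hist_emb)
        x = torch.cat([self.cls.expand(cand_emb.size(0), -1, -1),
                       cand_emb + ctx], dim=1)
        x = self.list_encoder(x)                 # item-level interactions
        item_scores = self.item_head(x[:, 1:]).squeeze(-1)  # (B, N)
        list_logit = self.list_head(x[:, 0]).squeeze(-1)    # (B,)
        return item_scores, list_logit
```

Training would then combine a ranking loss over item_scores with a binary cross-entropy loss on list_logit, mirroring the item-level and list-level objectives described above.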
Related papers
- Optimizing E-commerce Search: Toward a Generalizable and Rank-Consistent Pre-Ranking Model [13.573766789458118]
In large e-commerce platforms, the pre-ranking phase is crucial for filtering out the bulk of products in advance for the downstream ranking module.
We propose a novel method, the Generalizable and RAnk-ConsistEnt Pre-Ranking Model (GRACE), which achieves: 1) ranking consistency, by introducing multiple binary classification tasks that predict whether a product falls within the top-k results estimated by the ranking model, which makes these objectives easy to add to common point-wise ranking models; and 2) generalizability, through contrastive representation learning for all products, pre-trained on a subset of ranking product embeddings (a minimal sketch of the consistency objective follows this list).
arXiv Detail & Related papers (2024-05-09T07:55:52Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimization of OWA objectives, enabling their use in integrated prediction and decision models (a toy differentiability example follows this list).
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Replace Scoring with Arrangement: A Contextual Set-to-Arrangement Framework for Learning-to-Rank [40.81502990315285]
Learning-to-rank is a core technique in the top-N recommendation task, where an ideal ranker would be a mapping from an item set to an arrangement.
Most existing solutions fall in the paradigm of probabilistic ranking principle (PRP), i.e., first score each item in the candidate set and then perform a sort operation to generate the top ranking list.
We propose Set-To-Arrangement Ranking (STARank), a new framework that directly generates permutations of the candidate items without individual scoring and sorting operations (a pointer-style sketch follows this list).
arXiv Detail & Related papers (2023-08-05T12:22:26Z)
- RankFormer: Listwise Learning-to-Rank Using Listwide Labels [2.9005223064604078]
We propose the RankFormer as an architecture that can jointly optimize a novel listwide assessment objective and a traditional listwise objective.
We conduct experiments in e-commerce on Amazon Search data and find the RankFormer to be superior to all baselines offline.
arXiv Detail & Related papers (2023-06-09T10:47:06Z)
- PIER: Permutation-Level Interest-Based End-to-End Re-ranking Framework in E-commerce [13.885695433738437]
Existing re-ranking methods directly take the initial ranking list as input, and generate the optimal permutation through a well-designed context-wise model.
However, evaluating all candidate permutations incurs unacceptable computational cost in practice.
This paper presents a novel end-to-end re-ranking framework named PIER to tackle the above challenges.
arXiv Detail & Related papers (2023-02-06T09:17:52Z)
- Recommendation Systems with Distribution-Free Reliability Guarantees [83.80644194980042]
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate (a simplified calibration sketch follows this list).
We evaluate our methods on the Yahoo! Learning to Rank and MS MARCO datasets.
arXiv Detail & Related papers (2022-07-04T17:49:25Z)
- Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback based Recommendation [59.183016033308014]
In this paper, we explore the unique characteristics of implicit feedback and propose the Set2setRank framework for recommendation.
Our proposed framework is model-agnostic and can be easily applied to most recommendation prediction approaches.
arXiv Detail & Related papers (2021-05-16T08:06:22Z)
- A Differentiable Ranking Metric Using Relaxed Sorting Operation for Top-K Recommender Systems [1.2617078020344619]
A recommender system generates personalized recommendations by computing the preference score of items, sorting the items according to the score, and filtering top-K items with high scores.
While sorting and ranking items are integral for this recommendation procedure, it is nontrivial to incorporate them in the process of end-to-end model training.
This creates an inconsistency between the learning objectives recommenders are trained on and the ranking metrics they are evaluated with.
We present DRM, which mitigates this inconsistency and improves recommendation performance by employing a differentiable relaxation of ranking metrics (a NeuralSort-style sketch follows this list).
arXiv Detail & Related papers (2020-08-30T10:57:33Z)
- Document Ranking with a Pretrained Sequence-to-Sequence Model [56.44269917346376]
We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words" (a scoring sketch follows this list).
Our approach significantly outperforms an encoder-only model in a data-poor regime.
arXiv Detail & Related papers (2020-03-14T22:29:50Z)
- SetRank: A Setwise Bayesian Approach for Collaborative Ranking from Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender systems.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present a theoretical analysis of SetRank showing that the excess risk bound can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
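For the GRACE entry above, the rank-consistency idea can be sketched as a set of auxiliary binary heads. Everything here is illustrative: the input dimension, the cutoffs in ks, and the labels topk_labels (assumed to be derived offline from the downstream ranking model's output) are hypothetical, and the paper's actual heads and losses may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRACEHeads(nn.Module):
    """Rank-consistency sketch: alongside the usual point-wise score,
    one binary classifier per cutoff k predicts whether the downstream
    ranker would place the product in its top-k results."""

    def __init__(self, d_in=32, ks=(10, 50, 100)):  # hypothetical cutoffs
        super().__init__()
        self.score_head = nn.Linear(d_in, 1)
        self.topk_heads = nn.ModuleList(nn.Linear(d_in, 1) for _ in ks)

    def loss(self, feat, click, topk_labels):
        # feat: (B, d_in); click: (B,); topk_labels: (B, len(ks)) in {0, 1},
        # assumed to be computed offline from the ranking model's output.
        main = F.binary_cross_entropy_with_logits(
            self.score_head(feat).squeeze(-1), click.float())
        aux = sum(
            F.binary_cross_entropy_with_logits(
                head(feat).squeeze(-1), topk_labels[:, i].float())
            for i, head in enumerate(self.topk_heads))
        return main + aux
```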
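For the fair-ranking entry, the paper's contribution is backpropagating through constrained optimization of OWA objectives; the toy example below shows only the simpler building block, that an OWA aggregation is itself subdifferentiable through torch.sort, which is what lets such objectives sit inside a training loop at all.

```python
import torch

def owa(utilities, weights):
    """Ordered Weighted Average: sort utilities ascending, then take a
    weighted sum. Decreasing weights emphasize the worst-off groups,
    which is the fairness flavor the paper optimizes."""
    sorted_u, _ = torch.sort(utilities, dim=-1)   # ascending order
    return (sorted_u * weights).sum(dim=-1)

# Toy usage with made-up per-group utilities from some ranking policy.
u = torch.tensor([0.2, 0.7, 0.5], requires_grad=True)
w = torch.tensor([0.5, 0.3, 0.2])                 # decreasing = fair
owa(u, w).backward()
print(u.grad)  # the largest weight (0.5) flows to the smallest utility
```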
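For the STARank entry, a pointer-style sketch of set-to-arrangement decoding: the next item is chosen conditioned on what has already been placed, so no per-item score-and-sort step occurs. The greedy GRU decoder below is an illustrative stand-in, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PointerArranger(nn.Module):
    """Decodes an arrangement item by item instead of scoring and sorting."""

    def __init__(self, d=32):
        super().__init__()
        self.gru = nn.GRUCell(d, d)               # tracks placed items
        self.start = nn.Parameter(torch.zeros(1, d))

    def forward(self, items):                     # items: (N, d) embeddings
        n = items.size(0)
        h = self.start                            # decoder state (1, d)
        remaining = torch.ones(n, dtype=torch.bool)
        order = []
        for _ in range(n):
            logits = items @ h.squeeze(0)         # pointer attention (N,)
            logits = logits.masked_fill(~remaining, float('-inf'))
            pick = int(logits.argmax())           # greedy; sample in training
            order.append(pick)
            remaining[pick] = False
            h = self.gru(items[pick].unsqueeze(0), h)  # condition on choice
        return order                              # a permutation of 0..N-1
```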
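For the distribution-free reliability entry, a deliberately simplified calibration sketch: it thresholds on the empirical false discovery proportion of held-out data, whereas the paper's procedure adds finite-sample corrections to make the false-discovery-rate guarantee rigorous.

```python
import numpy as np

def calibrate_threshold(cal_scores, cal_labels, alpha=0.1):
    """Return the lowest score threshold whose empirical false discovery
    proportion on calibration data stays at or below alpha; recommending
    items scoring above it then yields sets of mostly good items
    (empirically; the paper proves a finite-sample version)."""
    order = np.argsort(-cal_scores)               # best-scored first
    labels = cal_labels[order]
    fdp = np.cumsum(1 - labels) / np.arange(1, len(labels) + 1)
    valid = np.where(fdp <= alpha)[0]
    if len(valid) == 0:
        return np.inf                             # no threshold is safe
    return cal_scores[order][valid[-1]]

# Toy usage with made-up calibration scores and good/bad labels.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
labels = np.array([1, 1, 0, 1, 0])
print(calibrate_threshold(scores, labels, alpha=0.4))  # -> 0.5
```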
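For the DRM entry, one well-known differentiable relaxation of the sorting operator is NeuralSort, sketched below; the paper's exact relaxation may differ. The relaxed permutation matrix can then be plugged into ranking metrics such as DCG to make them trainable end to end.

```python
import torch

def neuralsort(s, tau=1.0):
    """NeuralSort-style relaxation: returns a row-stochastic matrix that
    approaches the permutation matrix sorting s in descending order as
    tau -> 0, so metrics built on top of it stay differentiable in s."""
    n = s.size(0)
    A = (s.unsqueeze(0) - s.unsqueeze(1)).abs()        # |s_j - s_k|, (n, n)
    i = torch.arange(1, n + 1, dtype=s.dtype)
    C = (n + 1 - 2 * i).unsqueeze(1) * s.unsqueeze(0)  # (n, n)
    return torch.softmax((C - A.sum(dim=1)) / tau, dim=-1)

s = torch.tensor([0.1, 2.0, 0.7], requires_grad=True)
P = neuralsort(s, tau=0.1)
soft_sorted = P @ s   # approximately s sorted descending, differentiable
```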
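For the sequence-to-sequence document ranking entry, the recipe is to read the probability of the target word "true" (versus "false") at the first decoding step. The sketch below uses the plain t5-base weights purely to show the mechanics; they are not fine-tuned for relevance, while publicly released monoT5-style checkpoints are trained exactly this way.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Relevance prompt in the paper's input format.
text = ("Query: what is re-ranking Document: Re-ranking refines an "
        "initial item list. Relevant:")
enc = tok(text, return_tensors="pt")
true_id = tok.encode("true")[0]    # first sentencepiece token of "true"
false_id = tok.encode("false")[0]

start = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    logits = model(**enc, decoder_input_ids=start).logits[0, 0]
score = torch.softmax(logits[[false_id, true_id]], dim=-1)[1]
print(float(score))                # P("true") used as the relevance score
```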