TPRM: A Topic-based Personalized Ranking Model for Web Search
- URL: http://arxiv.org/abs/2108.06014v1
- Date: Fri, 13 Aug 2021 01:16:55 GMT
- Title: TPRM: A Topic-based Personalized Ranking Model for Web Search
- Authors: Minghui Huang, Wei Peng and Dong Wang
- Abstract summary: We propose a topic-based personalized ranking model (TPRM) that integrates user topical profile with pretrained contextualized term representations to tailor the general document ranking list.
Experiments on a real-world dataset demonstrate that TPRM significantly outperforms state-of-the-art ad-hoc ranking models and personalized ranking models.
- Score: 9.032465976745305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ranking models have achieved promising results, but it remains challenging to
design personalized ranking systems to leverage user profiles and semantic
representations between queries and documents. In this paper, we propose a
topic-based personalized ranking model (TPRM) that integrates user topical
profile with pretrained contextualized term representations to tailor the
general document ranking list. Experiments on a real-world dataset
demonstrate that TPRM significantly outperforms state-of-the-art ad-hoc
ranking models and personalized ranking models.
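The abstract describes blending a user's topical profile with pretrained contextualized term representations. A minimal sketch of one such blend, assuming a hypothetical mixing weight `alpha` and topic vectors of matching length (the function and its parameters are illustrative, not the paper's exact formulation):

```python
import numpy as np

def personalized_score(sim_semantic, user_topics, doc_topics, alpha=0.5):
    """Blend a query-document semantic score with user-topic affinity.

    sim_semantic : relevance from contextualized term representations
    user_topics, doc_topics : topic distributions of equal length
    alpha : hypothetical mixing weight between the two signals
    """
    u = np.asarray(user_topics, dtype=float)
    d = np.asarray(doc_topics, dtype=float)
    # Cosine similarity between the user's topical profile and the document's topics.
    topical = float(u @ d / (np.linalg.norm(u) * np.linalg.norm(d)))
    return alpha * sim_semantic + (1.0 - alpha) * topical
```

Documents matching both the query semantics and the user's topical interests rise in the tailored ranking.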
Related papers
- Statistical Models of Top-$k$ Partial Orders [7.121002367542985]
We introduce and taxonomize approaches for jointly modeling distributions over top-$k$ partial orders and list lengths $k$.
Using data consisting of partial rankings from San Francisco school choice and San Francisco ranked choice elections, we evaluate how well the models predict observed data.
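One way to jointly model a top-$k$ partial order and its length, sketched here as length distribution times a truncated Plackett-Luce sequential choice (an illustrative factorization; the paper taxonomizes several such models, and these names are placeholders):

```python
import numpy as np

def topk_partial_order_prob(order, k_probs, worths):
    """Probability of a top-k partial order under P(list) = P(k) * P(ordering | k),
    with the ordering drawn from a Plackett-Luce model truncated after k picks."""
    k = len(order)
    p = k_probs[k]                        # length distribution P(k)
    remaining = np.array(worths, dtype=float)
    alive = np.ones(len(worths), dtype=bool)
    for item in order:                    # sequential-choice PL probabilities
        p *= remaining[item] / remaining[alive].sum()
        alive[item] = False
    return p
```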
arXiv Detail & Related papers (2024-06-22T17:04:24Z)
- SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking [56.93151679231602]
This research decomposes response style into presentation and composition styles.
We introduce Style Consistency-Aware Response Ranking (SCAR).
SCAR prioritizes instruction-response pairs in the training set based on their response stylistic consistency.
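The selection pattern the abstract describes, reduced to a sketch: rank pairs by a stylistic-consistency score and keep the top fraction for tuning (the scoring values and the cutoff fraction are placeholders, not SCAR's actual scorer):

```python
def select_by_style_consistency(pairs, consistency, fraction=0.5):
    """Rank instruction-response pairs by a stylistic-consistency score
    and keep the top fraction of the training set."""
    ranked = sorted(zip(pairs, consistency), key=lambda t: t[1], reverse=True)
    n = max(1, int(len(ranked) * fraction))
    return [p for p, _ in ranked[:n]]
```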
arXiv Detail & Related papers (2024-06-16T10:10:37Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
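For reference, the OWA objective itself: sort group utilities ascending and apply non-increasing weights, so the worst-off group dominates. The paper's contribution is backpropagating through constrained optimization of this objective; the plain (non-smooth) aggregation looks like:

```python
import numpy as np

def owa(utilities, weights):
    """Ordered Weighted Average: ascending-sorted utilities dotted with
    non-increasing weights that sum to one, emphasizing the worst-off."""
    u = np.sort(np.asarray(utilities, dtype=float))   # ascending
    w = np.asarray(weights, dtype=float)
    assert np.all(np.diff(w) <= 0) and np.isclose(w.sum(), 1.0)
    return float(u @ w)
```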
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Dirichlet-based Uncertainty Quantification for Personalized Federated Learning with Improved Posterior Networks [9.54563359677778]
This paper presents a new approach to federated learning that allows selecting a model from global and personalized ones.
It is achieved through a careful modeling of predictive uncertainties that helps to detect local and global in- and out-of-distribution data.
The comprehensive experimental evaluation on the popular real-world image datasets shows the superior performance of the model in the presence of out-of-distribution data.
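A sketch of how Dirichlet parameters support both prediction and model selection, assuming a simple evidence-based uncertainty proxy and a hypothetical pick-the-more-confident-model rule (the paper's actual selection criterion may differ):

```python
import numpy as np

def dirichlet_predict(alpha):
    """Expected class probabilities and an uncertainty proxy from
    Dirichlet parameters alpha, as a posterior network would output."""
    a = np.asarray(alpha, dtype=float)
    probs = a / a.sum()
    uncertainty = len(a) / a.sum()   # large when total evidence is small
    return probs, uncertainty

def select_model(alpha_global, alpha_personal):
    """Pick whichever model is more confident on this input."""
    _, u_g = dirichlet_predict(alpha_global)
    _, u_p = dirichlet_predict(alpha_personal)
    return "personalized" if u_p < u_g else "global"
```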
arXiv Detail & Related papers (2023-12-18T14:30:05Z)
- A Unified Statistical Learning Model for Rankings and Scores with Application to Grant Panel Review [1.240096657086732]
Rankings and scores are two common data types used by judges to express preferences and/or perceptions of quality in a collection of objects.
Numerous models exist to study data of each type separately, but no unified statistical model captures both data types simultaneously.
To close this gap, we propose the Mallows-Binomial model, which combines a Mallows $\phi$ ranking model with Binomial score models.
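The combination can be sketched as a joint log-likelihood: a Mallows $\phi$ term penalizing Kendall-tau distance from a consensus ranking, plus independent Binomial terms for integer scores (the Mallows normalizing constant is omitted, so this is unnormalized; parameter names are illustrative):

```python
import math

def kendall_tau(pi, pi0):
    """Number of pairwise disagreements between two rankings."""
    pos = {item: i for i, item in enumerate(pi0)}
    n = len(pi)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[pi[i]] > pos[pi[j]])

def mallows_binomial_loglik(pi, pi0, theta, scores, ps, M):
    """Unnormalized log-likelihood of a (ranking, scores) observation:
    Mallows phi on the ranking, independent Binomial(M, p_j) on scores."""
    ll = -theta * kendall_tau(pi, pi0)
    for s, p in zip(scores, ps):
        ll += math.log(math.comb(M, s)) + s * math.log(p) + (M - s) * math.log(1 - p)
    return ll
```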
arXiv Detail & Related papers (2022-01-07T16:56:52Z)
- Modeling Relevance Ranking under the Pre-training and Fine-tuning Paradigm [44.96049217770624]
We propose a novel ranking framework called Pre-Rank that takes both user's view and system's view into consideration.
To model the user's view of relevance, Pre-Rank pre-trains the initial query-document representations based on large-scale user activities data.
To model the system's view of relevance, Pre-Rank further fine-tunes the model on expert-labeled relevance data.
arXiv Detail & Related papers (2021-08-12T10:37:12Z)
- Modeling User Behaviour in Research Paper Recommendation System [8.980876474818153]
A user intention model is proposed based on deep sequential topic analysis.
The model predicts a user's intention in terms of the topic of interest.
The proposed approach introduces a new road map to model a user activity suitable for the design of a research paper recommendation system.
arXiv Detail & Related papers (2021-07-16T11:31:03Z)
- Incorporating Vision Bias into Click Models for Image-oriented Search Engine [51.192784793764176]
In this paper, we assume that vision bias exists in an image-oriented search engine as another crucial factor affecting the examination probability aside from position.
We use regression-based EM algorithm to predict the vision bias given the visual features extracted from candidate documents.
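Under the examination hypothesis, this amounts to factoring examination into a position term and a vision term, with the vision term regressed from visual features. A sketch assuming a logistic regression for the vision bias (the weights and feature layout are hypothetical; the paper fits them inside an EM loop):

```python
import numpy as np

def click_probability(relevance, position_bias, visual_features, w):
    """P(click) = P(examined) * P(relevant), where examination depends on
    both rank position and the snippet's visual salience."""
    x = np.asarray(visual_features, dtype=float)
    vision_bias = 1.0 / (1.0 + np.exp(-(x @ np.asarray(w, dtype=float))))
    return position_bias * vision_bias * relevance
```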
arXiv Detail & Related papers (2021-01-07T10:01:31Z)
- Overview of the TREC 2019 Fair Ranking Track [65.15263872493799]
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers.
This paper presents an overview of the track, including the task definition, descriptions of the data and the annotation process.
arXiv Detail & Related papers (2020-03-25T21:34:58Z)
- Document Ranking with a Pretrained Sequence-to-Sequence Model [56.44269917346376]
We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words".
Our approach significantly outperforms an encoder-only model in a data-poor regime.
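With relevance labels as target words (e.g. "true"/"false"), a document's score can be read off as a softmax over the model's first-step logits restricted to those two tokens. A framework-independent sketch of that scoring step:

```python
import numpy as np

def relevance_from_target_logits(logit_true, logit_false):
    """Relevance score from a seq-to-seq model's logits for the two
    target words, via a softmax restricted to those two tokens."""
    m = max(logit_true, logit_false)            # for numerical stability
    e_t = np.exp(logit_true - m)
    e_f = np.exp(logit_false - m)
    return float(e_t / (e_t + e_f))
```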
arXiv Detail & Related papers (2020-03-14T22:29:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.