Calibrating the Predictions for Top-N Recommendations
- URL: http://arxiv.org/abs/2408.11596v1
- Date: Wed, 21 Aug 2024 13:06:28 GMT
- Title: Calibrating the Predictions for Top-N Recommendations
- Authors: Masahiro Sato
- Abstract summary: We show that previous calibration methods result in miscalibrated predictions for the top-N items.
We propose a generic method to optimize calibration models focusing on the top-N items.
- Score: 3.176387928678296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Well-calibrated predictions of user preferences are essential for many applications. Since recommender systems typically select the top-N items for users, calibration for those top-N items, rather than for all items, is important. We show that previous calibration methods result in miscalibrated predictions for the top-N items, despite their excellent calibration performance when evaluated on all items. In this work, we address the miscalibration in the top-N recommended items. We first define evaluation metrics for this objective and then propose a generic method to optimize calibration models focusing on the top-N items. It groups the top-N items by their ranks and optimizes distinct calibration models for each group with rank-dependent training weights. We verify the effectiveness of the proposed method for both explicit and implicit feedback datasets, using diverse classes of recommender models.
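As a rough illustration of the grouping idea, here is a minimal Python sketch assuming Platt scaling (logistic regression) as the per-group calibration model. The group boundaries and the exponential rank weighting are illustrative assumptions; the abstract only states that each group gets its own calibration model trained with rank-dependent weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative rank groups for a top-50 list: ranks 1-3, 4-10, 11-50.
GROUPS = [(1, 3), (4, 10), (11, 50)]

def fit_grouped_calibrators(scores, labels, ranks, tau=5.0):
    """Fit one Platt-scaling calibrator per rank group.

    Every sample contributes to every group's model, but samples whose
    rank is far from a group's center are exponentially down-weighted
    (an assumed rank-dependent weighting scheme).
    """
    calibrators = []
    for lo, hi in GROUPS:
        center = 0.5 * (lo + hi)
        weights = np.exp(-np.abs(ranks - center) / tau)
        model = LogisticRegression()
        model.fit(scores.reshape(-1, 1), labels, sample_weight=weights)
        calibrators.append(((lo, hi), model))
    return calibrators

def calibrated_probability(calibrators, score, rank):
    """Calibrate one prediction with the model of its rank group."""
    for (lo, hi), model in calibrators:
        if lo <= rank <= hi:
            return model.predict_proba([[score]])[0, 1]
    # Fall back to the last group's model for ranks past the table.
    return calibrators[-1][1].predict_proba([[score]])[0, 1]
```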
Related papers
- Repeat-bias-aware Optimization of Beyond-accuracy Metrics for Next Basket Recommendation [54.5376993040561]
In next basket recommendation (NBR), a set of items is recommended to users based on their historical basket sequences.
Some state-of-the-art NBR methods are heavily biased to recommend repeat items so as to maximize utility.
We find that optimizing only diversity or item fairness, without considering repeat bias, may cause NBR algorithms to recommend more repeat items.
arXiv Detail & Related papers (2025-01-10T21:58:34Z)
- Preference Diffusion for Recommendation [50.8692409346126]
We propose PreferDiff, a tailored optimization objective for DM-based recommenders.
PreferDiff transforms BPR into a log-likelihood ranking objective to better capture user preferences.
It is the first personalized ranking loss designed specifically for DM-based recommenders.
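The shift from BPR to a log-likelihood ranking objective can be written in a generic form; the sketch below is the standard multi-negative generalization, not PreferDiff's exact diffusion-based objective.

```python
import numpy as np

def bpr_loss(pos_score, neg_score):
    # Classic BPR: negative log-probability that the positive
    # outranks a single sampled negative.
    return -np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_score))))

def log_likelihood_ranking_loss(pos_score, neg_scores):
    # Softmax log-likelihood that the positive ranks first among the
    # positive and all sampled negatives; reduces to BPR when there
    # is exactly one negative.
    logits = np.concatenate([[pos_score], neg_scores])
    return -(pos_score - np.log(np.sum(np.exp(logits))))
```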
arXiv Detail & Related papers (2024-10-17T01:02:04Z)
- Improved Estimation of Ranks for Learning Item Recommenders with Negative Sampling [4.316676800486521]
In recommender systems, the number of recommendable items has grown, making it costly to score every item during training.
To lower this cost, it has become common to sample negative items.
In this work, we demonstrate the benefits of correcting the bias introduced by negative sampling.
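One way to see the bias: an item's full-catalog rank is often estimated by scaling up the fraction of sampled negatives that outscore it. A naive plug-in version of this estimate is sketched below as background; the paper proposes improved estimators beyond estimates of this kind.

```python
import numpy as np

def naive_rank_estimate(pos_score, sampled_neg_scores, catalog_size):
    """Estimate an item's full-catalog rank from uniformly sampled
    negatives by scaling the fraction that outscore the positive.
    Such plug-in estimates are biased, which is the bias corrections
    aim to remove."""
    frac_above = np.mean(sampled_neg_scores > pos_score)
    return 1.0 + frac_above * (catalog_size - 1)
```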
arXiv Detail & Related papers (2024-10-08T21:09:55Z)
- Aligning GPTRec with Beyond-Accuracy Goals with Reinforcement Learning [67.71952251641545]
GPTRec generates recommendations item by item, as an alternative to the Top-K model.
Our experiments on two datasets show that GPTRec's Next-K generation approach offers a better tradeoff between accuracy and secondary metrics than classic greedy re-ranking techniques.
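The Next-K idea contrasts with Top-K scoring: items are chosen one at a time, so each pick can condition on what is already in the list. The schematic sketch below assumes a score_fn interface for illustration; it is not GPTRec's actual decoding API.

```python
def next_k_generate(candidates, k, score_fn):
    """Pick k items sequentially; score_fn(item, selected) can trade
    accuracy against secondary goals (e.g., diversity) given the
    items already selected."""
    selected = []
    pool = set(candidates)
    for _ in range(k):
        best = max(pool, key=lambda item: score_fn(item, selected))
        selected.append(best)
        pool.remove(best)
    return selected
```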
arXiv Detail & Related papers (2024-03-07T19:47:48Z)
- Recommendation Systems with Distribution-Free Reliability Guarantees [83.80644194980042]
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MS MARCO datasets.
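The flavor of such a guarantee can be sketched with a simplified calibration step that picks a score threshold from held-out data. This empirical version is illustrative only; the actual finite-sample procedure adds a concentration correction on top of the empirical rate.

```python
import numpy as np

def pick_threshold(cal_scores, cal_labels, alpha=0.10):
    """Return the lowest score threshold whose empirical false
    discovery proportion (share of non-relevant items among those
    selected) on the calibration set is at most alpha."""
    for lam in np.unique(cal_scores):  # ascending candidate thresholds
        selected = cal_scores >= lam
        fdp = np.mean(cal_labels[selected] == 0)
        if fdp <= alpha:
            return lam
    return np.inf  # no threshold meets the target: select nothing
```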
arXiv Detail & Related papers (2022-07-04T17:49:25Z)
- PEAR: Personalized Re-ranking with Contextualized Transformer for Recommendation [48.17295872384401]
We present a personalized re-ranking model (dubbed PEAR) based on a contextualized transformer.
PEAR makes several major improvements over existing methods.
We also augment the training of PEAR with a list-level classification task that assesses users' satisfaction with the whole ranking list.
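The list-level auxiliary task can be sketched as an extra binary cross-entropy term on a whole-list satisfaction logit, added to the per-item loss. The combination weight and interfaces here are assumptions, not PEAR's exact training setup.

```python
import numpy as np

def bce(logit, label):
    """Binary cross-entropy on a single logit."""
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def list_augmented_loss(item_logits, item_labels,
                        list_logit, list_label, lam=0.5):
    """Per-item loss plus an auxiliary list-level classification term
    that predicts satisfaction with the whole ranked list."""
    item_term = np.mean([bce(z, y) for z, y in zip(item_logits, item_labels)])
    return item_term + lam * bce(list_logit, list_label)
```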
arXiv Detail & Related papers (2022-03-23T08:29:46Z)
- Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback based Recommendation [59.183016033308014]
In this paper, we explore the unique characteristics of implicit feedback and propose the Set2setRank framework for recommendation.
Our proposed framework is model-agnostic and can be easily applied to most recommendation prediction approaches.
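One plausible reading of set-to-set ranking, sketched generically: compare a sampled set of observed (positive) items against a sampled set of unobserved (negative) items as wholes. This margin form is an illustration, not the paper's exact loss.

```python
def set_to_set_margin(pos_scores, neg_scores, margin=1.0):
    """Encourage the weakest positive to outscore the strongest
    sampled negative by a margin, comparing the two sets as wholes
    rather than item by item."""
    return max(0.0, margin - (min(pos_scores) - max(neg_scores)))
```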
arXiv Detail & Related papers (2021-05-16T08:06:22Z)
- Dynamic-K Recommendation with Personalized Decision Boundary [41.70842736417849]
We formulate dynamic-K recommendation as a joint learning problem with both ranking and classification objectives.
We extend two state-of-the-art ranking-based recommendation methods, i.e., BPRMF and HRM, to the corresponding dynamic-K versions.
Our experimental results on two datasets show that the dynamic-K models are more effective than the original fixed-N recommendation methods.
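The personalized decision boundary amounts to replacing a fixed cutoff with a per-user threshold on scores. Schematically (the threshold is learned jointly with ranking in the paper; it is a fixed input here for illustration):

```python
import numpy as np

def dynamic_k_select(scores, user_threshold):
    """Return all items whose score clears the user's decision
    boundary, ordered by score, so different users receive
    recommendation lists of different lengths."""
    order = np.argsort(-scores)
    return [int(i) for i in order if scores[i] >= user_threshold]
```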
arXiv Detail & Related papers (2020-12-25T13:02:57Z)
- A Differentiable Ranking Metric Using Relaxed Sorting Operation for Top-K Recommender Systems [1.2617078020344619]
A recommender system generates personalized recommendations by computing preference scores for items, sorting the items by score, and keeping the top-K items with the highest scores.
While sorting and ranking items are integral for this recommendation procedure, it is nontrivial to incorporate them in the process of end-to-end model training.
This creates an inconsistency between existing learning objectives and the ranking metrics of recommenders.
We present DRM, which mitigates this inconsistency and improves recommendation performance by employing a differentiable relaxation of ranking metrics.
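A common way to make ranks differentiable is to replace hard pairwise comparisons with sigmoids; a DCG built on such soft ranks is then trainable end to end. The relaxation below is one standard choice, not necessarily DRM's exact relaxed sorting operator.

```python
import numpy as np

def soft_ranks(scores, temp=0.1):
    """Differentiable rank surrogate:
    rank_i ~ 1 + sum_{j != i} sigmoid((s_j - s_i) / temp).
    As temp -> 0 this approaches the true ranks (1 = highest score)."""
    diff = (scores[None, :] - scores[:, None]) / temp
    sig = 1.0 / (1.0 + np.exp(-diff))
    return 1.0 + sig.sum(axis=1) - 0.5  # drop the self-comparison term

def soft_dcg(scores, relevance, temp=0.1):
    """DCG computed with soft ranks, usable directly as a training
    objective."""
    r = soft_ranks(scores, temp)
    return np.sum((2.0 ** relevance - 1.0) / np.log2(1.0 + r))
```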
arXiv Detail & Related papers (2020-08-30T10:57:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.