Recommendation Systems with Distribution-Free Reliability Guarantees
- URL: http://arxiv.org/abs/2207.01609v1
- Date: Mon, 4 Jul 2022 17:49:25 GMT
- Title: Recommendation Systems with Distribution-Free Reliability Guarantees
- Authors: Anastasios N. Angelopoulos, Karl Krauth, Stephen Bates, Yixin Wang,
Michael I. Jordan
- Abstract summary: We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MSMarco datasets.
- Score: 83.80644194980042
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When building recommendation systems, we seek to output a helpful set of
items to the user. Under the hood, a ranking model predicts which of two
candidate items is better, and we must distill these pairwise comparisons into
the user-facing output. However, a learned ranking model is never perfect, so
taking its predictions at face value gives no guarantee that the user-facing
output is reliable. Building from a pre-trained ranking model, we show how to
return a set of items that is rigorously guaranteed to contain mostly good
items. Our procedure endows any ranking model with rigorous finite-sample
control of the false discovery rate (FDR), regardless of the (unknown) data
distribution. Moreover, our calibration algorithm enables the easy and
principled integration of multiple objectives in recommender systems. As an
example, we show how to optimize for recommendation diversity subject to a
user-specified level of FDR control, circumventing the need to specify ad hoc
weights of a diversity loss against an accuracy loss. Throughout, we focus on
the problem of learning to rank a set of possible recommendations, evaluating
our methods on the Yahoo! Learning to Rank and MSMarco datasets.
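The core recipe can be made concrete with a small sketch: calibrate a score threshold on held-out data so that the false discovery proportion of the returned set is bounded with high probability, with no assumption on the data distribution. The pointwise scoring, Hoeffding bound, and fixed-sequence threshold search below are illustrative assumptions for this sketch, not the authors' exact procedure.

```python
import numpy as np

def calibrate_threshold(scores, labels, alpha=0.2, delta=0.05):
    """Choose the most permissive score threshold whose FDR upper
    confidence bound stays below alpha, scanning from strict to
    permissive and stopping at the first violation (fixed-sequence testing)."""
    candidates = np.quantile(scores, np.linspace(0.9, 0.0, 91))
    best = np.inf  # default: recommend nothing
    for lam in candidates:
        selected = scores >= lam
        if selected.sum() == 0:
            continue
        fdp_hat = 1.0 - labels[selected].mean()  # empirical false discovery proportion
        # Hoeffding-style upper confidence bound at confidence 1 - delta
        ucb = fdp_hat + np.sqrt(np.log(1.0 / delta) / (2.0 * selected.sum()))
        if ucb > alpha:
            break
        best = lam
    return best

# Toy usage: higher-scored items are more likely to be truly good.
rng = np.random.default_rng(0)
scores = rng.uniform(size=2000)
labels = (rng.uniform(size=2000) < scores).astype(int)
print("recommend items with score >=", calibrate_threshold(scores, labels))
```

The key property is that the guarantee comes from a concentration bound evaluated on held-out data, not from any assumption about how the scores are distributed.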
Related papers
- Can Large Language Models Understand Preferences in Personalized Recommendation? [32.2250928311146]
We introduce PerRecBench, a benchmark that disassociates evaluation from user rating bias and item quality.
We find that the LLM-based recommendation techniques that are generally good at rating prediction fail to identify users' favored and disfavored items when the user rating bias and item quality are eliminated.
Our findings reveal the superiority of pairwise and listwise ranking approaches over pointwise ranking, PerRecBench's low correlation with traditional regression metrics, the importance of user profiles, and the role of pretraining data distributions.
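A hypothetical sketch of the kind of bias-corrected evaluation this describes: center each user's ratings by their own mean, so a generous and a harsh rater are judged on the same footing, then test whether model scores separate favored from disfavored items. The function names and the centering rule are illustrative, not PerRecBench's API.

```python
import numpy as np

def favored_split(user_ratings):
    # Center by the user's own mean rating; this removes per-user rating bias
    # when deciding which items the user actually favors or disfavors.
    mu = np.mean(list(user_ratings.values()))
    favored = {i for i, r in user_ratings.items() if r > mu}
    disfavored = {i for i, r in user_ratings.items() if r < mu}
    return favored, disfavored

def separation_accuracy(user_ratings, model_scores):
    # Fraction of (favored, disfavored) pairs the model orders correctly.
    fav, dis = favored_split(user_ratings)
    pairs = [(f, d) for f in fav for d in dis]
    if not pairs:
        return float("nan")
    return float(np.mean([model_scores[f] > model_scores[d] for f, d in pairs]))

print(separation_accuracy({"x": 5, "y": 3, "z": 4},
                          {"x": 0.9, "y": 0.2, "z": 0.5}))
```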
arXiv Detail & Related papers (2025-01-23T05:24:18Z)
- Preference Diffusion for Recommendation [50.8692409346126]
We propose PreferDiff, a tailored optimization objective for DM-based recommenders.
PreferDiff transforms BPR into a log-likelihood ranking objective to better capture user preferences.
It is the first personalized ranking loss designed specifically for DM-based recommenders.
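The BPR-to-log-likelihood transformation can be illustrated generically: BPR compares one positive against a single sampled negative, while a log-likelihood ranking objective scores the positive against a whole set of negatives at once. This is a generic formulation for illustration, not PreferDiff's diffusion-specific loss.

```python
import torch
import torch.nn.functional as F

def bpr_loss(pos_score, neg_score):
    # Classic BPR: -log sigmoid(s_pos - s_neg), one negative per positive.
    return -F.logsigmoid(pos_score - neg_score).mean()

def log_likelihood_ranking_loss(pos_score, neg_scores):
    # Softmax log-likelihood: the positive (index 0) should win
    # against the whole set of negatives at once.
    logits = torch.cat([pos_score.unsqueeze(1), neg_scores], dim=1)
    targets = torch.zeros(len(pos_score), dtype=torch.long)
    return F.cross_entropy(logits, targets)

pos, negs = torch.randn(8), torch.randn(8, 16)
print(bpr_loss(pos, negs[:, 0]).item(), log_likelihood_ranking_loss(pos, negs).item())
```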
arXiv Detail & Related papers (2024-10-17T01:02:04Z)
- Aligning GPTRec with Beyond-Accuracy Goals with Reinforcement Learning [67.71952251641545]
GPTRec is an item-by-item generative alternative to the standard Top-K recommendation approach.
Our experiments on two datasets show that GPTRec's Next-K generation approach offers a better tradeoff between accuracy and secondary metrics than classic greedy re-ranking techniques.
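A schematic contrast between the two regimes: Top-K scores items in one shot, while Next-K picks items one at a time so each step can see the list built so far. The hand-written score and diversity penalty below stand in for GPTRec's learned, RL-tuned generative model.

```python
def top_k(score, items, k):
    # One-shot Top-K: sort by score, cut at k; blind to list composition.
    return sorted(items, key=score, reverse=True)[:k]

def next_k(score, items, k, penalty):
    # Item-by-item Next-K: each pick conditions on the items chosen so far,
    # so secondary objectives such as diversity shape every step.
    chosen, pool = [], set(items)
    for _ in range(k):
        best = max(pool, key=lambda i: score(i) - penalty(i, chosen))
        chosen.append(best)
        pool.remove(best)
    return chosen

items = {"a": ("rock", 0.9), "b": ("rock", 0.8), "c": ("jazz", 0.7), "d": ("pop", 0.6)}
score = lambda i: items[i][1]
penalty = lambda i, chosen: 0.3 * sum(items[c][0] == items[i][0] for c in chosen)
print(top_k(score, items, 3))            # ['a', 'b', 'c']: two rock tracks
print(next_k(score, items, 3, penalty))  # ['a', 'c', 'd']: genre-diverse list
```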
arXiv Detail & Related papers (2024-03-07T19:47:48Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
Since the chance of mislabeling reflects the potential of a user-item pair, AUR makes recommendations according to the estimated uncertainty.
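A minimal sketch of the idea as summarized: pair a standard recommender score with a learned aleatoric-uncertainty head and let the estimated uncertainty inform ranking. The architecture, the ranking rule, and the weight beta are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class UncertaintyAwareRecommender(nn.Module):
    """A normal matrix-factorization scorer plus an aleatoric-uncertainty head."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        self.uncertainty = nn.Sequential(
            nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus())

    def forward(self, u, i):
        eu, ei = self.user(u), self.item(i)
        score = (eu * ei).sum(-1)  # the usual recommender score
        sigma = self.uncertainty(torch.cat([eu, ei], dim=-1)).squeeze(-1)
        return score, sigma

# Rank 500 candidates for one user, letting uncertain pairs get a boost.
model = UncertaintyAwareRecommender(100, 500)
u = torch.zeros(500, dtype=torch.long)  # the same user against every item
score, sigma = model(u, torch.arange(500))
print(torch.topk(score + 0.5 * sigma, k=10).indices)
```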
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Introducing a Framework and a Decision Protocol to Calibrate Recommender Systems [0.0]
This paper proposes an approach to create recommendation lists with a calibrated balance of genres.
The main claim is that calibration can contribute positively to generating fairer recommendations.
We propose a conceptual framework and a decision protocol to generate more than one thousand combinations of calibrated systems.
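Genre calibration of this sort is commonly formalized as keeping the genre distribution of the recommended list close to the user's historical genre distribution; the greedy score-versus-KL trade-off below is that standard recipe, with weights and helper names assumed rather than taken from the paper's protocol.

```python
import math
from collections import Counter

def genre_dist(items, genres):
    counts = Counter(genres[i] for i in items)
    return {g: c / len(items) for g, c in counts.items()}

def kl(p, q, eps=1e-6):
    # KL(p || q), smoothed so genres missing from q stay finite.
    return sum(pg * math.log(pg / (q.get(g, 0.0) + eps)) for g, pg in p.items())

def calibrated_rerank(candidates, scores, history, genres, k, lam=0.5):
    target = genre_dist(history, genres)  # the user's historical genre mix
    chosen, pool = [], set(candidates)
    for _ in range(k):
        # Trade accuracy (score) against calibration (KL to the target mix).
        best = max(pool, key=lambda i:
                   scores[i] - lam * kl(target, genre_dist(chosen + [i], genres)))
        chosen.append(best)
        pool.remove(best)
    return chosen

genres = {"a": "rock", "b": "rock", "c": "jazz", "d": "jazz"}
scores = {"a": 0.9, "b": 0.8, "c": 0.5, "d": 0.4}
# Picks ['a', 'c'], matching the 50/50 rock-jazz history, rather than
# the two highest-scoring rock tracks ['a', 'b'].
print(calibrated_rerank(["a", "b", "c", "d"], scores, ["a", "c"], genres, k=2))
```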
arXiv Detail & Related papers (2022-04-07T19:30:55Z)
- Unbiased Pairwise Learning to Rank in Recommender Systems [4.058828240864671]
Unbiased learning-to-rank algorithms are appealing candidates for learning from biased implicit feedback and have already been applied in many applications with single categorical labels.
We propose a novel unbiased LTR algorithm that models position bias in a pairwise fashion.
Experiments on public benchmark datasets and internal live traffic show superior results for the proposed method on both categorical and continuous labels.
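The general mechanism behind pairwise debiasing can be sketched as inverse-propensity weighting of click-derived pairs: each (clicked, unclicked) pair is reweighted by the examination propensities of both positions so that position bias cancels in expectation. The 1/rank examination model below is a toy assumption, not the paper's learned bias model.

```python
import torch
import torch.nn.functional as F

def propensity(rank):
    # Toy examination model: chance a user examines a slot decays as 1/rank.
    return 1.0 / (rank + 1.0)

def unbiased_pairwise_loss(s_pos, s_neg, rank_pos, rank_neg):
    # Upweight pairs observed at poorly-examined positions, so deeply
    # ranked items still contribute their fair share to the loss.
    w = 1.0 / (propensity(rank_pos) * propensity(rank_neg))
    return (w * -F.logsigmoid(s_pos - s_neg)).mean()

s_pos, s_neg = torch.randn(4), torch.randn(4)
rank_pos = torch.tensor([0.0, 3.0, 1.0, 7.0])
rank_neg = torch.tensor([2.0, 0.0, 5.0, 1.0])
print(unbiased_pairwise_loss(s_pos, s_neg, rank_pos, rank_neg).item())
```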
arXiv Detail & Related papers (2021-11-25T06:04:59Z)
- A Differentiable Ranking Metric Using Relaxed Sorting Operation for Top-K Recommender Systems [1.2617078020344619]
A recommender system generates personalized recommendations by computing the preference score of items, sorting the items according to the score, and filtering top-K items with high scores.
While sorting and ranking items are integral for this recommendation procedure, it is nontrivial to incorporate them in the process of end-to-end model training.
This creates an inconsistency between existing learning objectives and the ranking metrics of recommenders.
We present DRM, which mitigates this inconsistency and improves recommendation performance by employing a differentiable relaxation of ranking metrics.
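One standard way to relax sorting, shown here as a stand-in for DRM's specific operator: replace the hard rank (a step function) with a temperature-controlled soft rank, so that a metric like DCG becomes differentiable and can be optimized directly.

```python
import torch

def soft_rank(scores, tau=0.1):
    # rank_i ~ 1 + sum_{j != i} sigmoid((s_j - s_i) / tau); the +0.5
    # below cancels the sigmoid(0) = 0.5 self-comparison term.
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)  # diff[i, j] = s_j - s_i
    return 0.5 + torch.sigmoid(diff / tau).sum(dim=1)

def soft_dcg(scores, relevance, tau=0.1):
    # DCG with soft ranks in place of hard ranks: fully differentiable.
    return (relevance / torch.log2(soft_rank(scores, tau) + 1.0)).sum()

scores = torch.tensor([2.0, 0.5, 1.0], requires_grad=True)
relevance = torch.tensor([1.0, 0.0, 1.0])
loss = -soft_dcg(scores, relevance)  # train by maximizing the relaxed metric
loss.backward()
print(soft_rank(scores.detach()), scores.grad)
```

As tau approaches zero, the sigmoid approaches a step function and the soft ranks recover the exact hard ranks.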
arXiv Detail & Related papers (2020-08-30T10:57:33Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- SetRank: A Setwise Bayesian Approach for Collaborative Ranking from Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender systems.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present a theoretical analysis of SetRank showing that the bound on the excess risk can be proportional to $\sqrt{M/N}$.
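A hypothetical stand-in for the setwise objective: each positively-interacted item should beat an entire sampled set of unobserved items at once, rather than one pairwise comparison at a time, and a Gaussian prior (an L2 penalty) turns the maximum-likelihood problem into posterior maximization. The softmax comparison model below is illustrative, not SetRank's exact formulation.

```python
import torch
import torch.nn.functional as F

def setwise_map_loss(pos_score, unobserved_scores, params, weight_decay=1e-4):
    # Setwise likelihood: the positive (index 0) beats the whole sampled
    # set of unobserved items at once, under a softmax comparison model.
    logits = torch.cat([pos_score.unsqueeze(1), unobserved_scores], dim=1)
    nll = -F.log_softmax(logits, dim=1)[:, 0].mean()
    # Gaussian prior on parameters -> L2 penalty; maximum likelihood
    # becomes maximum a posteriori, matching "maximizing the posterior".
    prior = weight_decay * sum((p ** 2).sum() for p in params)
    return nll + prior

emb = torch.randn(10, 8, requires_grad=True)
pos, unobs = torch.randn(4), torch.randn(4, 20)
print(setwise_map_loss(pos, unobs, [emb]).item())
```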
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.