Preference Diffusion for Recommendation
- URL: http://arxiv.org/abs/2410.13117v1
- Date: Thu, 17 Oct 2024 01:02:04 GMT
- Title: Preference Diffusion for Recommendation
- Authors: Shuo Liu, An Zhang, Guoqing Hu, Hong Qian, Tat-Seng Chua
- Abstract summary: We propose PreferDiff, a tailored optimization objective for DM-based recommenders.
PreferDiff transforms BPR into a log-likelihood ranking objective to better capture user preferences.
It is the first personalized ranking loss designed specifically for DM-based recommenders.
- Score: 50.8692409346126
- License:
- Abstract: Recommender systems predict personalized item rankings based on user preference distributions derived from historical behavior data. Recently, diffusion models (DMs) have gained attention in recommendation for their ability to model complex distributions, yet current DM-based recommenders often rely on traditional objectives such as mean squared error (MSE) or on standard recommendation objectives, which are either not optimized for personalized ranking or fail to fully leverage the generative potential of DMs. To address this, we propose PreferDiff, a tailored optimization objective for DM-based recommenders. PreferDiff transforms BPR into a log-likelihood ranking objective and integrates multiple negative samples to better capture user preferences. Specifically, we employ variational inference to handle the intractability by minimizing the variational upper bound, and we replace MSE with cosine error to improve alignment with recommendation tasks. Finally, we balance the learning of generation and preference to enhance the training stability of DMs. PreferDiff offers three key benefits: it is the first personalized ranking loss designed specifically for DM-based recommenders; it improves ranking and speeds convergence by addressing hard negatives; and, as we prove, it is theoretically connected to Direct Preference Optimization, which indicates its potential to align user preferences in DM-based recommenders via generative modeling. Extensive experiments across three benchmarks validate its superior recommendation performance and commendable general sequential recommendation capabilities. Our code is available at \url{https://github.com/lswhim/PreferDiff}.
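To make the objective described above concrete, here is a minimal PyTorch sketch, based only on this abstract, of a PreferDiff-style loss: a cosine reconstruction error on the positive item (in place of MSE) combined with a BPR-style log-sigmoid ranking term over several negatives, weighted by a balance coefficient. The function names, the `lambda_gen` weight, and the mean-pooling of negatives are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def cosine_error(x_hat, x):
    # 1 - cosine similarity, used here in place of the usual MSE reconstruction loss.
    return 1.0 - F.cosine_similarity(x_hat, x, dim=-1)

def preferdiff_style_loss(pos_pred, pos_target, neg_preds, neg_targets, lambda_gen=0.5):
    """Hypothetical PreferDiff-style objective (sketch, not the authors' code).

    pos_pred / pos_target: denoised and clean embeddings of the positive item, shape (B, D)
    neg_preds / neg_targets: the same quantities for N sampled negatives, shape (B, N, D)
    lambda_gen: balances the generation (reconstruction) and preference (ranking) terms
    """
    # Generation term: reconstruct the positive item's embedding.
    gen_loss = cosine_error(pos_pred, pos_target)               # (B,)

    # Preference term: the positive should be reconstructed better than every negative,
    # expressed as a BPR-style log-sigmoid over the error gap, averaged over negatives.
    pos_err = gen_loss.unsqueeze(1)                             # (B, 1)
    neg_err = cosine_error(neg_preds, neg_targets)              # (B, N)
    rank_loss = -F.logsigmoid(neg_err - pos_err).mean(dim=1)    # (B,)

    return (lambda_gen * gen_loss + (1.0 - lambda_gen) * rank_loss).mean()
```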
Related papers
- Ordinal Preference Optimization: Aligning Human Preferences via NDCG [28.745322441961438]
We develop an end-to-end preference optimization algorithm by approximating NDCG with a differentiable surrogate loss.
OPO outperforms existing pairwise and listwise approaches on evaluation sets and general benchmarks like AlpacaEval.
arXiv Detail & Related papers (2024-10-06T03:49:28Z) - Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization [75.1240295759264]
We propose an effective framework for Bridging and Modeling Correlations in pairwise data, named BMC.
We increase the consistency and informativeness of the pairwise preference signals through targeted modifications.
We identify that DPO alone is insufficient to model these correlations and capture nuanced variations.
arXiv Detail & Related papers (2024-08-14T11:29:47Z) - On Softmax Direct Preference Optimization for Recommendation [50.896117978746]
We propose Softmax-DPO (S-DPO) to instill ranking information into the LM to help LM-based recommenders distinguish preferred items from negatives.
Specifically, we incorporate multiple negatives in user preference data and devise an alternative version of the DPO loss tailored for LM-based recommenders (a generic sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-06-13T15:16:11Z) - Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization [105.3612692153615]
A common technique for aligning large language models (LLMs) relies on acquiring human preferences.
We propose a new axis that is based on eliciting preferences jointly over the instruction-response pairs.
We find that joint preferences over instruction and response pairs can significantly enhance the alignment of LLMs.
arXiv Detail & Related papers (2024-03-31T02:05:40Z) - Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z) - Recommendation Systems with Distribution-Free Reliability Guarantees [83.80644194980042]
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MSMarco datasets.
arXiv Detail & Related papers (2022-07-04T17:49:25Z) - A Differentiable Ranking Metric Using Relaxed Sorting Operation for Top-K Recommender Systems [1.2617078020344619]
A recommender system generates personalized recommendations by computing the preference score of items, sorting the items according to the score, and filtering top-K items with high scores.
While sorting and ranking items are integral to this recommendation procedure, it is nontrivial to incorporate them into end-to-end model training.
This creates an inconsistency between existing learning objectives and the ranking metrics of recommenders.
We present DRM, which mitigates this inconsistency and improves recommendation performance by employing a differentiable relaxation of ranking metrics (a generic sketch of such a relaxation appears after this list).
arXiv Detail & Related papers (2020-08-30T10:57:33Z)
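The Softmax-DPO entry above describes folding multiple negatives into a DPO-style loss. The following is a minimal sketch of one natural way to do that, pooling the negatives with a log-sum-exp before the usual log-sigmoid preference term; the function name, tensor shapes, and pooling choice are assumptions for illustration, not the S-DPO paper's implementation.

```python
import torch
import torch.nn.functional as F

def softmax_dpo_style_loss(pos_logratio, neg_logratios, beta=1.0):
    """Sketch of a DPO-style loss pooled over multiple negatives (not the paper's code).

    pos_logratio:  log pi_theta(i_pos | x) - log pi_ref(i_pos | x), shape (B,)
    neg_logratios: the same log-ratio for N sampled negatives, shape (B, N)
    beta: inverse-temperature coefficient as in standard DPO
    """
    # Pool the negatives with a log-sum-exp so that hard negatives dominate the gradient,
    # then apply the usual log-sigmoid preference term against the positive.
    pooled_neg = torch.logsumexp(beta * (neg_logratios - pos_logratio.unsqueeze(1)), dim=1)  # (B,)
    return -F.logsigmoid(-pooled_neg).mean()
```

With a single negative this reduces to the standard DPO term -log sigmoid(beta * (r_pos - r_neg)).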
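Similarly, the DRM entry describes replacing the non-differentiable sort inside ranking metrics with a smooth relaxation. Below is a generic illustration in which soft ranks computed from pairwise sigmoids feed a smoothed DCG that can be maximized by gradient descent; this is a textbook-style relaxation, not the paper's specific relaxed sorting operator, and all names and the temperature parameter are assumptions.

```python
import torch

def soft_ranks(scores, temperature=1.0):
    """Differentiable approximation of 1-based ranks via pairwise sigmoids (illustrative only)."""
    # rank_i ~= 0.5 + sum_j sigmoid((s_j - s_i) / T); the self term contributes sigmoid(0) = 0.5,
    # so the best-scoring item gets a soft rank close to 1 and the worst close to the list length.
    diff = scores.unsqueeze(1) - scores.unsqueeze(2)        # (B, I, I): entry [b, i, j] = s_j - s_i
    return 0.5 + torch.sigmoid(diff / temperature).sum(dim=2)

def soft_dcg_loss(scores, relevance, temperature=1.0):
    """Negative smoothed DCG over a candidate list, usable as a differentiable ranking objective."""
    ranks = soft_ranks(scores, temperature)                  # (B, I), soft ranks in [~1, I]
    dcg = (relevance / torch.log2(ranks + 1.0)).sum(dim=1)   # (B,)
    return -dcg.mean()
```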
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.