Contrastive Learning for Debiased Candidate Generation in Large-Scale
Recommender Systems
- URL: http://arxiv.org/abs/2005.12964v9
- Date: Fri, 4 Jun 2021 16:34:46 GMT
- Title: Contrastive Learning for Debiased Candidate Generation in Large-Scale
Recommender Systems
- Authors: Chang Zhou, Jianxin Ma, Jianwei Zhang, Jingren Zhou, Hongxia Yang
- Abstract summary: We show that a popular choice of contrastive loss is equivalent to reducing the exposure bias via inverse propensity weighting.
We further improve upon CLRec and propose Multi-CLRec for accurate multi-intention-aware bias reduction.
Our methods have been successfully deployed in Taobao, where at least four months of online A/B tests and offline analyses demonstrate their substantial improvements.
- Score: 84.3996727203154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep candidate generation (DCG) that narrows down the collection of relevant
items from billions to hundreds via representation learning has become
prevalent in industrial recommender systems. Standard approaches approximate
maximum likelihood estimation (MLE) through sampling for better scalability and
address the problem of DCG in a way similar to language modeling. However, live
recommender systems face severe exposure bias and have a vocabulary several
orders of magnitude larger than that of natural language, implying that MLE
will preserve and even exacerbate the exposure bias in the long run in order to
faithfully fit the observed samples. In this paper, we theoretically prove that
a popular choice of contrastive loss is equivalent to reducing the exposure
bias via inverse propensity weighting, which provides a new perspective for
understanding the effectiveness of contrastive learning. Based on the
theoretical discovery, we design CLRec, a contrastive learning method to
improve DCG in terms of fairness, effectiveness and efficiency in recommender
systems with an extremely large candidate size. We further improve upon CLRec
and propose Multi-CLRec for accurate multi-intention-aware bias reduction. Our
methods have been successfully deployed in Taobao, where at least four months
of online A/B tests and offline analyses demonstrate their substantial
improvements, including a dramatic reduction in the Matthew effect.
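To make the analyzed loss concrete, here is a minimal sketch of a sampled-softmax / contrastive loss with in-batch negatives, written in plain NumPy. It is an illustration under our own assumptions, not the paper's CLRec implementation (function and variable names are hypothetical), but it shows the mechanism the equivalence result builds on: negatives come from the exposed data itself, so frequently exposed items appear more often in the softmax denominator, which is what the inverse-propensity-weighting interpretation formalizes.

```python
import numpy as np

def in_batch_contrastive_loss(user_emb, item_emb, temperature=0.1):
    """Sampled-softmax / InfoNCE loss with in-batch negatives (illustrative).

    Each user's clicked item is the positive; the other items in the batch
    serve as negatives.  Because those negatives are drawn from logged
    (exposed) data, popular items are penalized more often in the
    denominator -- the behavior the paper relates to inverse propensity
    weighting.
    """
    # L2-normalize so that inner products are cosine similarities.
    u = user_emb / np.linalg.norm(user_emb, axis=1, keepdims=True)
    v = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)

    logits = u @ v.T / temperature                    # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # The diagonal holds the positive (user_i, item_i) pairs.
    return -np.mean(np.diag(log_probs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    users = rng.normal(size=(8, 16))   # hypothetical user-encoder outputs
    items = rng.normal(size=(8, 16))   # hypothetical item-encoder outputs
    print("contrastive loss:", in_batch_contrastive_loss(users, items))
```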
Related papers
- Correcting for Popularity Bias in Recommender Systems via Item Loss Equalization [1.7771454131646311]
A small set of popular items dominates the recommendation results due to their high interaction rates.
This phenomenon disproportionately benefits users with mainstream tastes while neglecting those with niche interests.
We propose an in-processing approach to address this issue by intervening in the training process of recommendation models.
arXiv Detail & Related papers (2024-10-07T08:34:18Z)
- Self-supervised Preference Optimization: Enhance Your Language Model with Preference Degree Awareness [27.43137305486112]
We propose a novel Self-supervised Preference Optimization (SPO) framework, which constructs a self-supervised preference degree loss combined with the alignment loss.
The results demonstrate that SPO can be seamlessly integrated with existing preference optimization methods to achieve state-of-the-art performance.
arXiv Detail & Related papers (2024-09-26T12:37:26Z)
- Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data [102.16105233826917]
Learning from preference labels plays a crucial role in fine-tuning large language models.
There are several distinct approaches for preference fine-tuning, including supervised learning, on-policy reinforcement learning (RL), and contrastive learning.
arXiv Detail & Related papers (2024-04-22T17:20:18Z)
- BECLR: Batch Enhanced Contrastive Few-Shot Learning [1.450405446885067]
Unsupervised few-shot learning aspires to bridge this gap by discarding the reliance on annotations at training time.
We propose a novel Dynamic Clustered mEmory (DyCE) module to promote a highly separable latent representation space.
We then tackle the somewhat overlooked yet critical issue of sample bias at the few-shot inference stage.
arXiv Detail & Related papers (2024-02-04T10:52:43Z)
- Understanding Biases in ChatGPT-based Recommender Systems: Provider Fairness, Temporal Stability, and Recency [9.882829614199453]
This paper explores the biases in ChatGPT-based recommender systems, focusing on provider fairness (item-side fairness).
In the first experiment, we assess seven distinct prompt scenarios on top-K recommendation accuracy and fairness.
Embedding fairness into system roles, such as "act as a fair recommender," proved more effective than fairness directives within prompts.
arXiv Detail & Related papers (2024-01-19T08:09:20Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this approach offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt at learning multiclass scoring functions by optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
- Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
arXiv Detail & Related papers (2020-06-08T18:20:28Z)
- Accelerated Convergence for Counterfactual Learning to Rank [65.63997193915257]
We show that the convergence rate of SGD approaches with IPS-weighted gradients suffers from the large variance introduced by the IPS weights; a brief sketch of this variance issue follows the list below.
We propose a novel learning algorithm, called CounterSample, that has provably better convergence than standard IPS-weighted gradient descent methods.
We prove that CounterSample converges faster and complement our theoretical findings with empirical results.
arXiv Detail & Related papers (2020-05-21T12:53:36Z)
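As referenced in the CounterSample entry above, the following is a small, self-contained NumPy simulation, a sketch of the variance problem rather than the CounterSample algorithm itself (all names and constants are illustrative): an IPS estimate of average relevance from logged impressions is unbiased, but items with small exposure propensities receive very large weights, which inflates the estimator's variance and, in turn, the variance of IPS-weighted gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_logs, n_trials = 100, 2_000, 500

relevance = rng.uniform(size=n_items)        # hypothetical ground-truth relevance
propensity = rng.uniform(0.01, 1.0, size=n_items)
propensity /= propensity.sum()               # logging-policy exposure probabilities

estimates = []
for _ in range(n_trials):
    shown = rng.choice(n_items, size=n_logs, p=propensity)   # logged impressions
    clicked = rng.uniform(size=n_logs) < relevance[shown]    # simulated clicks
    # Each click is re-weighted by 1 / (n_items * propensity): this makes the
    # estimate of the mean relevance unbiased, but rarely exposed items carry
    # huge weights and dominate the variance.
    ips = clicked / (n_items * propensity[shown])
    estimates.append(ips.mean())

print("true mean relevance:", relevance.mean())
print("IPS estimate  mean :", np.mean(estimates))
print("IPS estimate  std  :", np.std(estimates))
```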