Cross Pairwise Ranking for Unbiased Item Recommendation
- URL: http://arxiv.org/abs/2204.12176v1
- Date: Tue, 26 Apr 2022 09:20:27 GMT
- Title: Cross Pairwise Ranking for Unbiased Item Recommendation
- Authors: Qi Wan, Xiangnan He, Xiang Wang, Jiancan Wu, Wei Guo, Ruiming Tang
- Abstract summary: We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
- Score: 57.71258289870123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most recommender systems optimize the model on observed interaction data,
which is affected by the previous exposure mechanism and exhibits many biases
like popularity bias. The loss functions, such as the widely used pointwise
Binary Cross-Entropy and pairwise Bayesian Personalized Ranking, are not
designed to consider the biases in observed data. As a result, the model
optimized on the loss would inherit the data biases, or even worse, amplify the
biases. For example, a few popular items take up more and more exposure
opportunities, severely hurting the recommendation quality on niche items --
known as the notorious Matthew effect. In this work, we develop a new learning
paradigm named Cross Pairwise Ranking (CPR) that achieves unbiased
recommendation without knowing the exposure mechanism. Distinct from inverse
propensity scoring (IPS), we change the loss term of a sample -- we
sample multiple observed interactions at once and form the loss as
the combination of their predictions. We prove in theory that this way offsets
the influence of user/item propensity on the learning, removing the influence
of data biases caused by the exposure mechanism. Unlike IPS, our
proposed CPR ensures unbiased learning for each training instance without the
need to set propensity scores. Experimental results demonstrate the
superiority of CPR over state-of-the-art debiasing solutions in both model
generalization and training efficiency. The codes are available at
https://github.com/Qcactus/CPR.
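The cross-pairwise construction described in the abstract can be sketched as follows. This is a minimal illustration of the simplest case with two sampled observed interactions, not the paper's full training procedure; the dot-product scorer and the toy embeddings are hypothetical. The key property is visible in the margin: any propensity term that is added uniformly to every score of a given user (or item) appears once with a plus sign and once with a minus sign, so it cancels.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cpr_loss(score, u1, i1, u2, i2):
    """CPR-style loss over two observed interactions (u1,i1) and (u2,i2).

    The observed pairs are compared against their "crossed" (typically
    unobserved) pairs (u1,i2) and (u2,i1). A per-user or per-item offset
    added to score(u, .) or score(., i) cancels in the margin below.
    """
    margin = (score(u1, i1) + score(u2, i2)
              - score(u1, i2) - score(u2, i1))
    return -math.log(sigmoid(margin))

# Hypothetical dot-product scorer over toy 2-d embeddings.
users = {"u1": (1.0, 0.0), "u2": (0.0, 1.0)}
items = {"i1": (2.0, 0.0), "i2": (0.0, 2.0)}

def score(u, i):
    return sum(a * b for a, b in zip(users[u], items[i]))

# margin = 2 + 2 - 0 - 0 = 4, so the loss is small (observed pairs win).
loss = cpr_loss(score, "u1", "i1", "u2", "i2")
```

In a real model the four scores would come from one shared scoring network and the loss would be averaged over many sampled interaction tuples; the cancellation argument is what the paper proves removes the exposure-induced bias.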
Related papers
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Bilateral Self-unbiased Learning from Biased Implicit Feedback [10.690479112143658]
We propose a novel unbiased recommender learning model, namely BIlateral SElf-unbiased Recommender (BISER).
BISER consists of two key components: (i) self-inverse propensity weighting (SIPW) to gradually mitigate the bias of items without incurring high computational costs; and (ii) bilateral unbiased learning (BU) to bridge the gap between two complementary models in model predictions.
Extensive experiments show that BISER consistently outperforms state-of-the-art unbiased recommender models over several datasets.
arXiv Detail & Related papers (2022-07-26T05:17:42Z)
- Deep Causal Reasoning for Recommendations [47.83224399498504]
A recent trend in recommender system research is to remove the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Unbiased Pairwise Learning to Rank in Recommender Systems [4.058828240864671]
Unbiased learning to rank algorithms are appealing candidates and have already been applied in many applications with single categorical labels.
We propose a novel unbiased LTR algorithm to tackle the challenges, which innovatively models position bias in the pairwise fashion.
Experiment results on public benchmark datasets and internal live traffic show the superior results of the proposed method for both categorical and continuous labels.
arXiv Detail & Related papers (2021-11-25T06:04:59Z)
- Debiased Explainable Pairwise Ranking from Implicit Feedback [0.3867363075280543]
We focus on the state-of-the-art pairwise ranking model, Bayesian Personalized Ranking (BPR).
BPR is a black box model that does not explain its outputs, thus limiting the user's trust in the recommendations.
We propose a novel explainable loss function and a corresponding Matrix Factorization-based model that generates recommendations along with item-based explanations.
arXiv Detail & Related papers (2021-07-30T17:19:37Z)
- Understanding the Effects of Adversarial Personalized Ranking Optimization Method on Recommendation Quality [6.197934754799158]
We model the learning characteristics of the Bayesian Personalized Ranking (BPR) and APR optimization frameworks.
We show that APR amplifies the popularity bias more than BPR due to an unbalanced number of received positive updates from short-head items.
arXiv Detail & Related papers (2021-07-29T10:22:20Z)
- Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures [62.562760228942054]
Existing approaches to improve robustness against dataset biases mostly focus on changing the training objective.
We propose to augment the input sentences in the training data with their corresponding predicate-argument structures.
We show that without targeting a specific bias, our sentence augmentation improves the robustness of transformer models against multiple biases.
arXiv Detail & Related papers (2020-10-23T16:22:05Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- Modeling and Counteracting Exposure Bias in Recommender Systems [0.0]
We study the bias inherent in widely used recommendation strategies such as matrix factorization.
We propose new debiasing strategies for recommender systems.
Our results show that recommender systems are biased and depend on the prior exposure of the user.
arXiv Detail & Related papers (2020-01-01T00:12:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.