Post-hoc Popularity Bias Correction in GNN-based Collaborative Filtering
- URL: http://arxiv.org/abs/2510.12959v1
- Date: Tue, 14 Oct 2025 20:10:08 GMT
- Title: Post-hoc Popularity Bias Correction in GNN-based Collaborative Filtering
- Authors: Md Aminul Islam, Elena Zheleva, Ren Wang
- Abstract summary: We propose a Post-hoc Popularity Debiasing (PPD) method that corrects for popularity bias in collaborative filtering (CF). Our method outperforms state-of-the-art approaches for popularity bias correction in GNN-based CF.
- Score: 7.582801684816001
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: User historical interaction data is the primary signal for learning user preferences in collaborative filtering (CF). However, the training data often exhibits a long-tailed distribution, where only a few items have the majority of interactions. CF models trained directly on such imbalanced data are prone to learning popularity bias, which reduces personalization and leads to suboptimal recommendation quality. Graph Neural Networks (GNNs), while effective for CF due to their message passing mechanism, can further propagate and amplify popularity bias through their aggregation process. Existing approaches typically address popularity bias by modifying training objectives but fail to directly counteract the bias propagated during the GNN's neighborhood aggregation. Applying weights to interactions during aggregation can help alleviate this problem, yet it risks distorting model learning due to unstable node representations in the early stages of training. In this paper, we propose a Post-hoc Popularity Debiasing (PPD) method that corrects for popularity bias in GNN-based CF and operates directly on pre-trained embeddings without requiring retraining. By estimating interaction-level popularity and removing popularity components from node representations via a popularity direction vector, PPD reduces bias while preserving user preferences. Experimental results show that our method outperforms state-of-the-art approaches for popularity bias correction in GNN-based CF.
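The core post-hoc correction described in the abstract, removing a popularity component from pre-trained embeddings along a popularity direction vector, can be sketched as an orthogonal projection. This is an illustrative sketch only: the paper's actual estimators for interaction-level popularity and the popularity direction are not given here, so a hypothetical popularity-weighted mean of item embeddings stands in for the direction.

```python
import numpy as np

def popularity_direction(item_emb: np.ndarray, interactions: np.ndarray) -> np.ndarray:
    """Unit vector pointing toward popular items (hypothetical estimator:
    popularity-weighted mean of the pre-trained item embeddings)."""
    weights = interactions / interactions.sum()
    d = weights @ item_emb  # (dim,) weighted mean embedding
    return d / np.linalg.norm(d)

def remove_popularity(emb: np.ndarray, d: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Subtract each embedding's component along d.
    alpha = 1.0 removes the popularity component entirely; smaller
    values would trade off debiasing against preserving the original geometry."""
    return emb - alpha * np.outer(emb @ d, d)

rng = np.random.default_rng(0)
item_emb = rng.normal(size=(200, 32))                     # stand-in pre-trained embeddings
interactions = rng.integers(1, 1000, 200).astype(float)   # per-item interaction counts

d = popularity_direction(item_emb, interactions)
debiased = remove_popularity(item_emb, d)

# After full removal, no embedding retains a component along the popularity direction.
print(np.allclose(debiased @ d, 0.0))  # → True
```

Because the correction touches only the stored embeddings, it matches the paper's "no retraining" claim: scoring (e.g. dot products between user and item vectors) proceeds unchanged on the debiased matrix.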
Related papers
- Rethinking Popularity Bias in Collaborative Filtering via Analytical Vector Decomposition [37.76421221387847]
Popularity bias is an intrinsic geometric artifact of Bayesian Pairwise Ranking (BPR) optimization. We propose Directional Decomposition and Correction (DDC) to correct this embedding geometry. DDC guides positive interactions along personalized preference directions while steering negative interactions away from the global popularity direction.
arXiv Detail & Related papers (2025-12-11T14:35:13Z)
- Causality-aware Graph Aggregation Weight Estimator for Popularity Debiasing in Top-K Recommendation [20.03303865662207]
Graph-based recommender systems leverage neighborhood aggregation to generate node representations, a process that is sensitive to popularity bias. Existing graph-based debiasing solutions refine the aggregation process with attempts such as edge reconstruction or weight adjustment. We propose a novel approach to mitigate popularity bias through rational modeling of the graph aggregation process.
arXiv Detail & Related papers (2025-10-06T05:33:37Z)
- PBiLoss: Popularity-Aware Regularization to Improve Fairness in Graph-Based Recommender Systems [1.0128808054306186]
We propose PBiLoss, a regularization-based loss function designed to explicitly counteract popularity bias in graph-based recommender models. We show that PBiLoss significantly improves fairness, as demonstrated by reductions in the Popularity-Rank Correlation for Users (PRU) and the Popularity-Rank Correlation for Items (PRI).
arXiv Detail & Related papers (2025-07-25T08:29:32Z)
- Variational Bayesian Personalized Ranking [39.24591060825056]
Variational BPR is a novel and easily implementable learning objective that integrates likelihood optimization, noise reduction, and popularity debiasing. We introduce an attention-based latent interest prototype contrastive mechanism, replacing instance-level contrastive learning, to effectively reduce noise from problematic samples. Empirically, we demonstrate the effectiveness of Variational BPR on popular backbone recommendation models.
arXiv Detail & Related papers (2025-03-14T04:22:01Z)
- Federated Class-Incremental Learning with Hierarchical Generative Prototypes [10.532838477096055]
Federated Learning (FL) aims at unburdening the training of deep models by distributing computation across multiple devices (clients). Our proposal constrains both biases in the last layer by efficiently finetuning a pre-trained backbone using learnable prompts. Our method significantly improves the current state of the art, providing an average increase of +7.8% in accuracy.
arXiv Detail & Related papers (2024-06-04T16:12:27Z)
- Going Beyond Popularity and Positivity Bias: Correcting for Multifactorial Bias in Recommender Systems [74.47680026838128]
Two typical forms of bias in user interaction data with recommender systems (RSs) are popularity bias and positivity bias.
We consider multifactorial selection bias affected by both item and rating value factors.
We propose smoothing and alternating gradient descent techniques to reduce variance and improve the robustness of its optimization.
arXiv Detail & Related papers (2024-04-29T12:18:21Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yields up to a 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- REST: Enhancing Group Robustness in DNNs through Reweighted Sparse Training [49.581884130880944]
Deep neural networks (DNNs) have proven effective in various domains.
However, they often struggle to perform well on certain minority groups during inference.
arXiv Detail & Related papers (2023-12-05T16:27:54Z)
- Robust Collaborative Filtering to Popularity Distribution Shift [56.78171423428719]
We present a simple yet effective debiasing strategy, PopGo, which quantifies and reduces the interaction-wise popularity shortcut without assumptions on the test data.
On both ID and OOD test sets, PopGo achieves significant gains over the state-of-the-art debiasing strategies.
arXiv Detail & Related papers (2023-10-16T04:20:52Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
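Several of the papers above (e.g. PBiLoss) evaluate debiasing with rank–popularity correlations such as PRU and PRI. One plausible reading of such a metric, Spearman correlation between item popularity and how early items appear in recommendation lists, can be sketched as follows; the data values and the exact metric definition here are hypothetical stand-ins, not taken from any of the listed papers.

```python
import numpy as np

def spearman(x: np.ndarray, y: np.ndarray) -> float:
    # Spearman correlation computed as Pearson on rank-transformed values
    # (no tie handling; adequate for this illustration).
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / (np.linalg.norm(rx) * np.linalg.norm(ry)))

# Hypothetical data: per-item interaction counts and the mean position each
# item receives across users' top-K lists (1 = recommended first).
popularity = np.array([500.0, 300.0, 120.0, 40.0, 10.0])
mean_position = np.array([1.2, 2.1, 2.9, 4.0, 4.8])

# Negate position so larger means "ranked earlier"; a correlation near 1
# indicates recommendations closely track popularity (strong bias),
# while a value near 0 indicates popularity-independent ranking.
pri = spearman(popularity, -mean_position)
print(pri)  # → 1.0
```

In this toy example the most popular items are always ranked first, so the correlation is exactly 1; a successful debiasing method would push this value toward 0 without degrading recommendation accuracy.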
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.