Mitigating Recommendation Biases via Group-Alignment and Global-Uniformity in Representation Learning
- URL: http://arxiv.org/abs/2511.13041v1
- Date: Mon, 17 Nov 2025 06:42:29 GMT
- Title: Mitigating Recommendation Biases via Group-Alignment and Global-Uniformity in Representation Learning
- Authors: Miaomiao Cai, Min Hou, Lei Chen, Le Wu, Haoyue Bai, Yong Li, Meng Wang
- Abstract summary: Collaborative Filtering (CF) plays a crucial role in modern recommender systems. CF-based methods often encounter biases due to imbalances in training data. We propose a framework to alleviate biases in recommendation from the perspective of representation distribution.
- Score: 31.00491858291777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative Filtering (CF) plays a crucial role in modern recommender systems, leveraging historical user-item interactions to provide personalized suggestions. However, CF-based methods often encounter biases due to imbalances in training data. As a result, CF-based methods tend to prioritize recommending popular items and perform unsatisfactorily for inactive users. Existing works address this issue by rebalancing training samples, reranking recommendation results, or making the modeling process robust to the bias. Despite their effectiveness, these approaches can compromise accuracy or be sensitive to weighting strategies, making them challenging to train. In this paper, we deeply analyze the causes and effects of the biases and propose a framework to alleviate biases in recommendation from the perspective of representation distribution, namely Group-Alignment and Global-Uniformity Enhanced Representation Learning for Debiasing Recommendation (AURL). Specifically, we identify two significant problems in the representation distribution of users and items, namely group-discrepancy and global-collapse. These two problems directly lead to biases in the recommendation results. To this end, we propose two simple but effective regularizers in the representation space, respectively named group-alignment and global-uniformity. The goal of group-alignment is to bring the representation distribution of long-tail entities closer to that of popular entities, while global-uniformity aims to preserve the information of entities as much as possible by evenly distributing representations. Our method directly optimizes both the group-alignment and global-uniformity regularization terms to mitigate recommendation biases. Extensive experiments on three real datasets and various recommendation backbones verify the superiority of our proposed framework.
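The abstract describes the two regularizers only informally; the exact losses are not given here. A minimal sketch of what such terms could look like, assuming the standard alignment/uniformity formulation from representation-learning literature (the function names, the mean-embedding alignment, and the Gaussian-potential uniformity with temperature `t` are illustrative assumptions, not the paper's verified definitions):

```python
import numpy as np

def group_alignment(pop_emb, tail_emb):
    """Squared distance between the mean representations of popular and
    long-tail entity groups; minimizing it pulls the long-tail
    distribution toward the popular one (mitigating group-discrepancy)."""
    return float(np.linalg.norm(pop_emb.mean(axis=0) - tail_emb.mean(axis=0)) ** 2)

def global_uniformity(emb, t=2.0):
    """Log of the mean pairwise Gaussian potential over normalized
    embeddings; lower values mean representations are spread more evenly
    on the unit sphere (mitigating global-collapse)."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sq_dists = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
    n = emb.shape[0]
    off_diag = sq_dists[~np.eye(n, dtype=bool)]  # exclude self-distances
    return float(np.log(np.exp(-t * off_diag).mean()))
```

Under this reading, the total training objective would be the backbone's recommendation loss plus weighted copies of these two terms.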
Related papers
- The Unfairness of Multifactorial Bias in Recommendation [68.35079031029616]
Popularity bias and positivity bias are prominent sources of bias in recommender systems. In this work, we examine how multifactorial bias influences item-side fairness. We adapt a percentile-based rating transformation as a pre-processing strategy to mitigate multifactorial bias.
arXiv Detail & Related papers (2026-01-19T08:37:43Z) - Towards Reliable and Holistic Visual In-Context Learning Prompt Selection [82.23704441763651]
Visual In-Context Learning (VICL) has emerged as a prominent approach for adapting visual foundation models to novel tasks. VICL methods, such as Partial2Global and VPR, are grounded in the similarity-priority assumption that images more visually similar to a query image serve as better in-context examples. This paper introduces an enhanced variant of Partial2Global designed for reliable and holistic selection of in-context examples in VICL.
arXiv Detail & Related papers (2025-09-30T09:23:12Z) - Improving Recommendation Fairness via Graph Structure and Representation Augmentation [9.754198447907779]
Graph Convolutional Networks (GCNs) have become increasingly popular in recommendation systems. Recent studies have shown that GCN-based models can cause sensitive information to disseminate widely through the graph structure. We propose a dual data augmentation framework for fair recommendation, which includes two data augmentation strategies to generate fair augmented graphs and feature representations.
arXiv Detail & Related papers (2025-08-27T03:41:01Z) - Class-Conditional Distribution Balancing for Group Robust Classification [11.525201208566925]
Spurious correlations that lead models to correct predictions for the wrong reasons pose a critical challenge for robust real-world generalization. We offer a novel perspective by reframing spurious correlations as imbalances or mismatches in class-conditional distributions. We propose a simple yet effective robust learning method that eliminates the need for both bias annotations and predictions.
arXiv Detail & Related papers (2025-04-24T07:15:53Z) - Curriculum-enhanced GroupDRO: Challenging the Norm of Avoiding Curriculum Learning in Subpopulation Shift Setups [2.719510212909501]
In subpopulation shift scenarios, a Curriculum Learning (CL) approach would only serve to imprint the model weights, early on, with the easily learnable spurious correlations.
We propose a Curriculum-enhanced Group Distributionally Robust Optimization (CeGDRO) approach, which prioritizes the hardest bias-confirming samples and the easiest bias-conflicting samples.
arXiv Detail & Related papers (2024-11-22T13:38:56Z) - Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z) - A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs [57.35402286842029]
We propose a novel Aligned Federated Primal-Dual (A-FedPD) method, which constructs virtual dual updates to align global and local clients. We provide a comprehensive analysis of the A-FedPD method's efficiency.
arXiv Detail & Related papers (2024-09-27T17:00:32Z) - Emulating Full Participation: An Effective and Fair Client Selection Strategy for Federated Learning [50.060154488277036]
In federated learning, client selection is a critical problem that significantly impacts both model performance and fairness. We propose two guiding principles that tackle the inherent conflict between the two metrics while reinforcing each other. Our approach adaptively enhances this diversity by selecting clients based on their data distributions, thereby improving both model performance and fairness.
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - Group Robust Classification Without Any Group Information [5.053622900542495]
This study contends that current bias-unsupervised approaches to group robustness continue to rely on group information to achieve optimal performance.
Bias labels are still crucial for effective model selection, restricting the practicality of these methods in real-world scenarios.
We propose a revised methodology for training and validating debiased models in an entirely bias-unsupervised manner.
arXiv Detail & Related papers (2023-10-28T01:29:18Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - ICPE: An Item Cluster-Wise Pareto-Efficient Framework for Recommendation Debiasing [7.100121083949393]
In this work, we explore the central theme of recommendation debiasing from an item cluster-wise multi-objective optimization perspective. Aiming to balance the learning on various item clusters that differ in popularity during the training process, we propose a model-agnostic framework, namely the Item Cluster-Wise Pareto-Efficient framework (ICPE). In detail, we define the optimization target as balancing the recommender model's learning across all item clusters that differ in popularity.
arXiv Detail & Related papers (2021-09-27T09:17:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.