Improving Recommendation Fairness via Graph Structure and Representation Augmentation
- URL: http://arxiv.org/abs/2508.19547v1
- Date: Wed, 27 Aug 2025 03:41:01 GMT
- Title: Improving Recommendation Fairness via Graph Structure and Representation Augmentation
- Authors: Tongxin Xu, Wenqiang Liu, Chenzhong Bin, Cihan Xiao, Zhixin Zeng, Tianlong Gu,
- Abstract summary: Graph Convolutional Networks (GCNs) have become increasingly popular in recommendation systems. Recent studies have shown that GCN-based models can cause sensitive information to disseminate widely in the graph structure. We propose a dual data augmentation framework for fair recommendation, which includes two data augmentation strategies to generate fair augmented graphs and feature representations.
- Score: 9.754198447907779
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolutional Networks (GCNs) have become increasingly popular in recommendation systems. However, recent studies have shown that GCN-based models can cause sensitive information to disseminate widely in the graph structure, amplifying data bias and raising fairness concerns. While various fairness methods have been proposed, most of them neglect the impact of biased data on representation learning, which results in limited fairness improvement. Moreover, some studies have focused on constructing fair and balanced data distributions through data augmentation, but these methods significantly reduce utility due to the disruption of user preferences. In this paper, we aim to design a fair recommendation method from the perspective of data augmentation that improves fairness while preserving recommendation utility. To achieve fairness-aware data augmentation with minimal disruption to user preferences, we propose two prior hypotheses. The first hypothesis identifies sensitive interactions by comparing the outcomes of performance-oriented and fairness-aware recommendations, while the second focuses on detecting sensitive features by analyzing feature similarities between biased and debiased representations. We then propose a dual data augmentation framework for fair recommendation, which includes two data augmentation strategies to generate fair augmented graphs and feature representations. Furthermore, we introduce a debiasing learning method that minimizes the dependence between the learned representations and sensitive information to eliminate bias. Extensive experiments on two real-world datasets demonstrate the superiority of our proposed framework.
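The debiasing step described in the abstract, minimizing the dependence between learned representations and sensitive information, can be sketched with a kernel dependence measure. The HSIC estimator below is a generic stand-in for such a dependence penalty, not necessarily the paper's exact criterion; the embedding arrays and the RBF bandwidth are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(K, L):
    # Biased empirical HSIC estimator: trace(K H L H) / (n - 1)^2,
    # where H is the centering matrix. Larger values indicate stronger
    # statistical dependence between the two variables.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=(200, 1)).astype(float)   # sensitive attribute
z_biased = s + 0.1 * rng.normal(size=(200, 1))        # embedding leaking s
z_fair = rng.normal(size=(200, 1))                    # embedding independent of s

K_s = rbf_kernel(s)
print(hsic(K_s, rbf_kernel(z_biased)), hsic(K_s, rbf_kernel(z_fair)))
```

In a training loop, such a dependence score would be added to the recommendation loss as a penalty, pushing the learned embeddings away from the sensitive attribute while the utility term preserves preference signal.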
Related papers
- The Unfairness of Multifactorial Bias in Recommendation [68.35079031029616]
Popularity bias and positivity bias are prominent sources of bias in recommender systems. In this work, we examine how multifactorial bias influences item-side fairness. We adapt a percentile-based rating transformation as a pre-processing strategy to mitigate multifactorial bias.
arXiv Detail & Related papers (2026-01-19T08:37:43Z) - FROG: Fair Removal on Graphs [27.5582982873392]
We propose a novel approach that jointly optimizes the graph structure and the corresponding model for fair unlearning tasks. Specifically, our approach rewires the graph to enhance unlearning efficiency by removing redundant edges that hinder forgetting. We introduce a worst-case evaluation mechanism to assess the reliability of fair unlearning performance.
arXiv Detail & Related papers (2025-03-23T20:39:53Z) - FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z) - How Fair is Your Diffusion Recommender Model? [17.321932595953527]
We propose the first empirical study of fairness for DiffRec, the pioneer technique in diffusion-based recommendation. Our study involves DiffRec and its variant L-DiffRec, tested against nine recommender systems on two benchmarking datasets. While showing worrying trends in alignment with the more general machine-learning literature on diffusion models, our results also indicate promising directions to address the unfairness issue in future work.
arXiv Detail & Related papers (2024-09-06T15:17:40Z) - Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, where we prove the racial bias of public state-of-the-art (SOTA) methods. We design novel metrics, including the Approach Averaged Metric and the Utility Regularized Metric, which can avoid deceptive results. We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z) - Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering-based recommendation learns users' preferences from all users' historical behavior data and has become popular for facilitating decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - Fair Node Representation Learning via Adaptive Data Augmentation [9.492903649862761]
This work theoretically explains the sources of bias in node representations obtained via Graph Neural Networks (GNNs).
Building upon the analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias.
Our analysis and proposed schemes can be readily employed to enhance the fairness of various GNN-based learning mechanisms.
arXiv Detail & Related papers (2022-01-21T05:49:15Z) - Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
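As a concrete illustration of one related approach, the two-sample Cross Pairwise Ranking loss from the CPR entry above can be sketched as follows, assuming simple dot-product scoring between user and item embeddings. This is a minimal sketch of the cross-pairwise idea (observed pairs should jointly outscore the swapped pairs); the actual method samples larger tuples inside a full recommender.

```python
import numpy as np

def cpr_loss(u1, i1, u2, i2):
    # Two-sample Cross Pairwise Ranking loss with dot-product scoring:
    # the observed pairs (u1, i1) and (u2, i2) should jointly outscore
    # the swapped pairs (u1, i2) and (u2, i1).
    # np.log1p(np.exp(-x)) is softplus(-x) = -log(sigmoid(x)).
    pos = u1 @ i1 + u2 @ i2
    neg = u1 @ i2 + u2 @ i1
    return np.log1p(np.exp(-(pos - neg)))

u1 = np.array([1.0, 0.0]); i1 = np.array([1.0, 0.0])
u2 = np.array([0.0, 1.0]); i2 = np.array([0.0, 1.0])
print(cpr_loss(u1, i1, u2, i2) < cpr_loss(u1, i2, u2, i1))  # True
```

Because the loss compares score sums across users rather than sampling unobserved negatives for a single user, it sidesteps modeling the exposure mechanism, which is the property the CPR abstract highlights.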
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.