Privacy-Preserving Orthogonal Aggregation for Guaranteeing Gender Fairness in Federated Recommendation
- URL: http://arxiv.org/abs/2411.19678v1
- Date: Fri, 29 Nov 2024 13:12:11 GMT
- Title: Privacy-Preserving Orthogonal Aggregation for Guaranteeing Gender Fairness in Federated Recommendation
- Authors: Siqing Zhang, Yuchen Ding, Wei Tang, Wei Sun, Yong Liao, Peng Yuan Zhou
- Abstract summary: We study whether federated recommendation systems can achieve group fairness under stringent privacy constraints. We propose Privacy-Preserving Orthogonal Aggregation (PPOA), which employs a secure aggregation scheme and quantization technique. Experimental results show PPOA enhances recommendation effectiveness for both females and males by up to 8.25% and 6.36%, respectively.
- Score: 18.123459468576648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Under stringent privacy constraints, whether federated recommendation systems can achieve group fairness remains an inadequately explored question. Taking gender fairness as a representative issue, we identify three phenomena in federated recommendation systems: performance difference, data imbalance, and preference disparity. We discover that state-of-the-art methods focus only on the first phenomenon; consequently, their imposition of inappropriate fairness constraints detrimentally affects model training. Moreover, because existing works insufficiently protect sensitive attributes, we can infer the gender of all users with 99.90% accuracy even with the addition of maximal noise. In this work, we propose Privacy-Preserving Orthogonal Aggregation (PPOA), which employs a secure aggregation scheme and a quantization technique to prevent the suppression of minority groups by the majority and to preserve their distinct preferences for better group fairness. PPOA assists different groups in obtaining their respective model aggregation results through a designed orthogonal mapping while keeping their attributes private. Experimental results on three real-world datasets demonstrate that PPOA enhances recommendation effectiveness for both females and males by up to 8.25% and 6.36%, respectively, with a maximum overall improvement of 7.30%, and achieves optimal fairness in most cases. Extensive ablation experiments and visualizations indicate that PPOA successfully maintains the preferences of different gender groups.
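To make the orthogonal-mapping idea concrete, below is a minimal numpy sketch under simplifying assumptions: two groups assigned orthonormal basis rows, zero-sum random masks standing in for the actual secure aggregation protocol, quantization omitted, and per-group counts assumed to be aggregated separately. It illustrates how a server could recover each group's mean update from a masked sum without seeing any individual's group label; it is not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                        # dimension of a model update
groups = np.array([0, 1, 0, 1, 1, 0])        # private gender-group labels
updates = rng.normal(size=(len(groups), d))  # each client's local update

E = np.eye(2)                                # orthonormal basis, one row per group

# Each client lifts its update into its group's orthogonal slot, so the
# two groups occupy separable subspaces of the aggregate.
lifted = np.stack([np.outer(E[g], u) for g, u in zip(groups, updates)])

# Zero-sum random masks stand in for secure aggregation: the server only
# ever sees masked shares, never an individual (group, update) pair.
masks = rng.normal(size=lifted.shape)
masks -= masks.mean(axis=0)                  # masks now sum to zero over clients
shares = lifted + masks

agg = shares.sum(axis=0)                     # masks cancel: equals lifted.sum(0)

for g in (0, 1):
    count = (groups == g).sum()              # assumed aggregated separately
    group_mean = E[g] @ agg / count          # project out group g's aggregate
    assert np.allclose(group_mean, updates[groups == g].mean(axis=0))
```

Because the basis rows are orthogonal, neither group's sum contaminates the other's, which is what keeps the majority group from drowning out the minority during aggregation.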
Related papers
- Fairness-Aware Grouping for Continuous Sensitive Variables: Application for Debiasing Face Analysis with respect to Skin Tone [3.3298048942057523]
We propose a fairness-based grouping approach for continuous (possibly multidimensional) sensitive attributes. By grouping data according to observed levels of discrimination, our method identifies the partition that maximizes a novel criterion. We validate the proposed approach using multiple synthetic datasets and demonstrate its robustness under changing population distributions.
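As a rough illustration of the grouping step, the sketch below buckets samples by their observed discrimination level using quantiles; the paper instead searches for the partition that maximizes its proposed criterion, so quantile binning is only a stand-in.

```python
import numpy as np

def group_by_discrimination(disc_scores, n_groups=3):
    """Bucket samples by observed discrimination level using quantile
    edges. The paper searches for the partition maximizing a dedicated
    fairness criterion; quantile binning is only a stand-in."""
    qs = np.linspace(0.0, 1.0, n_groups + 1)[1:-1]
    edges = np.quantile(disc_scores, qs)
    return np.digitize(disc_scores, edges)
```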
arXiv Detail & Related papers (2025-07-15T12:21:52Z)
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z)
- Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z)
- MaxMin-RLHF: Alignment with Diverse Human Preferences [101.57443597426374]
Reinforcement Learning from Human Feedback (RLHF) aligns language models to human preferences by employing a singular reward model derived from preference data.
We learn a mixture of preference distributions via an expectation-maximization algorithm to better represent diverse human preferences.
Our algorithm achieves an average improvement of more than 16% in win-rates over conventional RLHF algorithms.
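The mixture-of-preferences idea can be illustrated with a toy Bernoulli mixture fitted by EM; the data, component count, and initialization below are illustrative assumptions, not the paper's RLHF pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy pairwise-preference labels: 1 = annotator preferred response A.
# Two assumed annotator subpopulations with opposing tastes.
prefs = np.concatenate([rng.binomial(1, 0.9, 200), rng.binomial(1, 0.2, 100)])

K = 2
pi = np.full(K, 1.0 / K)        # mixture weights
theta = np.array([0.6, 0.4])    # per-component P(prefer A), asymmetric init

for _ in range(50):
    # E-step: responsibility of each component for each label
    lik = np.where(prefs[:, None] == 1, theta, 1.0 - theta) * pi
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and component preference rates
    pi = resp.mean(axis=0)
    theta = (resp * prefs[:, None]).sum(axis=0) / resp.sum(axis=0)

print(pi.round(2), theta.round(2))   # roughly [0.67 0.33] and [0.9 0.2]
```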
arXiv Detail & Related papers (2024-02-14T03:56:27Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
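A minimal check of the two properties might look like the sketch below; the intervals are arbitrary toy constructions, not the paper's method, and EOC additionally conditions on similar outcomes, while this sketch only compares marginal group coverage.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=1000)                      # true outcomes
pred = y + rng.normal(scale=0.5, size=1000)    # toy point predictions
lo, hi = pred - 1.0, pred + 1.0                # toy prediction intervals
group = rng.integers(0, 2, size=1000)          # sensitive group labels

covered = (y >= lo) & (y <= hi)
overall = covered.mean()                        # property (2): near target level
per_group = [covered[group == g].mean() for g in (0, 1)]
gap = abs(per_group[0] - per_group[1])          # property (1): should be small
print(f"overall={overall:.3f}, per-group={per_group}, gap={gap:.3f}")
```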
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- FairDP: Certified Fairness with Differential Privacy [55.51579601325759]
This paper introduces FairDP, a novel training mechanism designed to provide group fairness certification for the trained model's decisions.
The key idea of FairDP is to train models for distinct individual groups independently, add noise to each group's gradient for data privacy protection, and integrate knowledge from group models to formulate a model that balances privacy, utility, and fairness in downstream tasks.
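A DP-SGD-flavoured sketch of the per-group training pattern follows; the clipping, noise calibration, and final averaging step are simplifications (FairDP's actual mechanism and privacy accounting are more involved).

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_group_model(X, y, clip=1.0, sigma=1.0, lr=0.1, steps=200):
    """Train a linear model on one group's data with clipped, noised
    gradients. A DP-SGD-flavoured sketch; FairDP's actual mechanism
    and privacy accounting are more involved."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)                         # LS gradient
        grad *= min(1.0, clip / (np.linalg.norm(grad) + 1e-12))   # clip
        grad += rng.normal(scale=sigma * clip / len(y), size=grad.shape)
        w -= lr * grad
    return w

# One model per group, trained independently, then integrated (here a plain
# average; the paper's knowledge-integration step is richer).
true_w = np.array([1.0, -2.0, 0.5])
Xs = {g: rng.normal(size=(100, 3)) for g in (0, 1)}
ys = {g: Xs[g] @ true_w + rng.normal(scale=0.1, size=100) for g in (0, 1)}
w_joint = np.mean([noisy_group_model(Xs[g], ys[g]) for g in (0, 1)], axis=0)
print(w_joint.round(2))
```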
arXiv Detail & Related papers (2023-05-25T21:07:20Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
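The transition-path idea can be sketched as interpolation between paired samples from two groups; Fair-CDA builds its paths in a learned feature space with direction control, so the input-space linear interpolation below is a simplifying stand-in.

```python
import numpy as np

def transition_augment(x_src, x_dst, steps=5):
    """Samples along a straight path from a source-group example to a
    target-group example. Fair-CDA builds its paths in a learned feature
    space; input-space linear interpolation is a simplifying stand-in."""
    ts = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - ts) * x_src + ts * x_dst

# Usage idea: pair samples across groups and penalize how much the model's
# prediction changes along transition_augment(x_a, x_b).
path = transition_augment(np.zeros(4), np.ones(4))
print(path)  # 5 points from the all-zeros to the all-ones sample
```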
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Equal Experience in Recommender Systems [21.298427869586686]
We introduce a novel fairness notion (that we call equal experience) to regulate unfairness in the presence of biased data.
We propose an optimization framework that incorporates the fairness notion as a regularization term, as well as introduce computationally-efficient algorithms that solve the optimization.
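The regularization pattern described here looks roughly like the sketch below, where `equal_experience_penalty` is a hypothetical toy proxy (a squared gap in mean predicted scores between groups), not the paper's actual notion.

```python
import numpy as np

def equal_experience_penalty(scores, user_group):
    """Hypothetical toy proxy: squared gap between the mean predicted
    scores the two user groups receive. The paper's equal-experience
    notion is more elaborate; this only shows the regularization pattern."""
    gap = scores[user_group == 0].mean() - scores[user_group == 1].mean()
    return gap ** 2

def objective(task_loss, scores, user_group, lam=0.1):
    # The fairness notion enters training as a weighted regularization term.
    return task_loss + lam * equal_experience_penalty(scores, user_group)
```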
arXiv Detail & Related papers (2022-10-12T05:53:05Z)
- Improved Approximation for Fair Correlation Clustering [4.629694186457133]
Correlation clustering is a ubiquitous paradigm in unsupervised machine learning where addressing unfairness is a major challenge.
Motivated by this, we study Fair Correlation Clustering where the data points may belong to different protected groups.
Our paper significantly generalizes and improves on the quality guarantees of prior work by Ahmadi et al. and Ahmadian et al.
arXiv Detail & Related papers (2022-06-09T03:07:57Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Post-Comparison Mitigation of Demographic Bias in Face Recognition Using Fair Score Normalization [15.431761867166]
We propose a novel unsupervised fair score normalization approach to reduce the effect of bias in face recognition.
Our solution reduces demographic bias by up to 82.7% when gender is considered.
In contrast to previous works, our fair normalization approach enhances the overall performance by up to 53.2% at a false match rate of 0.001 and up to 82.9% at a false match rate of 0.00001.
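The idea behind fair score normalization can be sketched as within-cluster standardization of match scores, so that a single global threshold behaves more uniformly across groups; the function below is an illustrative stand-in, not the paper's exact normalization.

```python
import numpy as np

def fair_score_normalization(scores, cluster_ids):
    """Z-normalize face-match scores within unsupervised clusters so a
    single global threshold behaves comparably across demographic groups.
    A sketch of the idea only; the paper's normalization differs in detail."""
    out = np.empty_like(scores, dtype=float)
    for c in np.unique(cluster_ids):
        m = cluster_ids == c
        out[m] = (scores[m] - scores[m].mean()) / (scores[m].std() + 1e-8)
    return out
```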
arXiv Detail & Related papers (2020-02-10T08:17:26Z)