PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning
- URL: http://arxiv.org/abs/2205.11584v1
- Date: Mon, 23 May 2022 19:26:12 GMT
- Title: PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning
- Authors: Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De
Cock, Golnoosh Farnadi
- Abstract summary: Group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients.
We show that this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP).
In doing so, we propose a method for training group-fair ML models in cross-device FL under complete and formal privacy guarantees.
- Score: 12.767527195281042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Group fairness ensures that the outcomes of machine learning (ML) based
decision-making systems are not biased against a certain group of people
defined by a sensitive attribute such as gender or ethnicity. Achieving group
fairness in Federated Learning (FL) is challenging because mitigating bias
inherently requires using the sensitive attribute values of all clients, while
FL is aimed precisely at protecting privacy by not giving access to the
clients' data. As we show in this paper, this conflict between fairness and
privacy in FL can be resolved by combining FL with Secure Multiparty
Computation (MPC) and Differential Privacy (DP). In doing so, we propose a
method for training group-fair ML models in cross-device FL under complete and
formal privacy guarantees, without requiring the clients to disclose their
sensitive attribute values.
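The combination of MPC and DP described in the abstract can be illustrated with a toy sketch (not the paper's actual protocol): clients additively secret-share per-group statistics between two aggregation servers, so no single party sees any client's sensitive-attribute values, and Laplace noise is added to the reconstructed aggregate before the fairness metric is released. The two-server setup, the in-the-clear group sizes, and all names below are simplifying assumptions.

```python
import math
import random

PRIME = 2**31 - 1  # field size for additive secret sharing (toy MPC stand-in)

def share(value, n_servers=2):
    """Split an integer into additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo PRIME."""
    return sum(shares) % PRIME

def laplace_noise(scale):
    """Sample Laplace noise for an epsilon-DP release of a count (sensitivity 1)."""
    u = random.random() - 0.5
    while u == -0.5:  # avoid log(0) at the boundary
        u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Each client holds (positive_prediction, sensitive_group) pairs.
clients = [[(1, 0), (0, 1)], [(1, 1), (1, 0)], [(0, 0), (1, 1)]]

# Clients secret-share their per-group positive-prediction counts, so
# neither aggregation server sees an individual client's statistics.
server_shares = [[0, 0], [0, 0]]   # server_shares[server][group]
group_sizes = [0, 0]               # assumed public here, for simplicity
for data in clients:
    for grp in (0, 1):
        count = sum(p for p, g in data if g == grp)
        for s, sh in enumerate(share(count)):
            server_shares[s][grp] = (server_shares[s][grp] + sh) % PRIME
        group_sizes[grp] += sum(1 for _, g in data if g == grp)

# The servers reconstruct only the aggregate counts, then add DP noise
# before computing the demographic-parity gap.
epsilon = 1.0
noisy_counts = [
    reconstruct([server_shares[0][g], server_shares[1][g]]) + laplace_noise(1.0 / epsilon)
    for g in (0, 1)
]
dp_gap = abs(noisy_counts[0] / group_sizes[0] - noisy_counts[1] / group_sizes[1])
```

The key point of the sketch is that individual sensitive attributes are never revealed: each server holds only uniformly random shares, and the DP noise protects the final aggregate.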
Related papers
- Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning [4.152322723065285]
Federated learning (FL) has emerged as a prominent method in machine learning, emphasizing privacy preservation by allowing multiple clients to collaboratively build a model while keeping their training data private.
Despite this focus on privacy, FL models are susceptible to various attacks, including membership inference attacks (MIAs).
arXiv Detail & Related papers (2024-07-26T22:44:41Z) - Multi-dimensional Fair Federated Learning [25.07463977553212]
Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data.
Group fairness and client fairness are two dimensions of fairness that are important for FL.
We propose a method, called mFairFL, to achieve group fairness and client fairness simultaneously.
arXiv Detail & Related papers (2023-12-09T11:37:30Z) - Toward the Tradeoffs between Privacy, Fairness and Utility in Federated
Learning [10.473137837891162]
Federated Learning (FL) is a novel privacy-protection distributed machine learning paradigm.
We propose a privacy-protection fairness FL method to protect the privacy of the client model.
We characterize the relationship between privacy, fairness, and utility, and show that there is a tradeoff among them.
arXiv Detail & Related papers (2023-11-30T02:19:35Z) - Advancing Personalized Federated Learning: Group Privacy, Fairness, and
Beyond [6.731000738818571]
Federated learning (FL) is a framework for training machine learning models in a distributed and collaborative manner.
In this paper, we address the triadic interaction among personalization, privacy guarantees, and fairness attained by models trained within the FL framework.
A method is proposed that provides group privacy guarantees through the use of $d$-privacy.
arXiv Detail & Related papers (2023-09-01T12:20:19Z) - Fair Differentially Private Federated Learning Framework [0.0]
Federated learning (FL) is a distributed machine learning strategy that enables participants to collaborate and train a shared model without sharing their individual datasets.
Privacy and fairness are crucial considerations in FL.
This paper presents a framework that addresses the challenges of generating a fair global model without validation data and of creating a global model with differential privacy.
arXiv Detail & Related papers (2023-05-23T09:58:48Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially
Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - FairVFL: A Fair Vertical Federated Learning Framework with Contrastive
Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
To protect user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z) - Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows data owners to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z) - Understanding Clipping for Federated Learning: Convergence and
Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
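The mechanism studied in that entry can be sketched roughly as follows (a minimal illustration, not the paper's exact algorithm): client-level DP-FedAvg clips each client's model update to a fixed L2 norm bound C, averages the clipped updates, and adds Gaussian noise calibrated to C. The function and parameter names are assumptions.

```python
import math
import random

def clip(update, C):
    """Scale an update so its L2 norm is at most C."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, C / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def dp_fedavg_round(updates, C, noise_mult):
    """Average clipped client updates and add Gaussian noise scaled to C.

    Clipping bounds each client's contribution (sensitivity C), so adding
    noise with std noise_mult * C / n to the average gives client-level DP.
    """
    n = len(updates)
    clipped = [clip(u, C) for u in updates]
    avg = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_mult * C / n  # per-coordinate noise standard deviation
    return [a + random.gauss(0.0, sigma) for a in avg]

# Three toy 2-dimensional client updates; the middle one exceeds the bound.
updates = [[0.5, -1.0], [3.0, 4.0], [0.1, 0.2]]
new_delta = dp_fedavg_round(updates, C=1.0, noise_mult=0.0)  # noise off for clarity
```

Note that clipping introduces bias whenever client updates exceed C, which is exactly the clipping-bias/heterogeneity interaction the entry above analyzes.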
arXiv Detail & Related papers (2021-06-25T14:47:19Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.