Toward the Tradeoffs between Privacy, Fairness and Utility in Federated
Learning
- URL: http://arxiv.org/abs/2311.18190v1
- Date: Thu, 30 Nov 2023 02:19:35 GMT
- Title: Toward the Tradeoffs between Privacy, Fairness and Utility in Federated
Learning
- Authors: Kangkang Sun, Xiaojin Zhang, Xi Lin, Gaolei Li, Jing Wang, and Jianhua
Li
- Abstract summary: Federated Learning (FL) is a novel privacy-preserving distributed machine learning paradigm.
We propose a privacy-preserving fairness FL method to protect the privacy of the client model.
We characterize the relationship between privacy, fairness, and utility, and conclude that there is a tradeoff among them.
- Score: 10.473137837891162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a novel privacy-preserving distributed
machine learning paradigm that guarantees user privacy and reduces the risk of
data leakage, thanks to the clients' local training. Researchers have worked to
design fair FL systems that ensure the fairness of the results. However, the
interplay between fairness and privacy has been less studied: increasing the
fairness of an FL system can have an impact on user privacy, while an increase
in user privacy can affect fairness. In this work, on the client side, we use
fairness metrics such as Demographic Parity (DemP), Equalized Odds (EOs), and
Disparate Impact (DI) to construct a local fair model. To protect the privacy
of the client model, we propose a privacy-preserving fairness FL method. The
results show that the accuracy of the fair model increases once privacy is
added, because the privacy mechanism relaxes the constraints imposed by the
fairness metrics. From our experiments, we characterize the relationship
between privacy, fairness, and utility, and conclude that there is a tradeoff
among them.
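For reference, the three fairness metrics named in the abstract have standard definitions in the fairness literature, which the abstract does not restate. A minimal formulation for a binary predictor $\hat{Y}$ with sensitive attribute $A$ and true label $Y$ (the notation here is ours, not the paper's) is:

```latex
% Standard definitions (not restated in the abstract) for a binary
% predictor \hat{Y}, sensitive attribute A \in {0,1}, and true label Y.
\begin{align*}
\text{DemP:} \quad & P(\hat{Y}=1 \mid A=0) = P(\hat{Y}=1 \mid A=1) \\
\text{EOs:}  \quad & P(\hat{Y}=1 \mid A=0, Y=y) = P(\hat{Y}=1 \mid A=1, Y=y),
                     \quad y \in \{0,1\} \\
\text{DI:}   \quad & \frac{P(\hat{Y}=1 \mid A=0)}{P(\hat{Y}=1 \mid A=1)}
                     \;\ge\; \tau
                     \quad \text{(commonly } \tau = 0.8\text{, the ``80\% rule'')}
\end{align*}
```

A local fair model in this sense constrains client-side training so that these quantities, estimated on the client's data, stay within a chosen tolerance.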
Related papers
- PFGuard: A Generative Framework with Privacy and Fairness Safeguards [14.504462873398461]
PFGuard is a generative framework with privacy and fairness safeguards.
It balances privacy-fairness conflicts between fair and private training stages.
Experiments show that PFGuard successfully generates synthetic data for high-dimensional datasets.
arXiv Detail & Related papers (2024-10-03T06:37:16Z)
- FedFDP: Fairness-Aware Federated Learning with Differential Privacy [21.55903748640851]
Federated learning (FL) is a new machine learning paradigm to overcome the challenge of data silos.
We first propose a fairness-aware federated learning algorithm, termed FedFair.
We then introduce differential privacy protection to form the FedFDP algorithm to address the trade-offs among fairness, privacy protection, and model performance.
arXiv Detail & Related papers (2024-02-25T08:35:21Z)
- Privacy and Fairness in Federated Learning: on the Perspective of Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
Although privacy and fairness are two crucial ethical notions, their interactions have been comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z)
- Differentially Private Wireless Federated Learning Using Orthogonal Sequences [56.52483669820023]
We propose a privacy-preserving uplink over-the-air computation (AirComp) method, termed FLORAS.
We prove that FLORAS offers both item-level and client-level differential privacy guarantees.
A new FL convergence bound is derived which, combined with the privacy guarantees, allows for a smooth tradeoff between the achieved convergence rate and differential privacy levels.
arXiv Detail & Related papers (2023-06-14T06:35:10Z)
- Fair Differentially Private Federated Learning Framework [0.0]
Federated learning (FL) is a distributed machine learning strategy that enables participants to collaborate and train a shared model without sharing their individual datasets.
Privacy and fairness are crucial considerations in FL.
This paper presents a framework that addresses the challenges of generating a fair global model without validation data and creating a globally differentially private model.
arXiv Detail & Related papers (2023-05-23T09:58:48Z)
- Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
arXiv Detail & Related papers (2022-09-08T21:01:42Z)
- "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying the performance disparities of federated models (a minimal sketch of such a mechanism follows this list).
arXiv Detail & Related papers (2022-06-24T09:46:43Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
To protect user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning [12.767527195281042]
Group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients.
We show that this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP).
In doing so, we propose a method for training group-fair ML models in cross-device FL under complete and formal privacy guarantees.
arXiv Detail & Related papers (2022-05-23T19:26:12Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates (a minimal sketch of clipped, noised aggregation follows this list).
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
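As a concrete illustration of the idea behind "You Can't Fix What You Can't Measure", here is a minimal sketch of one standard approach: applying the Laplace mechanism to per-group accuracy counts before comparing groups. The function name, the even epsilon split across counts, and the hyperparameters are assumptions for illustration; the paper's actual mechanisms are more involved.

```python
import numpy as np

def private_accuracy_gap(correct_a, total_a, correct_b, total_b,
                         epsilon=1.0, rng=None):
    """Estimate |acc_A - acc_B| under differential privacy.

    A minimal sketch (not the paper's mechanism): each count has
    sensitivity 1 under add/remove of one example, so adding
    Laplace(4/epsilon) noise to each of the four counts gives an
    epsilon-DP estimate by basic composition (epsilon/4 per count).
    """
    rng = rng or np.random.default_rng()
    scale = 4.0 / epsilon
    noisy = [c + rng.laplace(0.0, scale)
             for c in (correct_a, total_a, correct_b, total_b)]
    nca, nta, ncb, ntb = noisy
    acc_a = nca / max(nta, 1.0)  # guard against tiny/negative noisy totals
    acc_b = ncb / max(ntb, 1.0)
    return abs(acc_a - acc_b)

# Example: group A is 90% accurate on 500 examples, group B 80% on 400.
gap = private_accuracy_gap(450, 500, 320, 400, epsilon=1.0)
print(f"noisy accuracy gap ~ {gap:.3f}")  # true gap is 0.10
```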
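And for the last entry, "Understanding Clipping for Federated Learning", a minimal sketch of the mechanism it analyzes: server-side FedAvg aggregation where each client update is clipped to an L2 norm bound and Gaussian noise is added for client-level DP. The clip bound, noise multiplier, and function name are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def dp_fedavg_aggregate(client_updates, clip_norm=1.0,
                        noise_multiplier=1.0, rng=None):
    """Aggregate client model updates with clipping + Gaussian noise.

    A minimal sketch of clipped DP-FedAvg: clipping bounds each
    client's contribution (sensitivity clip_norm), and Gaussian noise
    with std = noise_multiplier * clip_norm on the summed update gives
    client-level DP. Hyperparameters here are illustrative.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Example: 10 clients, 5-dimensional updates.
rng = np.random.default_rng(0)
updates = [rng.normal(size=5) for _ in range(10)]
print(dp_fedavg_aggregate(updates, clip_norm=1.0, noise_multiplier=0.5))
```

The clipping step is also the source of the bias the paper studies: when client updates are heterogeneous, clipping distorts their average, which is why the convergence analysis ties the bias to the distribution of the clients' updates.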
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences.