Enforcing fairness in private federated learning via the modified method of differential multipliers
- URL: http://arxiv.org/abs/2109.08604v1
- Date: Fri, 17 Sep 2021 15:28:47 GMT
- Title: Enforcing fairness in private federated learning via the modified method of differential multipliers
- Authors: Borja Rodríguez-Gálvez, Filip Granqvist, Rogier van Dalen, and Matt Seigel
- Abstract summary: Federated learning with differential privacy, or private federated learning, provides a strategy to train machine learning models while respecting users' privacy.
This paper introduces an algorithm to enforce group fairness in private federated learning, where users' data does not leave their devices.
- Score: 1.3381749415517021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning with differential privacy, or private federated learning,
provides a strategy to train machine learning models while respecting users'
privacy. However, differential privacy can disproportionately degrade the
performance of the models on under-represented groups, as these parts of the
distribution are difficult to learn in the presence of noise. Existing
approaches for enforcing fairness in machine learning models have considered
the centralized setting, in which the algorithm has access to the users' data.
This paper introduces an algorithm to enforce group fairness in private
federated learning, where users' data does not leave their devices. First, the
paper extends the modified method of differential multipliers to empirical risk
minimization with fairness constraints, thus providing an algorithm to enforce
fairness in the central setting. Then, this algorithm is extended to the
private federated learning setting. The proposed algorithm, FPFL, is tested on
a federated version of the Adult dataset and an "unfair" version of the FEMNIST
dataset. The experiments on these datasets show how private federated learning
accentuates unfairness in the trained models, and how FPFL is able to mitigate
such unfairness.
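The central-setting algorithm rests on the modified method of differential multipliers (MMDM): the fairness requirement is written as an equality constraint g(θ) = 0 on the empirical risk minimization problem, and training descends a Lagrangian augmented with a quadratic damping term while ascending on the multiplier. Below is a minimal PyTorch sketch of one such update; the constraint choice (equality of per-group losses) and all names and hyper-parameters (`fairness_gap`, `penalty`, `lam_lr`) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fairness_gap(per_sample_losses, groups):
    # Illustrative equality-of-loss constraint g(theta): the difference
    # between the two groups' mean losses should be driven to zero.
    l0 = per_sample_losses[groups == 0].mean()
    l1 = per_sample_losses[groups == 1].mean()
    return l1 - l0

def mmdm_step(model, optimizer, lam, x, y, groups, penalty=1.0, lam_lr=0.05):
    # One MMDM update: gradient descent on the damped Lagrangian
    #   L(theta, lam) = risk(theta) + lam * g(theta) + (penalty / 2) * g(theta)^2,
    # followed by a gradient-ascent step on the multiplier lam.
    optimizer.zero_grad()
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    risk = per_sample.mean()
    g = fairness_gap(per_sample, groups)
    (risk + lam * g + 0.5 * penalty * g ** 2).backward()
    optimizer.step()
    return lam + lam_lr * g.detach()  # ascent on the multiplier
```

For the federated extension, the abstract says only that these updates are computed with differential privacy while data stays on devices. One standard way to realize that (an assumption here, not a detail from the paper) is for the server to clip each device's model update and add Gaussian noise before averaging:

```python
def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0):
    # Hypothetical DP aggregation: clip each client's update to an L2 norm
    # of at most clip_norm, sum, and add Gaussian noise scaled to clip_norm.
    clipped = [u * torch.clamp(clip_norm / (u.norm() + 1e-12), max=1.0)
               for u in client_updates]
    total = torch.stack(clipped).sum(dim=0)
    noise = torch.randn_like(total) * noise_multiplier * clip_norm
    return (total + noise) / len(client_updates)
```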
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on centralized training over the data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- FedFDP: Fairness-Aware Federated Learning with Differential Privacy [21.55903748640851]
Federated learning (FL) is a new machine learning paradigm to overcome the challenge of data silos.
We first propose a fairness-aware federated learning algorithm, termed FedFair.
We then introduce differential privacy protection to form the FedFDP algorithm to address the trade-offs among fairness, privacy protection, and model performance.
arXiv Detail & Related papers (2024-02-25T08:35:21Z)
- Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to the private data distribution.
arXiv Detail & Related papers (2023-05-20T07:55:58Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
For protecting user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Improving Fairness via Federated Learning [14.231231094281362]
We propose a new theoretical framework, with which we analyze the value of federated learning in improving fairness.
We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data.
To resolve this, we propose FedFB, a private fair learning algorithm on decentralized data with a modified FedAvg protocol.
arXiv Detail & Related papers (2021-10-29T05:25:44Z)
- FairFed: Enabling Group Fairness in Federated Learning [22.913999279079878]
Federated learning has been viewed as a promising solution for learning machine learning models among multiple parties.
We propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method.
Our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions.
arXiv Detail & Related papers (2021-10-02T17:55:20Z)
- On the Privacy Risks of Algorithmic Fairness [9.429448411561541]
We study the privacy risks of group fairness through the lens of membership inference attacks.
We show that fairness comes at the cost of privacy, and this cost is not distributed equally.
arXiv Detail & Related papers (2020-11-07T09:15:31Z)
- Fairness-aware Agnostic Federated Learning [47.26747955026486]
We develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.
We use kernel reweighing functions to assign a reweighing value on each training sample in both loss function and fairness constraint.
The built model can be directly applied to local sites, as it guarantees fairness on local data distributions.
arXiv Detail & Related papers (2020-10-10T17:58:20Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.