Improving Fairness via Federated Learning
- URL: http://arxiv.org/abs/2110.15545v1
- Date: Fri, 29 Oct 2021 05:25:44 GMT
- Title: Improving Fairness via Federated Learning
- Authors: Yuchen Zeng, Hongxu Chen, Kangwook Lee
- Abstract summary: We propose a new theoretical framework, with which we analyze the value of federated learning in improving fairness.
We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data.
To resolve this, we propose FedFB, a private fair learning algorithm on decentralized data with a modified FedAvg protocol.
- Score: 14.231231094281362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many algorithms have recently been proposed for learning a fair classifier from centralized data. However, how to privately train a fair classifier on decentralized data has not been fully studied yet. In this work, we first propose a new theoretical framework with which we analyze the value of federated learning in improving fairness. Our analysis reveals that federated learning can strictly boost model fairness compared with all non-federated algorithms. We then show, both theoretically and empirically, that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data. To resolve this, we propose FedFB, a private fair learning algorithm on decentralized data with a modified FedAvg protocol. Our extensive experimental results show that FedFB significantly outperforms existing approaches, sometimes achieving a tradeoff similar to that of a model trained on centralized data.
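The abstract describes FedFB only as a modified FedAvg protocol. As a rough illustration of what such a modification can look like, below is a minimal sketch of one FedAvg-style round in which the server also maintains per-group reweighting coefficients updated from aggregate disparity statistics, in the spirit of FairBatch-style reweighting; the function names, the reweighting rule, and the two-group setup are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def local_step(w, X, y, groups, group_lambda, lr=0.1):
    """One local pass of reweighted logistic regression on a client."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    sample_w = group_lambda[groups]            # per-sample fairness weight
    grad = X.T @ (sample_w * (p - y)) / len(y)
    return w - lr * grad

def fedavg_fair_round(w, clients, group_lambda, eta=0.05):
    """clients: (X, y, groups) tuples that never leave their devices;
    assumes two demographic groups (0/1) present on every client."""
    updates, sizes, gaps = [], [], []
    for X, y, groups in clients:
        w_k = local_step(w.copy(), X, y, groups, group_lambda)
        p = 1.0 / (1.0 + np.exp(-(X @ w_k)))
        updates.append(w_k)
        sizes.append(len(y))
        # only an aggregate disparity statistic is reported to the server
        gaps.append(p[groups == 1].mean() - p[groups == 0].mean())
    sizes = np.asarray(sizes, dtype=float)
    w_new = np.average(np.stack(updates), axis=0, weights=sizes)  # FedAvg
    # the modification: the server nudges group weights against disparity
    gap = float(np.average(gaps, weights=sizes))
    group_lambda = np.clip(group_lambda + eta * np.array([gap, -gap]),
                           0.1, 10.0)
    return w_new, group_lambda
```

In this sketch only model updates and a scalar disparity statistic leave each device; the raw features, labels, and group memberships stay local.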
Related papers
- Distribution-Free Fair Federated Learning with Small Samples [54.63321245634712]
FedFaiREE is a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples.
We provide rigorous theoretical guarantees for both fairness and accuracy, and our experiments provide robust empirical validation of the proposed method.
arXiv Detail & Related papers (2024-02-25T17:37:53Z)
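FedFaiREE is described as distribution-free post-processing. One simple member of that family, sketched below, picks a per-group decision threshold from order statistics of a small calibration set so that positive prediction rates match across groups; this is a hedged illustration of the general idea, not FedFaiREE's actual procedure or its small-sample guarantees.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Choose a per-group threshold from order statistics so each group's
    positive prediction rate on calibration data is roughly target_rate."""
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        k = min(max(int(np.floor((1.0 - target_rate) * len(s))), 0),
                len(s) - 1)
        thresholds[g] = s[k]   # distribution-free: no density model needed
    return thresholds

def predict(scores, groups, thresholds):
    """Apply the group-specific thresholds to new scores."""
    return np.array([int(scores[i] >= thresholds[g])
                     for i, g in enumerate(groups)])
```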
- FAIR-FATE: Fair Federated Learning with Momentum [0.41998444721319217]
We propose FAIR-FATE, a novel FAIR FederATEd Learning algorithm that aims to achieve group fairness while maintaining high utility.
To the best of our knowledge, this is the first machine learning approach that aims to achieve fairness using a fair momentum estimate.
Experimental results on real-world datasets demonstrate that FAIR-FATE outperforms state-of-the-art fair federated learning algorithms.
arXiv Detail & Related papers (2022-09-27T20:33:38Z)
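FAIR-FATE combines a server-side momentum estimate with fairness-aware aggregation. The sketch below captures that combination under assumed names: clients are weighted by a fairness score and the weighted update is folded into a momentum buffer. The weighting rule is an illustrative assumption, not the paper's formula.

```python
import numpy as np

def fair_momentum_round(global_w, client_updates, fairness_scores,
                        momentum, beta=0.9, lr=1.0):
    """client_updates: per-client deltas (w_k - global_w);
    fairness_scores: higher means the update improves group fairness."""
    f = np.maximum(np.asarray(fairness_scores, dtype=float), 0.0)
    alphas = f / f.sum() if f.sum() > 0 else np.full(len(f), 1.0 / len(f))
    fair_update = sum(a * u for a, u in zip(alphas, client_updates))
    momentum = beta * momentum + (1.0 - beta) * fair_update  # fair momentum
    return global_w + lr * momentum, momentum
```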
- Enforcing fairness in private federated learning via the modified method of differential multipliers [1.3381749415517021]
Federated learning with differential privacy, or private federated learning, provides a strategy to train machine learning models while respecting users' privacy.
This paper introduces an algorithm to enforce group fairness in private federated learning, where users' data does not leave their devices.
arXiv Detail & Related papers (2021-09-17T15:28:47Z)
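The modified method of differential multipliers turns a fairness constraint c(w) = 0 into coupled updates: gradient descent on the model parameters against a damped Lagrangian, and gradient ascent on the multiplier. A minimal sketch, with loss_grad, constraint, and constraint_grad as placeholder callables; the paper additionally runs this under differential privacy, which is omitted here.

```python
def mmdm_step(w, lam, loss_grad, constraint, constraint_grad,
              lr=0.01, damping=1.0):
    """One step of the modified method of differential multipliers for
    minimizing a loss subject to the fairness constraint c(w) = 0."""
    c = constraint(w)
    # gradient of L(w) + lam * c(w) + (damping / 2) * c(w)**2
    g = loss_grad(w) + (lam + damping * c) * constraint_grad(w)
    w = w - lr * g      # descent on model parameters
    lam = lam + lr * c  # ascent on the multiplier
    return w, lam
```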
- Blockchain-based Trustworthy Federated Learning Architecture [16.062545221270337]
We present a blockchain-based trustworthy federated learning architecture.
We first design a smart contract-based data-model provenance registry to enable accountability.
We also propose a weighted fair data sampler algorithm to enhance fairness in training data.
arXiv Detail & Related papers (2021-08-16T06:13:58Z)
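The weighted fair data sampler is only named in the summary. One common way to realize such a sampler, sketched below as an assumption rather than the paper's algorithm, is to draw examples with probability inversely proportional to their group's frequency so minority groups are represented in each batch.

```python
import numpy as np

def fair_sample(groups, batch_size, rng=None):
    """Sample indices with probability inversely proportional to group
    frequency; requires batch_size <= len(groups)."""
    rng = rng if rng is not None else np.random.default_rng()
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values.tolist(), counts.tolist()))
    p = np.array([1.0 / freq[g] for g in groups], dtype=float)
    p /= p.sum()
    return rng.choice(len(groups), size=batch_size, replace=False, p=p)
```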
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) with BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
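BALD, the uncertainty-based acquisition function studied in this paper, scores an unlabeled point by the mutual information between its label and the model parameters. A self-contained sketch, assuming the per-pass class probabilities come from T stochastic forward passes such as MC dropout:

```python
import numpy as np

def bald_scores(probs):
    """probs: (T, N, C) softmax outputs from T stochastic passes over
    N unlabeled points with C classes; returns one score per point."""
    mean_p = probs.mean(axis=0)                                   # (N, C)
    entropy_of_mean = -(mean_p * np.log(mean_p + 1e-12)).sum(-1)  # H[y|x]
    mean_entropy = -(probs * np.log(probs + 1e-12)).sum(-1).mean(0)
    return entropy_of_mean - mean_entropy  # mutual info; higher = query
```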
- Fairness and Accuracy in Federated Learning [17.218814060589956]
This paper proposes FedFa, an algorithm to achieve more fairness and accuracy in federated learning.
It introduces an optimization scheme that employs a double momentum gradient, thereby accelerating the convergence rate of the model.
It also proposes a weight selection algorithm that combines the information quantity of training accuracy and training frequency to measure client weights.
arXiv Detail & Related papers (2020-12-18T06:28:37Z)
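The summary says FedFa's aggregation weights combine training accuracy with training frequency. A hedged sketch of that combination follows, with the mixing coefficient and the use of raw normalized values (rather than the paper's information-quantity measure) as illustrative assumptions:

```python
import numpy as np

def fedfa_weights(accuracies, frequencies, gamma=0.5):
    """Blend each client's reported training accuracy with its
    participation frequency into normalized aggregation weights."""
    acc = np.asarray(accuracies, dtype=float)
    freq = np.asarray(frequencies, dtype=float)
    w = gamma * acc / acc.sum() + (1.0 - gamma) * freq / freq.sum()
    return w / w.sum()
```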
- Fairness-aware Agnostic Federated Learning [47.26747955026486]
We develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.
We use kernel reweighing functions to assign a reweighing value to each training sample in both the loss function and the fairness constraint.
The resulting model can be directly applied to local sites, as it guarantees fairness on local data distributions.
arXiv Detail & Related papers (2020-10-10T17:58:20Z)
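AgnosticFair assigns each training sample a reweighing value through kernel functions, used in both the loss and the fairness constraint. Below is a minimal sketch assuming Gaussian kernels with fixed centers and a learnable coefficient vector; the actual functional form in the paper may differ.

```python
import numpy as np

def kernel_weights(X, centers, coef, bandwidth=1.0):
    """Reweighing value w(x) = sum_j coef_j * exp(-||x - c_j||^2 / (2 b^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, m)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)) @ coef         # (n,)

def reweighed_loss(per_sample_loss, X, centers, coef):
    """Apply the (clipped, normalized) reweighing values to a loss vector."""
    w = np.clip(kernel_weights(X, centers, coef), 0.0, None)
    return float((w * per_sample_loss).sum() / max(w.sum(), 1e-12))
```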
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing concern accompanying the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
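The pre-processing framework relies on pseudo labeling to extend the labeled set. A minimal sketch of that step, assuming a classifier with the scikit-learn predict_proba convention and an illustrative confidence cutoff:

```python
import numpy as np

def pseudo_label(model, X_unlabeled, threshold=0.9):
    """Keep only unlabeled points the model labels with high confidence;
    the survivors join the training set with their predicted labels."""
    proba = model.predict_proba(X_unlabeled)
    conf = proba.max(axis=1)
    keep = conf >= threshold
    return X_unlabeled[keep], proba[keep].argmax(axis=1)
```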
- Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method achieves fair semi-supervised learning and reaches a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
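WAFFLe builds each client's network weights from a globally shared dictionary of weight factors, with per-client factor assignments drawn from an Indian Buffet Process. A rough sketch of the factorization idea, where a fixed Bernoulli draw stands in for the IBP prior:

```python
import numpy as np

rng = np.random.default_rng(0)
num_factors, d_in, d_out = 8, 16, 4
dictionary = rng.normal(size=(num_factors, d_in, d_out))  # shared factors

def client_weights(active, dictionary):
    """Compose one client's layer from the shared factors it selected."""
    return np.tensordot(active.astype(float), dictionary, axes=1)

active = rng.random(num_factors) < 0.3   # stand-in for an IBP draw
W = client_weights(active, dictionary)   # this client's (d_in, d_out) layer
```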