Fairness-aware Agnostic Federated Learning
- URL: http://arxiv.org/abs/2010.05057v1
- Date: Sat, 10 Oct 2020 17:58:20 GMT
- Title: Fairness-aware Agnostic Federated Learning
- Authors: Wei Du, Depeng Xu, Xintao Wu and Hanghang Tong
- Abstract summary: We develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.
We use kernel reweighing functions to assign a reweighing value to each training sample in both the loss function and the fairness constraint.
The built model can be directly applied to local sites, as it guarantees fairness on local data distributions.
- Score: 47.26747955026486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is an emerging framework that builds centralized machine
learning models with training data distributed across multiple devices. Most
previous work on federated learning focuses on privacy protection and
communication cost reduction. However, how to achieve fairness in federated
learning is under-explored and challenging, especially when the testing data
distribution differs from the training distribution or is even unknown.
Introducing simple fairness constraints on the centralized model cannot achieve
model fairness on unknown testing data. In this paper, we develop a
fairness-aware agnostic federated learning framework (AgnosticFair) to deal
with the challenge of an unknown testing distribution. We use kernel reweighing
functions to assign a reweighing value to each training sample in both the loss
function and the fairness constraint. As a result, the centralized model built by
AgnosticFair achieves high accuracy and a fairness guarantee on unknown
testing data. Moreover, the built model can be directly applied to local sites,
as it guarantees fairness on local data distributions. To the best of our knowledge,
this is the first work to achieve fairness in federated learning. Experimental
results on two real datasets demonstrate its effectiveness in terms of both
utility and fairness under data shift scenarios.
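To make the core mechanism concrete, here is a minimal sketch of a kernel-reweighed objective: Gaussian kernel basis functions produce a weight for every training sample, and the same weights enter both the loss and a demographic-parity-style fairness term. The basis choice, the coefficient vector `alpha`, and the penalty form are illustrative assumptions; the paper formulates fairness as a constraint and optimizes the weighting adversarially rather than as a fixed penalty.

```python
import numpy as np

def kernel_weights(x, centers, bandwidth, alpha):
    """Per-sample reweighing values: a linear combination of Gaussian
    kernel basis functions (illustrative choice of basis)."""
    sq_dist = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    basis = np.exp(-sq_dist / (2.0 * bandwidth ** 2))   # (n_samples, n_centers)
    return basis @ alpha                                # (n_samples,)

def agnostic_fair_objective(theta, x, y, s, centers, bandwidth, alpha, lam):
    """Reweighted logistic loss plus a reweighted demographic-parity-style
    fairness penalty; the same kernel weights appear in both terms."""
    w = kernel_weights(x, centers, bandwidth, alpha)
    p = 1.0 / (1.0 + np.exp(-(x @ theta)))              # predicted P(y=1 | x)
    nll = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1.0 - p + 1e-12))
    loss = np.mean(w * nll)
    # weighted gap in positive-prediction rates between groups s == 1 and s == 0
    gap = np.mean((w * p)[s == 1]) - np.mean((w * p)[s == 0])
    return loss + lam * abs(gap)
```

Because the weights vary with the sample (through the kernels), a worst-case choice of `alpha` can mimic an unknown shift of the testing distribution, which is, broadly, what lets accuracy and fairness carry over to unseen distributions.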
Related papers
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used with small mini-batches.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that does not require knowledge of the causal graph.
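As a rough illustration of the distributionally robust idea only (not Dr. FERMI's actual stochastic formulation), one classic DRO objective averages the loss over the worst-off fraction of samples (CVaR); the level `alpha` below is an arbitrary choice:

```python
import numpy as np

def cvar_dro_loss(losses, alpha=0.2):
    """Worst-case (CVaR-style) reweighting: average the per-sample losses
    over the worst alpha-fraction of samples. A generic stand-in for a
    distributionally robust objective, not the paper's method."""
    k = max(1, int(np.ceil(alpha * len(losses))))
    return np.sort(losses)[-k:].mean()
```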
arXiv Detail & Related papers (2023-09-20T23:25:28Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair sample representations from the decentralized feature fields in a privacy-preserving way.
To protect user privacy, we propose a contrastive adversarial learning method that removes private information from the unified representation on the server.
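A generic sketch of adversarially removing a private attribute from a learned representation, using a gradient-reversal layer; FairVFL's actual method is contrastive and operates on decentralized feature fields, so everything below (module sizes, the reversal trick itself) is an illustrative assumption:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward
    pass, so the encoder learns to defeat the attribute discriminator."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

encoder = nn.Linear(16, 8)        # toy "unified representation" encoder
discriminator = nn.Linear(8, 2)   # tries to predict the private attribute
opt = torch.optim.SGD(list(encoder.parameters()) +
                      list(discriminator.parameters()), lr=0.1)

x = torch.randn(32, 16)
private_attr = torch.randint(0, 2, (32,))

opt.zero_grad()
z = encoder(x)
logits = discriminator(GradReverse.apply(z))
loss = nn.functional.cross_entropy(logits, private_attr)
loss.backward()   # discriminator improves; encoder unlearns the attribute
opt.step()
```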
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- Fair and efficient contribution valuation for vertical federated learning [49.50442779626123]
Federated learning is a popular technology for training machine learning models on distributed data sources without sharing data.
The Shapley value (SV) is a provably fair contribution valuation metric originating from cooperative game theory.
We propose a contribution valuation metric called vertical federated Shapley value (VerFedSV) based on SV.
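The Shapley value itself is standard game theory; a brute-force computation over all coalitions is shown below (exponential in the number of parties; making this tractable in the vertical federated setting is precisely VerFedSV's contribution, which this sketch does not reproduce). The toy utility table is invented for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley value of each player under a coalition utility
    function; feasible only for small player sets."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            for coalition in combinations(others, r):
                s = len(coalition)
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                marginal = utility(set(coalition) | {p}) - utility(set(coalition))
                phi[p] += weight * marginal
    return phi

# toy utility: validation accuracy achieved by each set of feature-owning parties
acc = {frozenset(): 0.5, frozenset({"A"}): 0.7,
       frozenset({"B"}): 0.6, frozenset({"A", "B"}): 0.9}
print(shapley_values(["A", "B"], lambda c: acc[frozenset(c)]))
# {'A': 0.25, 'B': 0.15} -- the values sum to the grand-coalition gain of 0.4
```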
arXiv Detail & Related papers (2022-01-07T19:57:15Z)
- Improving Fairness via Federated Learning [14.231231094281362]
We propose a new theoretical framework, with which we analyze the value of federated learning in improving fairness.
We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data.
To resolve this, we propose FedFB, a private fair learning algorithm on decentralized data with a modified FedAvg protocol.
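For context, the FedAvg aggregation step that FedFB modifies is a data-size-weighted average of client parameters; a minimal sketch of that baseline step (what FedFB changes in this protocol is not shown here):

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Server-side FedAvg step: average client model parameters,
    weighted by each client's number of training samples."""
    sizes = np.asarray(client_sizes, dtype=float)
    coef = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coef, client_params))

# toy round: three clients holding 4-parameter linear models
clients = [np.array([1.0, 0.0, 2.0, 1.0]),
           np.array([0.5, 1.0, 1.5, 0.0]),
           np.array([0.0, 2.0, 1.0, 1.0])]
global_params = fedavg_aggregate(clients, client_sizes=[100, 50, 50])
```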
arXiv Detail & Related papers (2021-10-29T05:25:44Z)
- Enforcing fairness in private federated learning via the modified method of differential multipliers [1.3381749415517021]
Federated learning with differential privacy, or private federated learning, provides a strategy to train machine learning models while respecting users' privacy.
This paper introduces an algorithm to enforce group fairness in private federated learning, where users' data does not leave their devices.
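The modified method of differential multipliers goes back to Platt and Barr (1988): the model parameters descend on a Lagrangian while the multiplier ascends on the constraint violation, with a damping term for stability. A minimal single-constraint sketch follows; the callables and the fairness-gap constraint are placeholders, and the paper's private, federated version adds considerably more machinery:

```python
import numpy as np

def mdmm_step(theta, lam, grad_loss, fairness_gap, grad_gap,
              lr=0.01, damping=1.0):
    """One step of the modified method of differential multipliers for
    the constraint fairness_gap(theta) = 0: gradient descent on the
    damped Lagrangian in theta, gradient ascent in the multiplier."""
    g = fairness_gap(theta)
    theta = theta - lr * (grad_loss(theta)
                          + (lam + damping * g) * grad_gap(theta))
    lam = lam + lr * g
    return theta, lam

# usage: repeat theta, lam = mdmm_step(theta, lam, grad_loss, gap, grad_gap)
```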
arXiv Detail & Related papers (2021-09-17T15:28:47Z)
- Blockchain-based Trustworthy Federated Learning Architecture [16.062545221270337]
We present a blockchain-based trustworthy federated learning architecture.
We first design a smart contract-based data-model provenance registry to enable accountability.
We also propose a weighted fair data sampler algorithm to enhance fairness in training data.
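One common way to realize such a fairness-oriented sampler, offered here only as an assumption about the general technique rather than the paper's exact algorithm, is to draw training examples with probability inversely proportional to the frequency of their group:

```python
import numpy as np

def fair_sample_indices(group_labels, n_draws, rng=None):
    """Draw sample indices with probability inversely proportional to
    group frequency, so under-represented groups are seen more often."""
    if rng is None:
        rng = np.random.default_rng(0)
    groups, counts = np.unique(group_labels, return_counts=True)
    freq = dict(zip(groups, counts))
    p = np.array([1.0 / freq[g] for g in group_labels], dtype=float)
    p /= p.sum()
    return rng.choice(len(group_labels), size=n_draws, replace=True, p=p)

idx = fair_sample_indices(np.array([0, 0, 0, 0, 1]), n_draws=10)
```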
arXiv Detail & Related papers (2021-08-16T06:13:58Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing concern with the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
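The pseudo-labeling step can be sketched generically: fit a classifier on the labeled portion, then keep only the unlabeled points it predicts with high confidence (the classifier and the threshold below are illustrative choices, and the paper's fairness-specific pre-processing is not reproduced):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(x_labeled, y_labeled, x_unlabeled, threshold=0.9):
    """Fit on labeled data, then return the unlabeled points whose top
    predicted class probability clears the confidence threshold,
    together with their predicted (pseudo) labels."""
    clf = LogisticRegression().fit(x_labeled, y_labeled)
    proba = clf.predict_proba(x_unlabeled)
    confident = proba.max(axis=1) >= threshold
    return x_unlabeled[confident], proba[confident].argmax(axis=1)
```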
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method achieves fair semi-supervised learning and reaches a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- FR-Train: A Mutual Information-Based Approach to Fair and Robust Training [33.385118640843416]
We propose FR-Train, which holistically performs fair and robust model training.
In our experiments, FR-Train shows almost no decrease in fairness and accuracy in the presence of data poisoning.
arXiv Detail & Related papers (2020-02-24T13:37:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.