Unified Group Fairness on Federated Learning
- URL: http://arxiv.org/abs/2111.04986v1
- Date: Tue, 9 Nov 2021 08:21:38 GMT
- Title: Unified Group Fairness on Federated Learning
- Authors: Fengda Zhang, Kun Kuang, Yuxuan Liu, Chao Wu, Fei Wu, Jiaxun Lu,
Yunfeng Shao, Jun Xiao
- Abstract summary: Federated learning (FL) has emerged as an important machine learning paradigm where a global model is trained based on private data from distributed clients.
Recent research focuses on achieving fairness among clients, but ignores fairness towards different groups formed by sensitive attribute(s) (e.g., gender and/or race).
We propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates the distribution shift across groups with theoretical analysis of convergence rate.
- Score: 22.143427873780404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has emerged as an important machine learning paradigm
where a global model is trained based on the private data from distributed
clients. However, most existing FL algorithms cannot guarantee performance
fairness towards different clients or different groups of samples under
distribution shift. Recent research focuses on achieving fairness among
clients but ignores fairness towards different groups formed by sensitive
attribute(s) (e.g., gender and/or race), which is important
and practical in real applications. To bridge this gap, we formulate the goal
of unified group fairness on FL, which is to learn a fair global model with
similar performance on different groups. To achieve unified group fairness
for arbitrary sensitive attribute(s), we propose a novel FL algorithm, named
Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates the
distribution shift across groups and comes with a theoretical analysis of its
convergence rate.
Specifically, we treat the performance of the federated global model at each
group as an objective and employ the distributionally robust techniques to
maximize the performance of the worst-performing group over an uncertainty set
by group reweighting. We validate the advantages of G-DRFA under various
distribution shift settings in experiments, and the results show that the
G-DRFA algorithm outperforms existing fair federated learning algorithms on
unified group fairness.
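The abstract's core idea, maximizing the worst-performing group's performance by reweighting groups, is the standard group-DRO recipe: raise the weight of high-loss groups via an exponentiated-gradient step on the probability simplex. The sketch below is an illustration of that general technique, not the authors' G-DRFA implementation; the function name `g_drfa_step`, the step size, and the example losses are all hypothetical, and G-DRFA's actual federated update and uncertainty set are defined in the paper.

```python
import numpy as np

def g_drfa_step(group_losses, group_weights, step_size=0.1):
    """One group-reweighting round in the spirit of group DRO:
    groups with higher loss receive exponentially larger weight,
    so the aggregate objective emphasizes the worst-performing group."""
    # Exponentiated-gradient (mirror ascent) update on the simplex.
    w = group_weights * np.exp(step_size * group_losses)
    w /= w.sum()  # renormalize back onto the probability simplex
    # Robust objective: weighted average of per-group losses.
    robust_loss = float(np.dot(w, group_losses))
    return w, robust_loss

# Hypothetical per-group losses for two sensitive groups (e.g., by gender).
losses = np.array([0.9, 0.3])
w = np.full(2, 0.5)  # start from uniform group weights
for _ in range(5):
    w, obj = g_drfa_step(losses, w)
# After a few rounds, the worse-off group dominates the weights,
# pushing training toward improving its performance.
```

Because the weighted loss upper-bounds the uniform average and approaches the max over groups as the weights concentrate, minimizing it drives the per-group performances toward similarity, which is the unified-group-fairness goal stated above.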
Related papers
- Enhancing Group Fairness in Federated Learning through Personalization [15.367801388932145]
We show that personalization can lead to improved (local) fairness as an unintended benefit.
We propose two new fairness-aware clustering algorithms, Fair-FCA and Fair-FL+HC.
arXiv Detail & Related papers (2024-07-27T19:55:18Z)
- Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z)
- Dynamic Fair Federated Learning Based on Reinforcement Learning [19.033986978896074]
Federated learning enables collaborative training and optimization of global models among a group of devices without sharing local data samples.
We propose a dynamic q-fairness federated learning algorithm with reinforcement learning, called DQFFL.
Our DQFFL outperforms the state-of-the-art methods in terms of overall performance, fairness and convergence speed.
arXiv Detail & Related papers (2023-11-02T03:05:40Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR)
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Re-weighting Based Group Fairness Regularization via Classwise Robust Optimization [30.089819400033985]
We propose a principled method, dubbed FairDRO, which unifies the two learning schemes by incorporating a well-justified group fairness metric into the training objective.
We develop an iterative optimization algorithm that minimizes the resulting objective by automatically producing the correct re-weights for each group.
Our experiments show that FairDRO is scalable and easily adaptable to diverse applications.
arXiv Detail & Related papers (2023-03-01T12:00:37Z)
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables widely exists in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and biasness of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data heterogeneous-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z)
- Federating for Learning Group Fair Models [19.99325961328706]
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
We study minmax group fairness in paradigms where different participating entities may only have access to a subset of the population groups during the training phase.
arXiv Detail & Related papers (2021-10-05T12:42:43Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.