Fair Federated Learning via Bounded Group Loss
- URL: http://arxiv.org/abs/2203.10190v3
- Date: Thu, 13 Oct 2022 03:57:27 GMT
- Title: Fair Federated Learning via Bounded Group Loss
- Authors: Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
- Abstract summary: We propose a general framework for provably fair federated learning.
We extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness.
We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution.
- Score: 37.72259706322158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fair prediction across protected groups is an important constraint for many
federated learning applications. However, prior work studying group fair
federated learning lacks formal convergence or fairness guarantees. In this
work we propose a general framework for provably fair federated learning. In
particular, we explore and extend the notion of Bounded Group Loss as a
theoretically-grounded approach for group fairness. Using this setup, we
propose a scalable federated optimization method that optimizes the empirical
risk under a number of group fairness constraints. We provide convergence
guarantees for the method as well as fairness guarantees for the resulting
solution. Empirically, we evaluate our method across common benchmarks from
fair ML and federated learning, showing that it can provide both fairer and
more accurate predictions than baseline approaches.
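To make the constraint concrete: a model h satisfies ζ-Bounded Group Loss (BGL) if the expected loss on every protected group a is at most ζ, i.e., E[ℓ(h(x), y) | A = a] ≤ ζ. Below is a minimal sketch of how such a constrained empirical risk minimization could be run federatedly, pairing FedAvg-style local primal updates with server-side dual ascent on the group constraints. It is an illustration only, not the paper's exact algorithm: the linear model, squared loss, synthetic data, and hyperparameters (zeta, dual_lr, lr, steps) are all assumptions.

```python
# Sketch: federated ERM under Bounded Group Loss (BGL) constraints.
# Illustrative Lagrangian/dual-ascent variant of FedAvg; the model, data,
# and hyperparameters are assumptions, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)

def group_losses(w, X, y, groups, num_groups):
    """Per-group mean squared losses on one client's data
    (0.0 for groups absent from this client)."""
    losses = (X @ w - y) ** 2
    return np.array([
        losses[groups == a].mean() if np.any(groups == a) else 0.0
        for a in range(num_groups)
    ])

def local_update(w, lam, X, y, groups, num_groups, lr=0.01, steps=5):
    """A few SGD steps on the client's Lagrangian:
    empirical risk + sum_a lam[a] * (group-a loss)."""
    w = w.copy()
    for _ in range(steps):
        preds = X @ w
        grad = 2 * X.T @ (preds - y) / len(y)            # ERM gradient
        for a in range(num_groups):                      # BGL constraint terms
            mask = groups == a
            if mask.any():
                grad += lam[a] * 2 * X[mask].T @ (preds[mask] - y[mask]) / mask.sum()
        w -= lr * grad
    return w

# Synthetic federated data: 4 clients, 2 protected groups, linear targets.
num_clients, num_groups, d, zeta = 4, 2, 5, 0.5
clients = []
for _ in range(num_clients):
    X = rng.normal(size=(100, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=100)
    g = rng.integers(0, num_groups, size=100)
    clients.append((X, y, g))

w, lam, dual_lr = np.zeros(d), np.zeros(num_groups), 0.05
for _ in range(50):
    # Server broadcasts (w, lam); clients run local primal updates.
    updates = [local_update(w, lam, X, y, g, num_groups) for X, y, g in clients]
    w = np.mean(updates, axis=0)                         # FedAvg aggregation
    # Dual ascent on the constraints E[loss | group a] <= zeta.
    avg_group_loss = np.mean(
        [group_losses(w, X, y, g, num_groups) for X, y, g in clients], axis=0)
    lam = np.maximum(0.0, lam + dual_lr * (avg_group_loss - zeta))

print("per-group losses:", avg_group_loss, "dual variables:", lam)
```

The dual variables lam[a] grow while group a's average loss exceeds zeta and shrink back toward zero once the constraint is met, so local updates weight constraint-violating groups more heavily; the paper's convergence and fairness guarantees concern its own method, not this simplified sketch.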
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Distribution-Free Fair Federated Learning with Small Samples [54.63321245634712]
FedFaiREE is a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples.
We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results further provide robust empirical validation for our proposed method.
arXiv Detail & Related papers (2024-02-25T17:37:53Z)
- Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z)
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms existing federated learning approaches by large margins on standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods for ensuring different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- FaiREE: Fair Classification with Finite-Sample and Distribution-Free Guarantee [40.10641140860374]
FaiREE is a fair classification algorithm that can satisfy group fairness constraints with finite-sample and distribution-free theoretical guarantees.
FaiREE is shown to have favorable performance over state-of-the-art algorithms.
arXiv Detail & Related papers (2022-11-28T05:16:20Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Minimax Demographic Group Fairness in Federated Learning [23.1988909029387]
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
We study minimax group fairness in federated learning scenarios where different participating entities may only have access to a subset of the population groups during the training phase.
We experimentally compare the proposed approach against other state-of-the-art methods in terms of group fairness in various federated learning setups.
arXiv Detail & Related papers (2022-01-20T17:13:54Z)
- Federating for Learning Group Fair Models [19.99325961328706]
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
We study minimax group fairness in paradigms where different participating entities may only have access to a subset of the population groups during the training phase.
arXiv Detail & Related papers (2021-10-05T12:42:43Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)