Minimax Demographic Group Fairness in Federated Learning
- URL: http://arxiv.org/abs/2201.08304v1
- Date: Thu, 20 Jan 2022 17:13:54 GMT
- Title: Minimax Demographic Group Fairness in Federated Learning
- Authors: Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro,
Miguel Rodrigues
- Abstract summary: Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
We study minimax group fairness in federated learning scenarios where different participating entities may only have access to a subset of the population groups during the training phase.
We experimentally compare the proposed approach against other state-of-the-art methods in terms of group fairness in various federated learning setups.
- Score: 23.1988909029387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is an increasingly popular paradigm that enables a large
number of entities to collaboratively learn better models. In this work, we
study minimax group fairness in federated learning scenarios where different
participating entities may only have access to a subset of the population
groups during the training phase. We formally analyze how our proposed group
fairness objective differs from existing federated learning fairness criteria
that impose similar performance across participants instead of demographic
groups. We provide an optimization algorithm -- FedMinMax -- for solving the
proposed problem that provably enjoys the performance guarantees of centralized
learning algorithms. We experimentally compare the proposed approach against
other state-of-the-art methods in terms of group fairness in various federated
learning setups, showing that our approach exhibits competitive or superior
performance.
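For reference, the minimax demographic group fairness objective studied in the paper is typically written as a worst-case group risk; a standard formulation (the paper's exact notation may differ) is:

```latex
\min_{\theta} \; \max_{\lambda \in \Delta_{|\mathcal{A}|-1}} \; \sum_{a \in \mathcal{A}} \lambda_a \, r_a(\theta),
\qquad
r_a(\theta) = \mathbb{E}\!\left[\ell(h_\theta(X), Y) \mid A = a\right],
```

where \mathcal{A} is the set of demographic groups, r_a(\theta) is the group-conditional risk of the model h_\theta, and \lambda is a weight vector on the simplex that adversarially up-weights the worst-off group.

The sketch below shows one common way to optimize such an objective in a federated setup: gradient descent on the model alternated with multiplicative-weights ascent on the group weights. It is a minimal illustration, not the authors' exact FedMinMax protocol; the client interface, step sizes, and aggregation rule are assumptions.

```python
import numpy as np

def fedminmax_sketch(clients, model, rounds=100, eta_model=0.1, eta_lambda=0.5):
    """Illustrative minimax group-fairness loop (not the paper's exact algorithm).

    model:   np.ndarray parameter vector.
    clients: objects exposing
        .group_risks(model)     -> dict {group: empirical risk}
        .group_gradients(model) -> dict {group: gradient w.r.t. model}
    Each client may only observe a subset of the global groups.
    """
    groups = sorted({g for c in clients for g in c.group_risks(model)})
    lam = {g: 1.0 / len(groups) for g in groups}  # weights on the simplex

    for _ in range(rounds):
        # Aggregate group-conditional statistics across clients, per group.
        risks = {g: [] for g in groups}
        grads = {g: [] for g in groups}
        for c in clients:
            for g, r in c.group_risks(model).items():
                risks[g].append(r)
            for g, dw in c.group_gradients(model).items():
                grads[g].append(dw)
        avg_risk = {g: float(np.mean(v)) for g, v in risks.items() if v}
        avg_grad = {g: np.mean(v, axis=0) for g, v in grads.items() if v}

        # Descent on the model under the current adversarial group weighting.
        model = model - eta_model * sum(lam[g] * avg_grad[g] for g in avg_grad)

        # Multiplicative-weights ascent on lambda: up-weight worst-off groups.
        for g in avg_risk:
            lam[g] *= np.exp(eta_lambda * avg_risk[g])
        total = sum(lam.values())
        lam = {g: v / total for g, v in lam.items()}
    return model, lam
```

Because statistics are aggregated per demographic group rather than per client, a client that holds only some of the groups still contributes exactly to those groups' risks, which is the scenario the abstract highlights.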
Related papers
- Multi-Agent Reinforcement Learning from Human Feedback: Data Coverage and Algorithmic Techniques [65.55451717632317]
We study Multi-Agent Reinforcement Learning from Human Feedback (MARLHF), exploring both theoretical foundations and empirical validations.
We define the task as identifying a Nash equilibrium from a preference-only offline dataset in general-sum games.
Our findings underscore the multifaceted approach required for MARLHF, paving the way for effective preference-based multi-agent systems.
arXiv Detail & Related papers (2024-09-01T13:14:41Z)
- Outlier-Robust Group Inference via Gradient Space Clustering [50.87474101594732]
Existing methods can improve the worst-group performance, but they require group annotations, which are often expensive and sometimes infeasible to obtain.
We address the problem of learning group annotations in the presence of outliers by clustering the data in the space of gradients of the model parameters.
We show that data in the gradient space has a simpler structure while preserving information about minority groups and outliers, making it suitable for standard clustering methods like DBSCAN.
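A minimal sketch of the gradient-space clustering step described above, assuming per-sample gradients are already materialized as a matrix; the row normalization and DBSCAN hyperparameters are illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def infer_group_labels(per_sample_grads, eps=0.5, min_samples=10):
    """Cluster per-sample loss gradients to recover group pseudo-labels.

    per_sample_grads: (n_samples, n_params) array, one gradient row per
    training example. Returns DBSCAN labels; -1 marks outliers/noise.
    """
    # Normalizing rows makes clustering depend on gradient direction,
    # which can separate minority groups from the majority (assumption).
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    directions = per_sample_grads / np.maximum(norms, 1e-12)
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(directions)
```

The recovered labels can then stand in for the missing group annotations in a worst-group training method.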
arXiv Detail & Related papers (2022-10-13T06:04:43Z)
- Fair Federated Learning via Bounded Group Loss [37.72259706322158]
We propose a general framework for provably fair federated learning.
We extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness.
We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution.
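The Bounded Group Loss criterion admits a compact constrained form; a standard way to write it (notation assumed, not taken from the paper) is:

```latex
\min_{\theta} \; \mathbb{E}\!\left[\ell(h_\theta(X), Y)\right]
\quad \text{s.t.} \quad
\mathbb{E}\!\left[\ell(h_\theta(X), Y) \mid A = a\right] \le \gamma
\quad \text{for every group } a,
```

i.e., minimize the average loss while every group's conditional loss stays below a tolerance \gamma.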
arXiv Detail & Related papers (2022-03-18T23:11:54Z)
- Unified Group Fairness on Federated Learning [22.143427873780404]
Federated learning (FL) has emerged as an important machine learning paradigm where a global model is trained based on private data from distributed clients.
Recent research focuses on achieving fairness among clients, but ignores fairness towards the different groups formed by sensitive attribute(s) (e.g., gender and/or race).
We propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates the distribution shift across groups with theoretical analysis of convergence rate.
arXiv Detail & Related papers (2021-11-09T08:21:38Z)
- Focus on the Common Good: Group Distributional Robustness Follows [47.62596240492509]
This paper proposes a new and simple algorithm that explicitly encourages learning of features that are shared across various groups.
While Group-DRO focuses on the groups with the worst regularized loss, focusing instead on groups that enable better performance even on other groups can lead to learning of shared/common features.
arXiv Detail & Related papers (2021-10-06T09:47:41Z)
- Federating for Learning Group Fair Models [19.99325961328706]
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
We study minimax group fairness in paradigms where different participating entities may only have access to a subset of the population groups during the training phase.
arXiv Detail & Related papers (2021-10-05T12:42:43Z)
- FairFed: Enabling Group Fairness in Federated Learning [22.913999279079878]
Federated learning has been viewed as a promising solution for learning machine learning models among multiple parties.
We propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method.
Our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions.
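A minimal sketch in the spirit of fairness-aware aggregation, assuming each client reports a scalar local fairness gap (e.g., a demographic-parity difference); the exact reweighting rule in FairFed differs, and `beta` is an illustrative knob:

```python
import numpy as np

def fairness_aware_aggregate(client_updates, client_sizes, local_gaps, beta=1.0):
    """Weighted FedAvg whose weights shrink for clients whose local fairness
    metric deviates most from the average (illustrative rule, not FairFed's).

    client_updates: list of parameter vectors (np.ndarray)
    client_sizes:   list of local dataset sizes
    local_gaps:     list of local fairness-metric values (e.g., DP gaps)
    """
    sizes = np.asarray(client_sizes, dtype=float)
    gaps = np.asarray(local_gaps, dtype=float)
    deviation = np.abs(gaps - gaps.mean())
    weights = sizes * np.exp(-beta * deviation)  # penalize large deviations
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))
```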
arXiv Detail & Related papers (2021-10-02T17:55:20Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Practical One-Shot Federated Learning for Cross-Silo Setting [114.76232507580067]
One-shot federated learning is a promising approach to making federated learning applicable in the cross-silo setting.
We propose a practical one-shot federated learning algorithm named FedKT.
By utilizing the knowledge transfer technique, FedKT can be applied to any classification models and can flexibly achieve differential privacy guarantees.
arXiv Detail & Related papers (2020-10-02T14:09:10Z)
- Collaborative Fairness in Federated Learning [24.7378023761443]
We propose a novel Collaborative Fair Federated Learning (CFFL) framework for deep learning.
CFFL requires participants to converge to different models, thus achieving fairness without compromising predictive performance.
Experiments on benchmark datasets demonstrate that CFFL achieves high fairness and delivers comparable accuracy to the Distributed framework.
arXiv Detail & Related papers (2020-08-27T14:39:09Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
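A minimal sketch of model-agnostic post-processing for bipartite ranking, assuming the adjustment is a per-group additive score shift; the paper's actual adjustment and fairness criterion may differ:

```python
import numpy as np

def shift_scores_per_group(scores, groups, offsets):
    """Apply an additive offset to each protected group's scores.

    scores:  (n,) raw scores from any trained ranker
    groups:  (n,) group id per individual
    offsets: dict {group id: shift}, tuned post hoc (e.g., by grid search)
             to trade ranking fairness against utility such as AUC.
    """
    adjusted = np.asarray(scores, dtype=float).copy()
    group_arr = np.asarray(groups)
    for g, delta in offsets.items():
        adjusted[group_arr == g] += delta
    return adjusted
```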
arXiv Detail & Related papers (2020-06-15T10:08:39Z)