Federated Fairness without Access to Sensitive Groups
- URL: http://arxiv.org/abs/2402.14929v1
- Date: Thu, 22 Feb 2024 19:24:59 GMT
- Title: Federated Fairness without Access to Sensitive Groups
- Authors: Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro,
Miguel Rodrigues
- Abstract summary: Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
- Score: 12.888927461513472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current approaches to group fairness in federated learning assume the
existence of predefined and labeled sensitive groups during training. However,
due to factors ranging from emerging regulations to the dynamic and
location-dependent nature of protected groups, this assumption may be
unsuitable in many real-world scenarios. In this work, we propose a new
approach to guarantee
group fairness that does not rely on any predefined definition of sensitive
groups or additional labels. Our objective allows the federation to learn a
Pareto-efficient global model that ensures worst-case group fairness and
enables, via a single hyper-parameter, trade-offs between fairness and utility,
subject only to a group size constraint. This implies that any sufficiently
large subset of the population is guaranteed to receive at least a minimum
level of utility performance from the model. The proposed objective encompasses
existing approaches as special cases, such as empirical risk minimization and
subgroup robustness objectives from centralized machine learning. We provide an
algorithm to solve this problem in federation that enjoys convergence and
excess risk guarantees. Our empirical results indicate that the proposed
approach can effectively improve the performance of the worst-performing group
that may be present without unnecessarily hurting average performance, exhibits
superior or
comparable performance to relevant baselines, and achieves a large set of
solutions with different fairness-utility trade-offs.
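The abstract does not state the objective explicitly, but its "group size constraint" and the guarantee that any sufficiently large subset of the population receives a minimum level of utility are consistent with a CVaR-style worst-case loss over every subset containing at least a fraction alpha of the samples. The sketch below illustrates that reading only; it is an assumption, not the authors' algorithm, and the names `worst_group_loss` and `alpha` are hypothetical.

```python
# Minimal sketch (assumed CVaR-style reading of the abstract's objective,
# not the paper's exact method): average loss over the worst alpha-fraction
# of samples, so every subgroup of relative size >= alpha has bounded loss.
import math
import torch

def worst_group_loss(per_sample_losses: torch.Tensor, alpha: float) -> torch.Tensor:
    """Mean loss of the ceil(alpha * n) worst-performing samples."""
    n = per_sample_losses.numel()
    k = max(1, math.ceil(alpha * n))
    worst_k, _ = torch.topk(per_sample_losses, k)  # k largest losses
    return worst_k.mean()

# Illustrative per-sample losses from one client's minibatch.
losses = torch.tensor([0.10, 0.20, 2.50, 0.30, 1.90, 0.15])
print(worst_group_loss(losses, alpha=0.5))  # mean of the 3 largest losses
```

Setting alpha = 1 averages over all samples and recovers empirical risk minimization, while small alpha approaches subgroup-robust objectives, which matches the abstract's claim that both are special cases of the proposed objective.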
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Distribution-Free Fair Federated Learning with Small Samples [54.63321245634712]
FedFaiREE is a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples.
We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results offer robust empirical validation of the proposed method.
arXiv Detail & Related papers (2024-02-25T17:37:53Z)
- Mitigating Group Bias in Federated Learning for Heterogeneous Devices [1.181206257787103]
Federated Learning is emerging as a privacy-preserving model training approach in distributed edge applications.
Our work proposes a group-fair FL framework that minimizes group bias while preserving privacy and adding no resource-utilization overhead.
arXiv Detail & Related papers (2023-09-13T16:53:48Z)
- Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization [61.39201891894024]
Group distributionally robust optimization (group DRO) can minimize the worst-case loss over pre-defined groups.
We reformulate the group DRO framework by proposing Q-Diversity.
Characterized by an interactive training mode, Q-Diversity relaxes group identification from annotation to direct parameterization (a minimal group-DRO sketch for contrast appears after this list).
arXiv Detail & Related papers (2023-05-20T07:02:27Z)
- Distributionally Robust Optimization with Probabilistic Group [24.22720998340643]
We propose a novel framework PG-DRO for distributionally robust optimization.
Key to our framework is soft group membership instead of hard group annotations.
Our framework accommodates samples with group membership ambiguity, offering stronger flexibility and generality than the prior art.
arXiv Detail & Related papers (2023-03-10T09:31:44Z)
- Fair Federated Learning via Bounded Group Loss [37.72259706322158]
We propose a general framework for provably fair federated learning.
We extend the notion of Bounded Group Loss as a theoretically grounded approach to group fairness.
We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution.
arXiv Detail & Related papers (2022-03-18T23:11:54Z)
- Unified Group Fairness on Federated Learning [22.143427873780404]
Federated learning (FL) has emerged as an important machine learning paradigm where a global model is trained based on private data from distributed clients.
Recent research focuses on achieving fairness among clients but ignores fairness towards the different groups formed by sensitive attribute(s) (e.g., gender and/or race).
We propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates the distribution shift across groups with a theoretical analysis of its convergence rate.
arXiv Detail & Related papers (2021-11-09T08:21:38Z)
- Focus on the Common Good: Group Distributional Robustness Follows [47.62596240492509]
This paper proposes a new and simple algorithm that explicitly encourages learning of features that are shared across various groups.
While Group-DRO focuses on groups with the worst regularized loss, focusing instead on groups that enable better performance even on other groups could lead to learning of shared/common features.
arXiv Detail & Related papers (2021-10-06T09:47:41Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
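For contrast with the group-blind sketch above, several of the entries in this list (Q-Diversity, PG-DRO, G-DRFA) build on group DRO, which minimizes the worst-case loss over predefined, labeled groups. A minimal sketch of that baseline objective follows; `group_dro_loss` is an illustrative name, not any cited paper's API.

```python
# Minimal group-DRO sketch (illustrative, not a cited paper's implementation):
# worst-case mean loss over predefined groups, which requires group labels.
import torch

def group_dro_loss(per_sample_losses: torch.Tensor,
                   group_ids: torch.Tensor,
                   num_groups: int) -> torch.Tensor:
    """Maximum of the per-group mean losses."""
    group_means = []
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():  # skip groups absent from this batch
            group_means.append(per_sample_losses[mask].mean())
    return torch.stack(group_means).max()

losses = torch.tensor([0.10, 0.20, 2.50, 0.30, 1.90, 0.15])
groups = torch.tensor([0, 0, 1, 1, 2, 2])
print(group_dro_loss(losses, groups, num_groups=3))  # worst group's mean loss
```

The key difference is the required `group_ids` argument: the main paper's objective avoids exactly this dependence on sensitive-group labels.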
This list is automatically generated from the titles and abstracts of the papers on this site.