Fairness in Federated Learning via Core-Stability
- URL: http://arxiv.org/abs/2211.02091v1
- Date: Thu, 3 Nov 2022 18:41:11 GMT
- Title: Fairness in Federated Learning via Core-Stability
- Authors: Bhaskar Ray Chaudhury, Linyi Li, Mintong Kang, Bo Li, Ruta Mehta
- Abstract summary: Federated learning provides an effective paradigm to jointly optimize a model benefiting from rich distributed data.
It is intuitively "unfair" for agents with high-quality data to sacrifice their performance because of other agents with low-quality data.
We propose CoreFed, an efficient federated learning protocol that optimizes a core-stable predictor.
- Score: 16.340526776021143
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning provides an effective paradigm to jointly optimize a
model benefiting from rich distributed data while protecting data privacy.
Nonetheless, the heterogeneous nature of distributed data makes it challenging
to define and ensure fairness among local agents. For instance, it is
intuitively "unfair" for agents with high-quality data to sacrifice their
performance because of other agents with low-quality data. Currently popular
egalitarian and weighted equity-based fairness measures suffer from the
aforementioned pitfall. In this work, we aim to formally represent this problem
and address these fairness issues using concepts from co-operative game theory
and social choice theory. We model the task of learning a shared predictor in
the federated setting as a fair public decision making problem, and then define
the notion of core-stable fairness: Given $N$ agents, there is no subset of
agents $S$ that can benefit significantly by forming a coalition among
themselves based on their utilities $U_N$ and $U_S$ (i.e., $\frac{|S|}{N} U_S
\geq U_N$). Core-stable predictors are robust to low quality local data from
some agents, and additionally they satisfy Proportionality and
Pareto-optimality, two well sought-after fairness and efficiency notions within
social choice. We then propose CoreFed, an efficient federated learning
protocol that optimizes a core-stable predictor. CoreFed determines a core-stable predictor
when the loss functions of the agents are convex. CoreFed also determines
approximate core-stable predictors when the loss functions are not convex, like
smooth neural networks. We further show the existence of core-stable predictors
in more general settings using Kakutani's fixed point theorem. Finally, we
empirically validate our analysis on two real-world datasets, and we show that
CoreFed achieves higher core-stability fairness than FedAvg while having
similar accuracy.
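The core-stability condition above can be checked directly for small $N$ by enumerating coalitions: a subset $S$ blocks the shared predictor when scaling its members' coalition utilities by $\frac{|S|}{N}$ still dominates the utilities they receive from the shared model. The sketch below is illustrative only: the function names and the oracle mapping each coalition to its members' utilities under the coalition's own best predictor are assumptions, not the paper's interface.

```python
from itertools import combinations

def blocks(shared_util, coalition_util, frac, eps=1e-9):
    """A coalition blocks if scaling its members' utilities by |S|/N still
    weakly dominates their utilities under the shared predictor, with a
    strict improvement for at least one member."""
    scaled = [frac * u for u in coalition_util]
    weakly_better = all(s >= u - eps for s, u in zip(scaled, shared_util))
    strictly_better = any(s > u + eps for s, u in zip(scaled, shared_util))
    return weakly_better and strictly_better

def is_core_stable(shared_utils, coalition_model_utils):
    """shared_utils[i]: agent i's utility under the shared predictor.
    coalition_model_utils: maps a frozenset S to the tuple of utilities its
    members obtain from a predictor trained on S's data alone (a hypothetical
    oracle; in practice these would be estimated by training per coalition)."""
    n = len(shared_utils)
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            coalition_util = coalition_model_utils[frozenset(S)]
            shared_util = [shared_utils[i] for i in S]
            if blocks(shared_util, coalition_util, size / n):
                return False  # some coalition profitably deviates
    return True
```

Enumerating all $2^N$ coalitions is only feasible for very small $N$; the point of the sketch is the blocking inequality, not an efficient protocol, which is what CoreFed provides.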
Related papers
- Fair CoVariance Neural Networks [34.68621550644667]
We propose Fair coVariance Neural Networks (FVNNs), which perform graph convolutions on the covariance matrix for both fair and accurate predictions.
We prove that FVNNs are intrinsically fairer than analogous PCA approaches thanks to their stability in low sample regimes.
arXiv Detail & Related papers (2024-09-13T06:24:18Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Understanding Fairness Surrogate Functions in Algorithmic Fairness [21.555040357521907]
We show that there is a surrogate-fairness gap between the fairness definition and the fairness surrogate function.
We elaborate a novel and general algorithm called Balanced Surrogate, which iteratively reduces the gap to mitigate unfairness.
arXiv Detail & Related papers (2023-10-17T12:40:53Z)
- RobustFair: Adversarial Evaluation through Fairness Confusion Directed Gradient Search [8.278129731168127]
Deep neural networks (DNNs) often face challenges due to their vulnerability to various adversarial perturbations.
This paper introduces a novel approach, RobustFair, to evaluate the accurate fairness of DNNs when subjected to false or biased perturbations.
arXiv Detail & Related papers (2023-05-18T12:07:29Z)
- Fairness through Aleatoric Uncertainty [18.95295731419523]
We introduce the idea of leveraging aleatoric uncertainty (e.g., data ambiguity) to improve the fairness-utility trade-off.
Our central hypothesis is that aleatoric uncertainty is a key factor for algorithmic fairness.
We then propose a principled model to improve fairness when aleatoric uncertainty is high and improve utility elsewhere.
arXiv Detail & Related papers (2023-04-07T13:50:57Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR)
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models to the cross devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency and competing communication efficiency.
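The summary describes FedSkip's mechanism as periodically skipping federated averaging and scattering local models across clients. A minimal sketch of such a round schedule, assuming flat parameter vectors and a fixed skip period (both assumptions; the paper's actual interface may differ):

```python
import random

def fedavg(models):
    """Element-wise average of flat parameter vectors (standard FedAvg)."""
    n = len(models)
    return [sum(col) / n for col in zip(*models)]

def fedskip_round(local_models, round_idx, skip_period=3):
    """One communication round of a FedSkip-style schedule (a sketch based
    only on the summary above; the skip period and scattering rule are
    illustrative assumptions)."""
    if round_idx % skip_period == 0:
        # "Skip" round: no averaging; scatter the local models by sending
        # each client another client's model.
        scattered = local_models[:]
        random.shuffle(scattered)
        return scattered
    # Ordinary round: every client receives the averaged model.
    avg = fedavg(local_models)
    return [avg[:] for _ in local_models]
```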
arXiv Detail & Related papers (2022-12-14T13:57:01Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting accuracy as the fraction of unlabeled examples.
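The ATC recipe as summarized is concrete enough to sketch: fit a confidence threshold on labeled source data so that the fraction of examples above it matches source accuracy, then predict target accuracy as the fraction of unlabeled target examples above that threshold. The function names and the quantile-based threshold construction below are assumptions, not the paper's exact method.

```python
def atc_threshold(source_conf, source_correct):
    """Learn a threshold t on labeled source data so that the fraction of
    source examples with confidence above t matches the source accuracy."""
    acc = sum(source_correct) / len(source_correct)
    ranked = sorted(source_conf)
    k = int(round((1 - acc) * len(ranked)))  # examples expected to fall below t
    if k == 0:
        return ranked[0] - 1e-12   # everything counts as above t
    if k == len(ranked):
        return ranked[-1]          # nothing counts as above t
    return 0.5 * (ranked[k - 1] + ranked[k])  # midpoint between the two sides

def atc_predict_accuracy(target_conf, t):
    """Predicted target accuracy: fraction of unlabeled target examples
    whose confidence exceeds the learned threshold."""
    return sum(c > t for c in target_conf) / len(target_conf)
```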
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective [26.704446184314506]
We study the problem of measuring the fairness of a machine learning model under noisy information.
We present a theoretical analysis that aims to characterize weaker conditions under which accurate fairness evaluation is possible.
arXiv Detail & Related papers (2021-05-20T18:36:28Z)
- Learning Strategies in Decentralized Matching Markets under Uncertain Preferences [91.3755431537592]
We study the problem of decision-making in the setting of a scarcity of shared resources when the preferences of agents are unknown a priori.
Our approach is based on the representation of preferences in a reproducing kernel Hilbert space.
We derive optimal strategies that maximize agents' expected payoffs.
arXiv Detail & Related papers (2020-10-29T03:08:22Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.