Fair Federated Medical Image Segmentation via Client Contribution
Estimation
- URL: http://arxiv.org/abs/2303.16520v1
- Date: Wed, 29 Mar 2023 08:21:54 GMT
- Title: Fair Federated Medical Image Segmentation via Client Contribution
Estimation
- Authors: Meirui Jiang, Holger R Roth, Wenqi Li, Dong Yang, Can Zhao, Vishwesh
Nath, Daguang Xu, Qi Dou, Ziyue Xu
- Abstract summary: How to ensure fairness is an important topic in federated learning (FL).
Recent studies have investigated how to reward clients based on their contribution and how to achieve uniformity of performance across clients.
We propose a novel method to optimize both types of fairness simultaneously.
- Score: 24.148002258279632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How to ensure fairness is an important topic in federated learning (FL).
Recent studies have investigated how to reward clients based on their
contribution (collaboration fairness), and how to achieve uniformity of
performance across clients (performance fairness). Despite achieving progress
on either one, we argue that it is critical to consider them together, in order
to engage and motivate more diverse clients joining FL to derive a high-quality
global model. In this work, we propose a novel method to optimize both types of
fairness simultaneously. Specifically, we propose to estimate client
contribution in gradient and data space. In gradient space, we monitor the
gradient direction differences of each client with respect to others. And in
data space, we measure the prediction error on client data using an auxiliary
model. Based on this contribution estimation, we propose a FL method, federated
training via contribution estimation (FedCE), i.e., using estimation as global
model aggregation weights. We have theoretically analyzed our method and
empirically evaluated it on two real-world medical datasets. The effectiveness
of our approach has been validated with significant performance improvements,
better collaboration fairness, better performance fairness, and comprehensive
analytical studies.
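The core aggregation idea above (client contribution estimates used as global aggregation weights) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the gradient-agreement score, the use of auxiliary-model prediction error as the data-space signal, and the equal-weight combination of the two signals are all assumptions made for the sketch.

```python
import numpy as np

def fedce_style_weights(client_grads, client_errors, eps=1e-8):
    """Toy contribution estimate combining a gradient-space and a
    data-space signal, then normalizing into aggregation weights.

    client_grads  : list of 1-D arrays, one flattened gradient per client
    client_errors : list of floats, auxiliary-model prediction error on
                    each client's data (higher error -> more informative)
    """
    grads = [g / (np.linalg.norm(g) + eps) for g in client_grads]
    mean_dir = np.mean(grads, axis=0)

    # Gradient space: agreement of each client's direction with the
    # consensus direction (negative agreement clipped to zero).
    grad_score = np.clip([float(g @ mean_dir) for g in grads], 0.0, None)

    # Data space: normalized auxiliary-model error per client.
    err = np.asarray(client_errors, dtype=float)
    data_score = err / (err.sum() + eps)

    # Combine the two signals (equal weighting is an assumption here)
    # and normalize into aggregation weights that sum to one.
    contrib = 0.5 * grad_score / (grad_score.sum() + eps) + 0.5 * data_score
    return contrib / contrib.sum()

def aggregate(client_models, weights):
    """Weighted average of client model parameter vectors."""
    return np.average(np.stack(client_models), axis=0, weights=weights)
```

In this sketch, a client whose update direction agrees with the group consensus and whose data the auxiliary model handles poorly (i.e., it adds something new) receives a larger share of the global aggregation weight.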
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z)
- Federated Learning Can Find Friends That Are Advantageous [14.993730469216546]
In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.
We introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective.
arXiv Detail & Related papers (2024-02-07T17:46:37Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- A Coalition Formation Game Approach for Personalized Federated Learning [12.784305390534888]
We propose a novel personalized algorithm, pFedSV, which can (1) identify each client's optimal collaborator coalition and (2) perform personalized model aggregation based on Shapley values (SV).
The results show that pFedSV can achieve superior personalized accuracy for each client, compared to the state-of-the-art benchmarks.
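The Shapley-value machinery that pFedSV builds on can be illustrated with an exact computation over client coalitions. The value function below is a placeholder assumption: any function mapping a coalition of clients to a score (e.g., validation accuracy of a model trained on that coalition) works, and exact enumeration is only feasible for small client counts.

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, value):
    """Exact Shapley value of each client under a coalition value function.

    clients : list of hashable client ids
    value   : function mapping a frozenset of client ids to a float
    """
    n = len(clients)
    phi = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Weight of a size-k coalition in the Shapley formula.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of client c to coalition s.
                phi[c] += w * (value(s | {c}) - value(s))
    return phi
```

For an additive value function (each client contributes a fixed amount regardless of coalition), the Shapley value of a client recovers exactly its individual worth, which is a handy sanity check.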
arXiv Detail & Related papers (2022-02-05T07:16:44Z)
- WAFFLE: Weighted Averaging for Personalized Federated Learning [38.241216472571786]
We introduce WAFFLE, a personalized collaborative machine learning algorithm based on SCAFFOLD.
WAFFLE uses the Euclidean distance between clients' updates to weigh their individual contributions.
Our experiments demonstrate the effectiveness of WAFFLE compared with other methods.
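A minimal sketch of distance-based weighting in the spirit of WAFFLE follows. The inverse-distance normalization is an assumption for illustration; the actual algorithm builds on SCAFFOLD and differs in detail.

```python
import numpy as np

def distance_based_weights(my_update, other_updates, eps=1e-8):
    """Weigh peers by Euclidean proximity of their updates to ours:
    closer updates get larger (normalized) weights.

    my_update     : 1-D array, this client's flattened model update
    other_updates : list of 1-D arrays, the other clients' updates
    """
    d = np.array([np.linalg.norm(my_update - u) for u in other_updates])
    w = 1.0 / (d + eps)   # closer update -> larger weight
    return w / w.sum()
```

The intuition is personalization: clients whose updates lie near mine likely have similar data distributions, so their contributions are up-weighted in my personalized average.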
arXiv Detail & Related papers (2021-10-13T18:40:54Z)
- Fair and Consistent Federated Learning [48.19977689926562]
Federated learning (FL) has gained growing interest for its capability of learning collectively from distributed data sources.
We propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients.
arXiv Detail & Related papers (2021-08-19T01:56:08Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.