Personalized Cross-Silo Federated Learning on Non-IID Data
- URL: http://arxiv.org/abs/2007.03797v5
- Date: Tue, 14 Dec 2021 00:08:08 GMT
- Title: Personalized Cross-Silo Federated Learning on Non-IID Data
- Authors: Yutao Huang, Lingyang Chu, Zirui Zhou, Lanjun Wang, Jiangchuan Liu,
Jian Pei, Yong Zhang
- Abstract summary: Non-IID data present a tough challenge for federated learning.
We propose a novel idea of pairwise collaborations between clients with similar data.
- Score: 62.68467223450439
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-IID data present a tough challenge for federated learning. In this paper,
we explore a novel idea of facilitating pairwise collaborations between clients
with similar data. We propose FedAMP, a new method employing federated
attentive message passing to facilitate similar clients to collaborate more. We
establish the convergence of FedAMP for both convex and non-convex models, and
propose a heuristic method to further improve the performance of FedAMP when
clients adopt deep neural networks as personalized models. Our extensive
experiments on benchmark data sets demonstrate the superior performance of the
proposed methods.
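As a rough illustration of the attentive message passing idea, the sketch below computes one personalized aggregate per client, with collaboration weights that grow with model similarity. The cosine kernel, the temperature sigma, and the fixed self-weight are illustrative assumptions, not the paper's exact attention function.

```python
# Sketch of attentive message passing: each client k receives a
# personalized aggregate u_k that weights similar clients more.
# The cosine kernel, sigma, and self_weight are assumptions.
import numpy as np

def attentive_aggregate(client_models, sigma=1.0, self_weight=0.5):
    """client_models: list of flattened model weights (1-D arrays)."""
    W = np.stack(client_models)                       # (n, d)
    n = W.shape[0]
    personalized = []
    for k in range(n):
        sims = np.zeros(n)
        for i in range(n):
            if i == k:
                continue                              # self handled below
            cos = W[k] @ W[i] / (np.linalg.norm(W[k]) *
                                 np.linalg.norm(W[i]) + 1e-12)
            sims[i] = np.exp(sigma * cos)
        xi = (1.0 - self_weight) * sims / (sims.sum() + 1e-12)
        u_k = self_weight * W[k] + (xi[:, None] * W).sum(axis=0)
        personalized.append(u_k)
    return personalized
```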
Related papers
- Federated Learning Can Find Friends That Are Advantageous [14.993730469216546]
In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.
We introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective.
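A minimal sketch of the adaptive-weighting idea, assuming alignment with a reference (validation) gradient as the scoring rule; the paper itself may define the weights differently.

```python
# Sketch: weight clients by how well their updates align with a
# reference objective. The validation-gradient score is an assumption.
import numpy as np

def adaptive_weights(client_updates, reference_grad):
    scores = np.array([
        u @ reference_grad /
        (np.linalg.norm(u) * np.linalg.norm(reference_grad) + 1e-12)
        for u in client_updates
    ])
    scores = np.clip(scores, 0.0, None)   # drop conflicting clients
    return scores / (scores.sum() + 1e-12)
```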
arXiv Detail & Related papers (2024-02-07T17:46:37Z)
- Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
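A hedged sketch of attention-based client selection, assuming clients are represented by data-distribution embeddings (e.g., label histograms) scored with dot-product attention; the embedding and the top-k rule are illustrative choices, not necessarily FedACS's.

```python
# Sketch: score clients by dot-product attention over distribution
# embeddings and keep the top-k. Embedding choice is an assumption.
import numpy as np

def select_clients(query_embedding, client_embeddings, k=3, temp=1.0):
    E = np.stack(client_embeddings)           # (n, d)
    scores = E @ query_embedding / temp
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    top_k = np.argsort(probs)[-k:][::-1]      # most similar clients first
    return top_k, probs
```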
arXiv Detail & Related papers (2023-12-23T03:31:46Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
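A minimal sketch of a client-local AMSGrad step with a client-specific learning-rate schedule; the eta0 / sqrt(t) schedule shown here is an illustrative assumption, not FedLALR's exact rule.

```python
# Sketch of a client-local AMSGrad step with a local learning-rate
# schedule (eta0 / sqrt(t) is an illustrative assumption).
import numpy as np

class LocalAMSGrad:
    def __init__(self, dim, eta0=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.v_hat = np.zeros(dim)
        self.eta0, self.b1, self.b2, self.eps = eta0, beta1, beta2, eps
        self.t = 0

    def step(self, w, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)   # AMSGrad max trick
        eta_t = self.eta0 / np.sqrt(self.t)           # client-local schedule
        return w - eta_t * self.m / (np.sqrt(self.v_hat) + self.eps)
```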
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- UPFL: Unsupervised Personalized Federated Learning towards New Clients [13.98952154869707]
In this paper, we address a relatively unexplored problem in federated learning.
When a federated model has been trained and deployed, and an unlabeled new client joins, providing a personalized model for the new client becomes a highly challenging task.
We extend the adaptive risk minimization technique into the unsupervised personalized federated learning setting and propose our method, FedTTA.
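A hedged sketch of adapting the deployed model to an unlabeled new client via entropy minimization, a common test-time-adaptation surrogate; FedTTA's actual objective may differ.

```python
# Sketch: adapt the deployed model to a new client's unlabeled data by
# minimizing prediction entropy (a common TTA surrogate; assumption).
import torch
import torch.nn.functional as F

def adapt_to_new_client(model, x_unlabeled, steps=10, lr=1e-4):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        probs = F.softmax(model(x_unlabeled), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    return model
```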
arXiv Detail & Related papers (2023-07-29T14:30:11Z)
- FedDRL: Deep Reinforcement Learning-based Adaptive Aggregation for Non-IID Data in Federated Learning [4.02923738318937]
Uneven distribution of local data across edge devices (clients) slows model training and reduces accuracy in federated learning.
This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew.
We propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor.
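A hedged stand-in for the DRL agent: a simple categorical REINFORCE policy that samples one client to emphasize each round. The action space and the reward (e.g., validation-accuracy gain) are assumptions, not FedDRL's design.

```python
# Stand-in for the DRL agent: a categorical REINFORCE policy that
# samples one client to emphasize each round (action space and reward
# are assumptions, not FedDRL's design).
import numpy as np

class ImpactFactorPolicy:
    def __init__(self, n_clients, lr=0.1):
        self.logits = np.zeros(n_clients)
        self.lr = lr

    def probs(self):
        e = np.exp(self.logits - self.logits.max())
        return e / e.sum()

    def sample(self, rng):
        return rng.choice(len(self.logits), p=self.probs())

    def update(self, action, reward, baseline=0.0):
        # REINFORCE: grad of log pi(action) w.r.t. logits = onehot - probs
        g = -self.probs()
        g[action] += 1.0
        self.logits += self.lr * (reward - baseline) * g
```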
arXiv Detail & Related papers (2022-08-04T04:24:16Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
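A hedged sketch of the local matching step, instantiated here as gradient matching between real and synthetic batches; FedDM matches the local loss landscape, so this is one plausible surrogate rather than the paper's exact objective.

```python
# Sketch of local matching: learn a small synthetic batch whose
# gradients on the shared model match those of the real local batch.
# Gradient matching is a surrogate for FedDM's loss-landscape matching.
import torch

def match_synthetic(model, real_x, real_y, syn_x, syn_y, steps=100, lr=0.1):
    syn_x = syn_x.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([syn_x], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        g_real = torch.autograd.grad(loss_fn(model(real_x), real_y),
                                     model.parameters())
        g_syn = torch.autograd.grad(loss_fn(model(syn_x), syn_y),
                                    model.parameters(), create_graph=True)
        match = sum(((a - b) ** 2).sum() for a, b in zip(g_real, g_syn))
        opt.zero_grad()
        match.backward()
        opt.step()
    return syn_x.detach()
```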
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Ensemble Federated Adversarial Training with Non-IID data [1.5878082907673585]
Adversarial samples can mislead client models into serving malicious ends.
We introduce a novel Ensemble Federated Adversarial Training method, termed EFAT.
Our proposed method achieves promising results compared with naively combining federated learning and adversarial training.
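A minimal sketch of local adversarial training with FGSM perturbations; the attack is an illustrative choice, and EFAT's ensemble scheme across heterogeneous clients is not shown.

```python
# Sketch of local adversarial training with FGSM perturbations
# (illustrative attack; EFAT's ensemble scheme is not shown).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def local_adv_step(model, opt, x, y, eps=0.03):
    x_adv = fgsm(model, x, y, eps)
    loss = F.cross_entropy(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```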
arXiv Detail & Related papers (2021-10-26T03:55:20Z)
- WAFFLE: Weighted Averaging for Personalized Federated Learning [38.241216472571786]
We introduce WAFFLE, a personalized collaborative machine learning algorithm based on SCAFFOLD.
WAFFLE uses the Euclidean distance between clients' updates to weigh their individual contributions.
Our experiments demonstrate the effectiveness of WAFFLE compared with other methods.
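A minimal sketch of the weighting rule, assuming an exponential kernel over the Euclidean distances between updates; the kernel shape and tau are assumptions.

```python
# Sketch of WAFFLE-style weighting: contribution of client i to client
# k's average shrinks with the distance between their updates. The
# exponential kernel and tau are assumptions.
import numpy as np

def waffle_weights(updates, k, tau=1.0):
    d = np.array([np.linalg.norm(updates[k] - u) for u in updates])
    w = np.exp(-d / tau)          # closer updates weigh more
    return w / w.sum()
```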
arXiv Detail & Related papers (2021-10-13T18:40:54Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
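A hedged sketch of generating synthetic samples for an under-represented class without touching client data, via model inversion on random noise; this is one plausible instantiation of zero-shot augmentation, not necessarily the paper's.

```python
# Sketch: synthesize samples for an under-represented class by
# optimizing noise until the model predicts that class (inversion-style
# zero-shot augmentation; one plausible instantiation).
import torch
import torch.nn.functional as F

def zero_shot_samples(model, target_class, shape, n=16, steps=200, lr=0.1):
    x = torch.randn(n, *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    y = torch.full((n,), target_class, dtype=torch.long)
    for _ in range(steps):
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()
```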
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
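A minimal sketch of the residual idea with linear models: the shared model provides a base prediction and each client fits a small local model on what the shared model leaves over. The ridge fit is an illustrative choice.

```python
# Sketch of residual prediction with linear models: shared base plus a
# client-local correction fitted on the shared model's residuals.
# The ridge fit is an illustrative choice.
import numpy as np

def predict(X, w_shared, w_local):
    return X @ w_shared + X @ w_local           # joint prediction

def fit_local_residual(X, y, w_shared, lam=1e-2):
    r = y - X @ w_shared                        # what the shared model misses
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ r)
```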
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.