FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation
- URL: http://arxiv.org/abs/2212.07224v1
- Date: Wed, 14 Dec 2022 13:57:01 GMT
- Title: FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation
- Authors: Ziqing Fan, Yanfeng Wang, Jiangchao Yao, Lingjuan Lyu, Ya Zhang and Qi Tian
- Abstract summary: We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
- Score: 95.85026305874824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The statistical heterogeneity of the non-independent and identically
distributed (non-IID) data in local clients significantly limits the
performance of federated learning. Previous attempts like FedProx, SCAFFOLD,
MOON, FedNova and FedDyn resort to an optimization perspective, which requires
an auxiliary term or re-weights local updates to calibrate the learning bias or
the objective inconsistency. However, in addition to previous explorations for
improvement in federated averaging, our analysis shows that another critical
bottleneck is the poorer optima of client models in more heterogeneous
conditions. We thus introduce a data-driven approach called FedSkip to improve
the client optima by periodically skipping federated averaging and scattering
local models across devices. We provide a theoretical analysis of the
possible benefit from FedSkip and conduct extensive experiments on a range of
datasets to demonstrate that FedSkip achieves much higher accuracy, better
aggregation efficiency, and competitive communication efficiency. Source code is
available at: https://github.com/MediaBrain-SJTU/FedSkip.
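
To make the mechanism concrete, here is a minimal Python sketch of a FedSkip-style server loop matching the description above: in most rounds the server skips averaging and scatters (randomly permutes) the local models across devices, and only every few rounds performs ordinary federated averaging. The helper names and the `skip_period` parameter are illustrative assumptions, not the authors' implementation.

```python
import copy
import random

def average_params(models):
    """Element-wise average of parameter vectors (plain FedAvg)."""
    return [sum(ws) / len(ws) for ws in zip(*models)]

def local_train(model, client_data):
    """Stand-in for local SGD; a real client would run several epochs here."""
    return model

def fedskip_round(global_model, client_models, clients, rnd, skip_period=4):
    """One communication round of a FedSkip-style server loop (illustrative)."""
    # Each client first updates its current model on its own data.
    client_models = [local_train(m, c) for m, c in zip(client_models, clients)]
    if rnd % skip_period == 0:
        # Ordinary FedAvg round: aggregate all local models and broadcast.
        global_model = average_params(client_models)
        client_models = [copy.deepcopy(global_model) for _ in clients]
    else:
        # Skipped round: scatter the local models across devices instead,
        # so each model continues training on a different local distribution.
        random.shuffle(client_models)
    return global_model, client_models
```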
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
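
For contrast with the aggregation-free idea, the sketch below shows the conventional aggregate-then-adapt step that FedAF departs from: the server forms a data-size-weighted average of the client parameters and broadcasts it for the next round of local adaptation. A minimal sketch; models are flat parameter lists for brevity.

```python
def fedavg_aggregate(client_params, client_sizes):
    """Aggregate-then-adapt baseline: size-weighted parameter averaging."""
    total = sum(client_sizes)
    return [
        sum(w * n for w, n in zip(ws, client_sizes)) / total
        for ws in zip(*client_params)  # iterate coordinate-wise
    ]

# Usage: two clients with 2-parameter models and unequal data sizes.
params = [[1.0, 2.0], [3.0, 4.0]]
sizes = [100, 300]
print(fedavg_aggregate(params, sizes))  # -> [2.5, 3.5]
```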
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z)
- Leveraging Function Space Aggregation for Federated Learning at Scale [20.866482460590973]
We propose a new algorithm, FedFish, that aggregates local approximations to the functions learned by clients.
We evaluate FedFish on realistic, large-scale cross-device benchmarks.
arXiv Detail & Related papers (2023-11-17T02:37:10Z)
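
One common way to aggregate models in function space is Fisher-weighted parameter averaging, where each coordinate counts more for the clients whose predictions are sensitive to it. Treating FedFish this way is an assumption on my part; the sketch below illustrates the general pattern, not the paper's algorithm.

```python
def fisher_weighted_average(client_params, client_fishers):
    """Combine client models coordinate-wise, weighting each parameter by
    its (diagonal) Fisher information. Illustrative sketch, not FedFish."""
    eps = 1e-8  # avoid division by zero where all Fisher terms vanish
    merged = []
    for ws, fs in zip(zip(*client_params), zip(*client_fishers)):
        merged.append(sum(w * f for w, f in zip(ws, fs)) / (sum(fs) + eps))
    return merged

# Usage: two 2-parameter clients; client 0 is confident about parameter 0.
params = [[1.0, 2.0], [3.0, 4.0]]
fishers = [[9.0, 1.0], [1.0, 1.0]]
print(fisher_weighted_average(params, fishers))  # -> [~1.2, ~3.0]
```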
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
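
A client-specific adaptive step can be sketched as a local AMSGrad update plus a per-client learning rate schedule. The schedule shown (`client_scale * base_lr / sqrt(t)`) is a hypothetical placeholder, not the rate rule derived in the paper.

```python
import math

def amsgrad_step(w, grad, state, lr, beta1=0.9, beta2=0.99, eps=1e-8):
    """One AMSGrad update on a parameter vector (runs locally on a client)."""
    m = [beta1 * mi + (1 - beta1) * g for mi, g in zip(state["m"], grad)]
    v = [beta2 * vi + (1 - beta2) * g * g for vi, g in zip(state["v"], grad)]
    # AMSGrad keeps the running max of v, which guarantees non-increasing
    # effective step sizes and underpins the convergence analysis.
    v_hat = [max(vh, vi) for vh, vi in zip(state["v_hat"], v)]
    w = [wi - lr * mi / (math.sqrt(vh) + eps) for wi, mi, vh in zip(w, m, v_hat)]
    state.update(m=m, v=v, v_hat=v_hat)
    return w

def client_lr(base_lr, local_step, client_scale):
    """Hypothetical client-specific schedule: each client rescales a shared
    decaying rate by its own factor (e.g., reflecting its data size)."""
    return client_scale * base_lr / math.sqrt(local_step)
```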
- Momentum Benefits Non-IID Federated Learning Simply and Provably [22.800862422479913]
Federated learning is a powerful paradigm for large-scale machine learning.
FedAvg and SCAFFOLD are two prominent algorithms for addressing the challenges of non-IID data.
This paper explores the utilization of momentum to enhance the performance of FedAvg and SCAFFOLD.
arXiv Detail & Related papers (2023-06-28T18:52:27Z)
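
The simplest place momentum enters FedAvg is on the server: keep a velocity buffer of the averaged client updates across rounds. The sketch below follows the well-known FedAvgM pattern and is not necessarily the exact variant analyzed in this paper.

```python
def server_momentum_update(global_w, client_updates, velocity, lr=1.0, beta=0.9):
    """FedAvg with server-side momentum (FedAvgM-style sketch).

    client_updates: per-client parameter deltas (local_w - global_w).
    velocity: momentum buffer carried across rounds.
    """
    # Average the client deltas exactly as plain FedAvg would.
    avg_update = [sum(d) / len(d) for d in zip(*client_updates)]
    # Fold the averaged update into an exponentially weighted velocity.
    velocity = [beta * v + u for v, u in zip(velocity, avg_update)]
    global_w = [w + lr * v for w, v in zip(global_w, velocity)]
    return global_w, velocity
```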
- FedPDC: Federated Learning for Public Dataset Correction [1.5533842336139065]
In non-IID scenarios, federated learning attains lower classification accuracy than traditional (centralized) machine learning.
A new algorithm, FedPDC, is proposed to optimize how local models are aggregated and the loss function used in local training.
In many benchmark experiments, FedPDC effectively improves the accuracy of the global model under extremely unbalanced data distributions.
arXiv Detail & Related papers (2023-02-24T08:09:23Z)
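
The summary does not spell out the corrected loss, so the sketch below shows one plausible shape for a public-dataset correction: local cross-entropy plus a distillation term that keeps the local model close to the global model's predictions on the shared public data. This is a generic pattern (here in PyTorch), not FedPDC's actual objective.

```python
import torch
import torch.nn.functional as F

def corrected_local_loss(local_model, global_model, private_batch, public_batch,
                         lam=0.5):
    """Generic public-dataset correction (sketch, not FedPDC's exact loss)."""
    # Ordinary supervised loss on the client's private data.
    x, y = private_batch
    ce = F.cross_entropy(local_model(x), y)

    # Distillation term on the shared public data: pull local predictions
    # toward the global model's, counteracting drift on unbalanced clients.
    x_pub, _ = public_batch
    with torch.no_grad():  # the global model only provides soft targets
        teacher = F.softmax(global_model(x_pub), dim=-1)
    student = F.log_softmax(local_model(x_pub), dim=-1)
    kl = F.kl_div(student, teacher, reduction="batchmean")

    return ce + lam * kl
```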
- FedDRL: Deep Reinforcement Learning-based Adaptive Aggregation for Non-IID Data in Federated Learning [4.02923738318937]
Uneven distribution of local data across edge devices (clients) slows model training and reduces accuracy in federated learning.
This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew.
We propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor.
arXiv Detail & Related papers (2022-08-04T04:24:16Z)
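
Setting the reinforcement learning machinery aside, the aggregation step itself reduces to a weighted average whose weights (impact factors) come from a learned policy. In the sketch below the policy scores are simply an input; FedDRL's state, action, and reward design are in the paper.

```python
import math

def softmax(scores):
    mx = max(scores)
    exp = [math.exp(s - mx) for s in scores]  # shift for numerical stability
    z = sum(exp)
    return [e / z for e in exp]

def adaptive_aggregate(client_params, policy_scores):
    """Weighted aggregation with learned impact factors (illustrative).
    policy_scores would come from the RL policy in a FedDRL-style system."""
    weights = softmax(policy_scores)
    return [sum(w * p for w, p in zip(weights, ws)) for ws in zip(*client_params)]

# Usage: the policy favors client 1, so its parameters dominate.
print(adaptive_aggregate([[1.0, 0.0], [3.0, 2.0]], [0.0, 2.0]))
```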
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
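
The two variants differ only in where the synthetic samples are produced, which the sketch below makes explicit. `zero_shot_augment` is a pure placeholder for the paper's generator, and the training callables are passed in so the structure stays self-contained.

```python
def zero_shot_augment(model, target_classes, n=32):
    """Placeholder: synthesize n samples for the under-represented classes
    using only the model, with no access to real data. The actual generator
    is the paper's contribution; this stub only marks where it would run."""
    return []

def fed_zdac_client_round(local_model, local_data, rare_classes, local_train):
    # Fed-ZDAC: each client augments its own under-represented classes
    # before local training, evening out per-class coverage.
    synthetic = zero_shot_augment(local_model, rare_classes)
    return local_train(local_model, local_data + synthetic)

def fed_zdas_server_round(client_models, rare_classes, aggregate, fine_tune):
    # Fed-ZDAS: the server aggregates first, then generates synthetic data
    # itself and fine-tunes the aggregated model on it.
    aggregated = aggregate(client_models)
    synthetic = zero_shot_augment(aggregated, rare_classes)
    return fine_tune(aggregated, synthetic)
```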
- FedGroup: Efficient Clustered Federated Learning via Decomposed Data-Driven Measure [18.083188787905083]
We propose a novel clustered federated learning (CFL) framework, FedGroup.
We show that FedGroup can significantly improve absolute test accuracy by +14.1% on FEMNIST compared to FedAvg.
We also evaluate FedGroup and FedGrouProx (combined with FedProx) on several open datasets.
arXiv Detail & Related papers (2020-10-14T08:15:34Z)
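
Clustered FL groups clients whose updates point in similar directions and trains one model per group. The greedy cosine-similarity grouping below is a simplified stand-in for FedGroup's decomposed data-driven measure, shown only to illustrate the clustering step.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def greedy_cluster(updates, threshold=0.8):
    """Group clients whose update directions are similar (sketch; FedGroup
    uses a decomposed data-driven measure rather than raw cosine)."""
    groups = []  # each group: list of client indices
    for i, u in enumerate(updates):
        for g in groups:
            if cosine(u, updates[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Usage: clients 0 and 2 drift the same way; client 1 differs.
print(greedy_cluster([[1.0, 0.1], [-1.0, 0.2], [0.9, 0.0]]))  # [[0, 2], [1]]
```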