Server Averaging for Federated Learning
- URL: http://arxiv.org/abs/2103.11619v1
- Date: Mon, 22 Mar 2021 07:07:00 GMT
- Title: Server Averaging for Federated Learning
- Authors: George Pu, Yanlin Zhou, Dapeng Wu, Xiaolin Li
- Abstract summary: Federated learning allows distributed devices to collectively train a model without sharing or disclosing the local dataset with a central server.
The improved privacy of federated learning also introduces challenges including higher computation and communication costs.
We propose the server averaging algorithm to accelerate convergence.
- Score: 14.846231685735592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning allows distributed devices to collectively train a model
without sharing or disclosing the local dataset with a central server. The
global model is optimized by training and averaging the model parameters of all
local participants. However, the improved privacy of federated learning also
introduces challenges including higher computation and communication costs. In
particular, federated learning converges slower than centralized training. We
propose the server averaging algorithm to accelerate convergence. Server
averaging constructs the shared global model by periodically averaging a set of
previous global models. Our experiments indicate that server averaging not only
reaches a target accuracy faster than federated averaging (FedAvg), but also
reduces client-side computation costs through epoch decay.
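The abstract pins down the mechanism well enough for a rough sketch. Below is a minimal illustration in Python, not the authors' implementation: models are plain dicts of numpy arrays, and the aggregation weights, averaging window, period, and the epoch-decay schedule (e0, decay) are all illustrative assumptions.

```python
import numpy as np

def average_states(states, weights=None):
    """Average a list of models (dicts of numpy arrays), optionally weighted."""
    if weights is None:
        weights = [1.0 / len(states)] * len(states)
    return {k: sum(w * s[k] for w, s in zip(weights, states))
            for k in states[0]}

def server_averaging_round(history, client_states, client_sizes,
                           round_idx, period=5, window=3):
    """One communication round: a FedAvg aggregation of client updates,
    plus the periodic server-averaging step over recent global models."""
    total = sum(client_sizes)
    new_global = average_states(client_states,
                                [n / total for n in client_sizes])
    history.append(new_global)
    # Server averaging: every `period` rounds, rebuild the global model
    # as the plain average of the `window` most recent global models.
    if round_idx % period == 0 and len(history) >= window:
        new_global = average_states(history[-window:])
    return new_global

def local_epochs(round_idx, e0=5, decay=0.9):
    """Epoch decay: shrink each client's local-epoch budget over rounds."""
    return max(1, int(e0 * decay ** round_idx))
```

In this sketch, the decayed epoch count would be sent to each client before its local training phase; the paper's actual schedule and window sizes should be taken from the paper itself.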
Related papers
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- Scheduling and Communication Schemes for Decentralized Federated Learning [0.31410859223862103]
A decentralized federated learning (DFL) model based on the stochastic gradient descent (SGD) algorithm is introduced.
Three scheduling policies for DFL have been proposed for communications between the clients and the parallel servers.
Results show that the proposed scheduling policies affect both the speed of convergence and the final global model.
arXiv Detail & Related papers (2023-11-27T17:35:28Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
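Since the blurb names the optimizer family, a brief sketch may help. The following is a generic client-side AMSGrad step with a client-local learning rate; the fixed `lr` field is a placeholder for FedLALR's actual auto-tuning rule, which is defined in the paper, and all names here are illustrative.

```python
import numpy as np

class ClientAMSGrad:
    """AMSGrad kept per client. In FedLALR the learning rate is adjusted
    by each client; here `lr` is a static placeholder for that rule."""
    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = self.v_hat = None

    def step(self, params, grads):
        if self.m is None:
            self.m = np.zeros_like(params)
            self.v = np.zeros_like(params)
            self.v_hat = np.zeros_like(params)
        self.m = self.beta1 * self.m + (1 - self.beta1) * grads
        self.v = self.beta2 * self.v + (1 - self.beta2) * grads ** 2
        # The max step is what distinguishes AMSGrad from Adam.
        self.v_hat = np.maximum(self.v_hat, self.v)
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```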
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate local real data with global gradient prototypes to form locally balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Over-The-Air Federated Learning under Byzantine Attacks [43.67333971183711]
Federated learning (FL) is a promising solution to enable many AI applications.
FL allows the clients to participate in the training phase, governed by a central server, without sharing their local data.
One of the main challenges of FL is the communication overhead.
We propose a transmission and aggregation framework to reduce the effect of Byzantine attacks.
arXiv Detail & Related papers (2022-05-05T22:09:21Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
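For readers unfamiliar with the building block, here is a generic diagonal Laplace approximation in the same sketch style. The paper's online variant and its client/server aggregation are not reproduced; the empirical-Fisher precision below is a common stand-in, assumed for illustration.

```python
import numpy as np

def diagonal_laplace(map_params, per_sample_grads, prior_prec=1.0):
    """Diagonal Laplace approximation: a Gaussian posterior centered at
    the MAP estimate, with precision = prior precision + an empirical
    Fisher term built from squared per-sample gradients at the MAP."""
    fisher_diag = np.mean(np.stack(per_sample_grads) ** 2, axis=0)
    precision = prior_prec + len(per_sample_grads) * fisher_diag
    return map_params, 1.0 / precision  # posterior mean, diagonal variance
```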
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
- Federated learning with class imbalance reduction [24.044750119251308]
Federated learning (FL) is a technique that enables a large number of edge computing devices to collaboratively train a global learning model.
Due to privacy concerns, the raw data on devices cannot be made available to a centralized server.
In this paper, an estimation scheme is designed to reveal the class distribution without access to the raw data.
arXiv Detail & Related papers (2020-11-23T08:13:43Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
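The blurb suggests each prediction combines a shared server-side model with a personalized local model. One natural reading, assumed here for illustration rather than taken from the paper, is an additive split in which the local model learns a residual correction:

```python
import numpy as np

def joint_predict(shared_model, local_model, x):
    """Joint prediction: the shared global model gives a base score and
    the client's personalized model adds a residual correction on top.
    The additive form is an illustrative assumption."""
    return shared_model(x) + local_model(x)

# Toy linear predictors standing in for the shared and local models.
shared = lambda x: x @ np.array([0.5, -0.2])
local = lambda x: x @ np.array([0.1, 0.3])  # client-specific residual
print(joint_predict(shared, local, np.array([[1.0, 2.0]])))
```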
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
- Coded Federated Learning [5.375775284252717]
Federated learning is a method of training a global model from decentralized data distributed across client devices.
Our results show that coded federated learning (CFL) allows the global model to converge nearly four times faster than an uncoded approach.
arXiv Detail & Related papers (2020-02-21T23:06:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.