Federated Residual Learning
- URL: http://arxiv.org/abs/2003.12880v1
- Date: Sat, 28 Mar 2020 19:55:24 GMT
- Title: Federated Residual Learning
- Authors: Alekh Agarwal, John Langford, Chen-Yu Wei
- Abstract summary: We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
- Score: 53.77128418049985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study a new form of federated learning where the clients train
personalized local models and make predictions jointly with the server-side
shared model. Using this new federated learning framework, the complexity of
the central shared model can be minimized while still gaining all the
performance benefits that joint training provides. Our framework is robust to
data heterogeneity, addressing the slow convergence problem traditional
federated learning methods face when the data is non-i.i.d. across clients. We
test the theory empirically and find substantial performance gains over
baselines.
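The core mechanism described in the abstract lends itself to a short sketch: each client's prediction is the sum of a shared server-side model's output and a personalized local model fitted to the residual the shared model leaves unexplained. The code below is illustrative only, with hypothetical class names and simple linear models; it is not the paper's exact formulation.

```python
import numpy as np

class SharedServerModel:
    """Low-complexity global model trained jointly across all clients."""
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def predict(self, x):
        return x @ self.w

class LocalResidualModel:
    """Personalized per-client model fit to the shared model's residuals."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return x @ self.w

    def update(self, x, residual):
        # One SGD step on the squared error against the residual target.
        self.w += self.lr * (residual - x @ self.w) * x

def joint_predict(server, local, x):
    # Predictions are made jointly: shared component plus local residual.
    return server.predict(x) + local.predict(x)

def client_update(server, local, x, y):
    # Client-side training: fit the local model to what the server misses.
    local.update(x, residual=y - server.predict(x))
```

Because the local model only has to capture client-specific residual structure, the central shared model can stay small, which is the complexity benefit the abstract highlights.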
Related papers
- FedSPD: A Soft-clustering Approach for Personalized Decentralized Federated Learning [18.38030098837294]
Federated learning is a framework for distributed clients to collaboratively train a machine learning model using local data.
We propose FedSPD, an efficient personalized federated learning algorithm for the decentralized setting.
We show that FedSPD learns accurate models even in low-connectivity networks.
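One plausible reading of the soft-clustering idea: each client maintains soft membership weights over K cluster models and uses their mixture as its personalized model. A minimal sketch under that assumption follows; the function names are hypothetical and FedSPD's actual algorithm differs in its details.

```python
import numpy as np

def soft_assignments(losses, temperature=1.0):
    """Map a client's per-cluster losses to soft membership weights."""
    scores = -np.asarray(losses, dtype=float) / temperature
    scores -= scores.max()                     # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

def personalized_params(cluster_params, weights):
    """Personalized model = soft mixture of the K cluster models."""
    return sum(w * p for w, p in zip(weights, cluster_params))

# Example: a client whose data fits cluster 0 best leans toward that model.
weights = soft_assignments([0.2, 1.5, 3.0])
model = personalized_params([np.ones(4), np.zeros(4), -np.ones(4)], weights)
```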
arXiv Detail & Related papers (2024-10-24T15:48:34Z)
- Federated Learning based on Pruning and Recovery [0.0]
This framework integrates asynchronous learning algorithms and pruning techniques.
It addresses the inefficiencies of traditional federated learning algorithms in scenarios involving heterogeneous devices.
It also tackles the staleness issue and inadequate training of certain clients in asynchronous algorithms.
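As a rough illustration of the pruning side, a client might sparsify its update to the top-k magnitudes before upload, with the mask retained so dropped weights can later be recovered. The sketch below is generic magnitude pruning, not the paper's specific pruning-and-recovery scheme.

```python
import numpy as np

def prune_update(update, keep_ratio=0.1):
    """Keep only the top-k magnitude entries of a flat update vector;
    the returned mask could support later recovery of dropped weights."""
    k = max(1, int(keep_ratio * update.size))
    threshold = np.partition(np.abs(update), -k)[-k]
    mask = np.abs(update) >= threshold
    return update * mask, mask
```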
arXiv Detail & Related papers (2024-03-16T14:35:03Z)
- How to Collaborate: Towards Maximizing the Generalization Performance in Cross-Silo Federated Learning [12.86056968708516]
Federated learning (FL) has attracted vivid attention as a privacy-preserving distributed learning framework.
In this work, we focus on cross-silo FL, where clients become the model owners after FL training.
We show that the performance of a client can be improved only by collaborating with other clients that have more training data.
arXiv Detail & Related papers (2024-01-24T05:41:34Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data heterogeneity issue.
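A generic way to picture trajectory-style regularization is a local step whose gradient includes a penalty pulling the client iterate toward a reference point on an optimization trajectory. The sketch below is a simplification under that assumption; FedPTR's actual projection step and auxiliary dataset are omitted, and the names are hypothetical.

```python
import numpy as np

def local_step(w, grad_fn, w_traj, lr=0.01, mu=0.1):
    """One local SGD step with a proximal-style penalty pulling the client
    iterate toward a reference trajectory point w_traj, damping the client
    drift that non-IID data induces."""
    g = grad_fn(w) + mu * (w - w_traj)
    return w - lr * g
```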
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
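For intuition, a per-client AMSGrad step, the optimizer family FedLALR builds on, looks like the sketch below. The scheduling here is simplified, so treat the class and hyperparameters as assumptions rather than the paper's auto-tuning rule.

```python
import numpy as np

class ClientAMSGrad:
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.99, eps=1e-8):
        self.m = np.zeros(dim)      # first-moment estimate
        self.v = np.zeros(dim)      # second-moment estimate
        self.v_hat = np.zeros(dim)  # running max of v (the AMSGrad fix)
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad**2
        self.v_hat = np.maximum(self.v_hat, self.v)
        # Each client's effective learning rate adapts to its own gradients.
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```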
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyze a novel aggregation framework that allows for formalizing and tackling computational heterogeneity across clients.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
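As a rough picture of federated adversarial training with a slack term, one local step could craft an FGSM example and mix clean and adversarial losses. SFAT's actual slack mechanism (a client-level reweighting of adversarial losses) is more specific than this, so the sketch below is only indicative and the names are illustrative.

```python
import torch
import torch.nn.functional as F

def local_adv_step(model, x, y, optimizer, eps=8 / 255, slack=0.5):
    # Craft a one-step FGSM adversarial example from the clean batch.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x + eps * x_adv.grad.sign()).clamp(0, 1).detach()

    # Train on a slack-weighted mix of clean and adversarial losses.
    optimizer.zero_grad()
    mixed = (1 - slack) * F.cross_entropy(model(x), y) \
            + slack * F.cross_entropy(model(x_adv), y)
    mixed.backward()
    optimizer.step()
    return mixed.item()
```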
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
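The communication idea can be simulated in a few lines: simultaneously transmitted client updates superpose in the analog channel, so the server receives their noisy sum in a single use of the medium instead of one transmission per client. The additive Gaussian channel model below is an assumption for illustration.

```python
import numpy as np

def over_the_air_average(client_updates, noise_std=0.01, rng=None):
    """Server-side receive: simultaneous analog transmissions superpose,
    so aggregation costs one channel use regardless of client count."""
    if rng is None:
        rng = np.random.default_rng()
    superposed = np.sum(client_updates, axis=0)   # channel adds the signals
    received = superposed + rng.normal(0.0, noise_std, superposed.shape)
    return received / len(client_updates)         # noisy global average
```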
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning [22.310090483499035]
Federated learning (FL) enables edge devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET in which small models are trained on clients, and used to train a larger model at the server.
arXiv Detail & Related papers (2022-04-27T05:18:32Z)
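A simplified ensemble-distillation loop in the spirit of Fed-ET: small client models produce soft labels on a shared public dataset, and a larger server model is trained to match their average. Fed-ET's weighted consensus and diversity regularization are omitted here, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def distill_to_server(server_model, client_models, public_loader, optimizer):
    """Train a large server model on the averaged soft labels of an
    ensemble of small client models over a shared public dataset."""
    server_model.train()
    for x in public_loader:
        with torch.no_grad():
            # Ensemble soft labels from the small client models.
            probs = torch.stack(
                [F.softmax(m(x), dim=-1) for m in client_models])
            targets = probs.mean(dim=0)
        optimizer.zero_grad()
        loss = F.kl_div(F.log_softmax(server_model(x), dim=-1),
                        targets, reduction="batchmean")
        loss.backward()
        optimizer.step()
```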
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.