Federated Learning of a Mixture of Global and Local Models
- URL: http://arxiv.org/abs/2002.05516v3
- Date: Fri, 12 Feb 2021 06:30:47 GMT
- Title: Federated Learning of a Mixture of Global and Local Models
- Authors: Filip Hanzely and Peter Richtárik
- Abstract summary: We propose a new optimization formulation for training federated learning models.
In particular, we are the first to i) show that local steps can improve communication for problems with heterogeneous data, and ii) point out that personalization yields reduced communication complexity.
- Score: 10.279748604797911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new optimization formulation for training federated learning
models. The standard formulation has the form of an empirical risk minimization
problem constructed to find a single global model trained from the private data
stored across all participating devices. In contrast, our formulation seeks an
explicit trade-off between this traditional global model and the local models,
which can be learned by each device from its own private data without any
communication. Further, we develop several efficient variants of SGD (with and
without partial participation and with and without variance reduction) for
solving the new formulation and prove communication complexity guarantees.
Notably, our methods are similar but not identical to federated averaging /
local SGD, thus shedding some light on the role of local steps in federated
learning. In particular, we are the first to i) show that local steps can
improve communication for problems with heterogeneous data, and ii) point out
that personalization yields reduced communication complexity.
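For intuition, the trade-off above can be written as the penalized objective min_{x_1,...,x_n} (1/n) sum_i f_i(x_i) + (lambda/(2n)) sum_i ||x_i - xbar||^2, where xbar is the average of the local models: lambda = 0 yields purely local training, while lambda -> infinity forces all x_i to agree on a single global model. Below is a minimal sketch of this objective together with a randomized SGD-style solver in this spirit, where local gradient steps need no communication and an occasional averaging step does; the function names, step size, and switching probability are illustrative assumptions rather than the paper's exact method or tuning.

```python
import numpy as np

def mixture_objective(fs, x, lam):
    """(1/n) * sum_i f_i(x_i) + (lam / (2n)) * sum_i ||x_i - xbar||^2,
    where x has shape (n, d): one model per device."""
    n = len(fs)
    xbar = x.mean(axis=0)
    risk = sum(f(xi) for f, xi in zip(fs, x)) / n
    penalty = (lam / (2 * n)) * np.sum((x - xbar) ** 2)
    return risk + penalty

def sgd_mixture(grad_fs, x0, lam=1.0, p=0.1, alpha=0.05, steps=2000, seed=0):
    """Randomized solver sketch: with probability 1-p every device takes a
    local gradient step (no communication); with probability p all models
    are pulled toward their average (one communication round)."""
    rng = np.random.default_rng(seed)
    n = len(grad_fs)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        if rng.random() > p:
            # local steps on each device's own empirical risk f_i
            g = np.stack([gf(xi) for gf, xi in zip(grad_fs, x)]) / (n * (1 - p))
        else:
            # aggregation step toward the average model
            g = lam * (x - x.mean(axis=0)) / (n * p)
        x -= alpha * g
    return x

# Example: quadratic risks f_i(x) = 0.5 * ||x - b_i||^2 with heterogeneous
# optima b_i. lam = 0 recovers purely local models (x_i -> b_i); a large lam
# approaches one shared global model.
b = np.array([[1.0, 2.0], [3.0, -1.0], [-2.0, 0.5]])
grads = [lambda x, bi=bi: x - bi for bi in b]
models = sgd_mixture(grads, x0=np.zeros_like(b), lam=1.0)
```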
Related papers
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Personalized Federated Learning via Gradient Modulation for Heterogeneous Text Summarization [21.825321314169642]
We propose a federated learning text summarization scheme, which allows users to share the global model in a cooperative learning manner without sharing raw data.
The proposed method, FedSUMM, achieves faster model convergence as a personalized federated learning (PFL) algorithm for task-specific text summarization.
arXiv Detail & Related papers (2023-04-23T03:18:46Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
In the real world, data samples usually follow a long-tailed distribution, and FL on decentralized, long-tailed data yields a poorly behaved global model.
In this work, we integrate local real data with global gradient prototypes to form balanced local datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Exploiting Personalized Invariance for Better Out-of-distribution Generalization in Federated Learning [13.246981646250518]
This paper presents a general dual-regularized learning framework that exploits personalized invariance, in contrast to existing personalized federated learning methods.
We show that our method is superior to existing federated learning and invariant learning methods across diverse out-of-distribution and non-IID data cases.
arXiv Detail & Related papers (2022-11-21T08:17:03Z)
- FedGen: Generalizable Federated Learning for Sequential Data [8.784435748969806]
In many real-world distributed settings, spurious correlations exist due to biases and data sampling issues.
We present a generalizable federated learning framework called FedGen, which allows clients to identify and distinguish between spurious and invariant features.
We show that FedGen yields models with significantly better generalization, outperforming current federated learning approaches in accuracy by over 24%.
arXiv Detail & Related papers (2022-11-03T15:48:14Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework that simultaneously addresses two challenges in this setting: distribution shift across organizations and inter-client noise.
We provide a comprehensive theoretical analysis, including robustness, convergence, and generalization guarantees.
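The local mixup named in this paper's title is presumably the standard mixup recipe applied within each client's local batch; a minimal sketch of that generic recipe follows (the Beta parameter alpha_mix is an assumed default, and this is generic mixup rather than DRFLM's exact procedure).

```python
import numpy as np

def local_mixup(x, y, alpha_mix=0.2, rng=None):
    """Standard mixup on one client's local batch: convex-combine random
    pairs of examples and their labels with a Beta-distributed weight.
    Expects x as an array of inputs and y as one-hot / soft labels."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha_mix, alpha_mix)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix
```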
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Decentralised Person Re-Identification with Selective Knowledge Aggregation [56.40855978874077]
Existing person re-identification (Re-ID) methods mostly follow a centralised learning paradigm that gathers all training data into one collection for model learning.
Two recent works have introduced decentralised (federated) Re-ID learning for constructing a globally generalised model (server).
However, these methods poorly address how to adapt the generalised model to maximise its performance on each client's domain-specific Re-ID task.
We present a new Selective Knowledge Aggregation approach to decentralised person Re-ID to optimise the trade-off between model personalisation and generalisation.
arXiv Detail & Related papers (2021-10-21T18:09:53Z)
- Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated multi-task learning (MTL) under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate our approach on personalized mood prediction from real-world mobile data, where privacy is key.
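One way to read this design is that each device keeps a private representation network while only the shared parameters are averaged across devices; a minimal, hypothetical sketch of such a parameter split under FedAvg-style averaging follows. The dictionary layout and parameter names (W_feat, W_head) are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def average_global_parts(devices):
    """FedAvg-style round that averages only the shared 'global' parameters,
    leaving each device's 'local' representation parameters untouched."""
    global_avg = {
        k: np.mean([d["global"][k] for d in devices], axis=0)
        for k in devices[0]["global"]
    }
    for d in devices:
        d["global"] = {k: v.copy() for k, v in global_avg.items()}
    return devices

# Each device holds a private feature extractor plus shared head weights.
devices = [
    {"local": {"W_feat": np.random.randn(8, 4)},
     "global": {"W_head": np.random.randn(4, 2)}}
    for _ in range(3)
]
devices = average_global_parts(devices)
```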
arXiv Detail & Related papers (2020-01-06T12:40:21Z)