Adaptive Federated Learning with Auto-Tuned Clients
- URL: http://arxiv.org/abs/2306.11201v3
- Date: Thu, 2 May 2024 15:44:36 GMT
- Title: Adaptive Federated Learning with Auto-Tuned Clients
- Authors: Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe, Anastasios Kyrillidis
- Abstract summary: Federated learning (FL) is a distributed machine learning framework where the global model of a central server is trained via multiple collaborative steps by participating clients without sharing their data.
We propose $\Delta$-SGD, a simple step size rule for SGD that enables each client to use its own step size by adapting to the local smoothness of the function each client is optimizing.
- Score: 8.868957280690832
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a distributed machine learning framework where the global model of a central server is trained via multiple collaborative steps by participating clients without sharing their data. While FL is a flexible framework in which the distribution of local data, the participation rate, and the computing power of each client can vary greatly, such flexibility gives rise to many new challenges, especially in hyperparameter tuning on the client side. We propose $\Delta$-SGD, a simple step size rule for SGD that enables each client to use its own step size by adapting to the local smoothness of the function each client is optimizing. We provide theoretical and empirical results where the benefit of the client adaptivity is shown in various FL scenarios.
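As a rough Python/NumPy sketch of the idea, the snippet below caps the SGD step size with an inverse estimate of the local smoothness computed from consecutive iterates and gradients; it illustrates the spirit of $\Delta$-SGD rather than the paper's exact rule.

```python
import numpy as np

def delta_sgd(grad_fn, x0, steps=100, eta0=1e-3):
    """Smoothness-adaptive SGD sketch in the spirit of Delta-SGD.

    grad_fn(x) returns a (possibly stochastic) gradient at x. The step
    size is capped by an inverse estimate of the local smoothness,
    ||x_k - x_{k-1}|| / (2 ||g_k - g_{k-1}||), so no global Lipschitz
    constant has to be hand-tuned. Illustrative only -- not the paper's
    exact Delta-SGD rule.
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad_fn(x_prev)
    eta_prev, theta = eta0, np.inf
    x = x_prev - eta_prev * g_prev
    for _ in range(steps):
        g = grad_fn(x)
        diff_g = np.linalg.norm(g - g_prev)        # gradient change
        diff_x = np.linalg.norm(x - x_prev)        # iterate change
        # Inverse local-smoothness cap; infinite if the gradient is flat.
        cap = diff_x / (2.0 * diff_g) if diff_g > 0 else np.inf
        eta = min(np.sqrt(1.0 + theta) * eta_prev, cap)
        if not np.isfinite(eta):                   # both terms infinite
            eta = eta_prev
        theta = eta / eta_prev
        x_prev, g_prev, eta_prev = x, g, eta
        x = x - eta * g
    return x

# Toy usage: minimize an ill-conditioned quadratic without tuning a step size.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0])
    print(delta_sgd(lambda x: A @ x, np.array([5.0, 5.0])))
```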
Related papers
- Efficient Model Personalization in Federated Learning via Client-Specific Prompt Generation [38.42808389088285]
Federated learning (FL) is a decentralized learning framework that trains models across multiple distributed clients without sharing their data, preserving privacy.
We propose a novel personalized FL framework of client-specific Prompt Generation (pFedPG).
pFedPG learns to deploy a personalized prompt generator at the server that produces client-specific visual prompts, efficiently adapting frozen backbones to local data distributions.
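A minimal sketch of the client-specific prompt idea, with hypothetical shapes and wiring (not pFedPG's actual architecture): the server learns one embedding per client plus a shared generator, and only these small pieces train while the backbone stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only.
n_clients, embed_dim, prompt_dim, in_dim, feat_dim = 4, 8, 6, 16, 16

client_embed = rng.normal(size=(n_clients, embed_dim))       # learned, per client
generator = rng.normal(size=(embed_dim, prompt_dim)) * 0.1   # learned, shared
backbone = rng.normal(size=(feat_dim, in_dim)) * 0.1         # pretrained, frozen

def forward(client_id, x):
    """Prompt-conditioned features from a frozen backbone."""
    prompt = client_embed[client_id] @ generator   # client-specific prompt
    feats = np.tanh(backbone @ x)                  # frozen backbone features
    return np.concatenate([prompt, feats])         # fed to a small trainable head

print(forward(0, rng.normal(size=in_dim)).shape)   # -> (prompt_dim + feat_dim,)
```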
arXiv Detail & Related papers (2023-08-29T15:03:05Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., with a 6.64% improvement over the top-performing method at less than 15% of the communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data-uniform sampling strategy for federated learning (FedSampling).
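A minimal sketch of data-uniform sampling, assuming client data sizes are known (FedSampling itself estimates them in a privacy-preserving way): sampling clients in proportion to their data sizes makes every individual training example equally likely to participate, unlike uniform client sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

sizes = np.array([10, 200, 50, 1000, 40])   # hypothetical client data sizes

def sample_clients(n_draws):
    """Draw clients with probability proportional to data size, so each
    training sample is equally likely to be used in a round."""
    p = sizes / sizes.sum()
    return rng.choice(len(sizes), size=n_draws, p=p)

# Empirical frequencies roughly match p = sizes / sizes.sum().
print(np.bincount(sample_clients(10_000), minlength=len(sizes)) / 10_000)
```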
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- Personalized Federated Learning with Multi-branch Architecture [0.0]
Federated learning (FL) enables multiple clients to collaboratively train models without requiring clients to reveal their raw data to each other.
We propose a new PFL method (pFedMB) using a multi-branch architecture, which achieves personalization by splitting each layer of a neural network into multiple branches and assigning client-specific weights to each branch.
We experimentally show that pFedMB performs better than state-of-the-art PFL methods on the CIFAR10 and CIFAR100 datasets.
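A toy multi-branch layer in this spirit, with illustrative shapes and weighting (not pFedMB's exact design): branch parameters are shared and aggregated across clients, while each client keeps its own mixing weights, which is where the personalization lives.

```python
import numpy as np

rng = np.random.default_rng(0)

n_branches, d_in, d_out = 3, 8, 4
branches = rng.normal(size=(n_branches, d_out, d_in))  # shared parameters

def client_layer(x, client_logits):
    """Combine branch outputs with client-specific softmax weights."""
    w = np.exp(client_logits - client_logits.max())
    w = w / w.sum()                                # per-client branch weights
    outs = np.einsum('bod,d->bo', branches, x)     # each branch's output
    return w @ outs                                # weighted combination

x = rng.normal(size=d_in)
print(client_layer(x, np.array([0.5, -1.0, 2.0])))  # one client's output
```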
arXiv Detail & Related papers (2022-11-15T06:30:57Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
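A minimal sketch of this recipe, with illustrative helper names and a ridge-regression head standing in for the per-client parameters: all clients contribute to a common encoder, while each client fits only a small personal head, which stays cheap even for slow ("straggler") devices.

```python
import numpy as np

def average_encoders(encoders, n_samples):
    """FedAvg over the shared representation parameters only."""
    w = np.asarray(n_samples, dtype=float)
    w = w / w.sum()                         # weight clients by data size
    return sum(wi * e for wi, e in zip(w, encoders))

def fit_personal_head(feats, labels, reg=1e-3):
    """Per-client ridge regression on top of frozen shared features."""
    d = feats.shape[1]
    return np.linalg.solve(feats.T @ feats + reg * np.eye(d),
                           feats.T @ labels)
```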
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- FRAug: Tackling Federated Learning with Non-IID Features via Representation Augmentation [31.12851987342467]
Federated Learning (FL) is a decentralized learning paradigm in which multiple clients collaboratively train deep learning models.
We propose Federated Representation Augmentation (FRAug) to tackle this practical and challenging problem.
Our approach generates synthetic client-specific samples in the embedding space to augment the usually small client datasets.
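A minimal sketch of embedding-space augmentation, using a Gaussian fit as a stand-in for FRAug's learned shared generator and client-specific transformation:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_embeddings(client_embs, n_new):
    """Fit a Gaussian to a client's (typically small) set of feature
    embeddings and sample synthetic ones. Only illustrates the idea of
    synthesizing client-specific samples in the embedding space."""
    mu = client_embs.mean(axis=0)
    cov = np.cov(client_embs, rowvar=False)
    cov += 1e-6 * np.eye(client_embs.shape[1])   # keep it well-conditioned
    return rng.multivariate_normal(mu, cov, size=n_new)

small_client_set = rng.normal(size=(12, 5))           # 12 embeddings of dim 5
print(augment_embeddings(small_client_set, 3).shape)  # -> (3, 5)
```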
arXiv Detail & Related papers (2022-05-30T07:43:42Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively aggregates a shared global model from multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), a plug-and-play algorithm with two main components.
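A minimal sketch of the noise-aware reweighting idea, with an illustrative scoring and weighting scheme (Fed-NCL's actual components differ in detail): clients whose updates look label-noisy are down-weighted instead of being averaged in blindly.

```python
import numpy as np

def noise_aware_aggregate(client_models, noise_scores, temp=1.0):
    """Down-weight clients with high noise scores (e.g., high held-out
    loss). Illustrative weighting, not Fed-NCL's exact scheme."""
    s = np.asarray(noise_scores, dtype=float)
    w = np.exp(-s / temp)          # noisier client -> smaller weight
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, client_models))

models = [np.ones(3), 10 * np.ones(3), np.ones(3)]   # one corrupted client
print(noise_aware_aggregate(models, noise_scores=[0.1, 5.0, 0.2]))
```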
arXiv Detail & Related papers (2021-06-24T11:09:17Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model tailored to its client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
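A minimal sketch of per-client federation weights, loosely following the first-order weighting idea with illustrative normalization: a candidate model earns weight in proportion to how much it lowers this client's local loss, scaled by its parameter distance from the current model.

```python
import numpy as np

def federation_weights(own_model, own_loss, other_models, other_losses):
    """Illustrative per-client weights: reward models that help locally."""
    w = []
    for m, l in zip(other_models, other_losses):
        gain = max(0.0, own_loss - l)                  # does it help locally?
        dist = np.linalg.norm(m - own_model) + 1e-12   # avoid division by zero
        w.append(gain / dist)
    w = np.asarray(w)
    return w / w.sum() if w.sum() > 0 else w   # all zero: keep own model

own = np.zeros(3)
others = [np.array([0.1, 0.0, 0.0]), np.array([2.0, 2.0, 2.0])]
print(federation_weights(own, own_loss=1.0,
                         other_models=others, other_losses=[0.5, 3.0]))
```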
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
- Client Adaptation improves Federated Learning with Simulated Non-IID Clients [1.0896567381206714]
We present a federated learning approach for learning a client adaptable, robust model when data is non-identically and non-independently distributed (non-IID) across clients.
We show that adding learned client-specific conditioning improves model performance, and the approach is shown to work on balanced and imbalanced data sets from both audio and image domains.
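A minimal sketch of learned client-specific conditioning as a FiLM-style scale and shift, which is one plausible instantiation rather than the paper's exact module: a small client embedding modulates shared features channel-wise, so one shared model adapts its behavior per client.

```python
import numpy as np

rng = np.random.default_rng(0)

embed_dim, n_channels = 4, 8                           # hypothetical sizes
W_gamma = rng.normal(size=(embed_dim, n_channels)) * 0.1  # learned
W_beta = rng.normal(size=(embed_dim, n_channels)) * 0.1   # learned

def condition(features, client_embedding):
    """FiLM-style per-channel scale and shift from a client embedding."""
    gamma = 1.0 + client_embedding @ W_gamma   # per-channel scale
    beta = client_embedding @ W_beta           # per-channel shift
    return gamma * features + beta

print(condition(rng.normal(size=n_channels), rng.normal(size=embed_dim)))
```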
arXiv Detail & Related papers (2020-07-09T13:48:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.