User-Centric Federated Learning
- URL: http://arxiv.org/abs/2110.09869v1
- Date: Tue, 19 Oct 2021 11:49:06 GMT
- Title: User-Centric Federated Learning
- Authors: Mohamad Mestoukirdi, Matteo Zecchin, David Gesbert, Qianrui Li, and
Nicolas Gresset
- Abstract summary: We propose a broadcast protocol that limits the number of personalized streams while retaining the essential advantages of our learning scheme.
Our approach is shown to enjoy higher personalization capabilities, faster convergence, and better communication efficiency compared to other competing baseline solutions.
- Score: 20.830970477768485
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data heterogeneity across participating devices poses one of the main
challenges in federated learning as it has been shown to greatly hamper its
convergence time and generalization capabilities. In this work, we address this
limitation by enabling personalization using multiple user-centric aggregation
rules at the parameter server. Our approach potentially produces a personalized
model for each user at the cost of some extra downlink communication overhead.
To strike a trade-off between personalization and communication efficiency, we
propose a broadcast protocol that limits the number of personalized streams
while retaining the essential advantages of our learning scheme. Through
simulation results, our approach is shown to enjoy higher personalization
capabilities, faster convergence, and better communication efficiency compared
to other competing baseline solutions.
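For intuition, here is a minimal NumPy sketch of the two ingredients described in the abstract: a user-centric weighted aggregation rule and a clustering step that caps the number of personalized downlink streams. The similarity metric, the k-means-style grouping, and all names are illustrative assumptions, not the paper's exact protocol.

```python
# Hedged sketch: user-centric aggregation with a limited number of
# personalized streams. Clustering rule and stream count S are assumptions.
import numpy as np

def user_centric_aggregate(updates, weights):
    """Aggregate client updates with one user-specific weight vector.

    updates: (K, d) array, one flattened model update per client.
    weights: (K,) array of collaboration weights for the target user.
    """
    weights = weights / weights.sum()
    return weights @ updates  # weighted average, shape (d,)

def build_streams(updates, num_streams, seed=0):
    """Group clients into num_streams clusters of similar updates so the
    server only broadcasts one personalized model per stream."""
    rng = np.random.default_rng(seed)
    K = updates.shape[0]
    # k-means-style assignment on the raw updates (illustrative choice)
    centers = updates[rng.choice(K, num_streams, replace=False)]
    for _ in range(10):
        dists = np.linalg.norm(updates[:, None, :] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        for s in range(num_streams):
            if np.any(assign == s):
                centers[s] = updates[assign == s].mean(axis=0)
    return assign

K, d, S = 8, 16, 3
rng = np.random.default_rng(1)
updates = rng.normal(size=(K, d))
assign = build_streams(updates, S)
# One personalized model per stream instead of one per user.
stream_models = [
    user_centric_aggregate(updates, (assign == s).astype(float))
    for s in range(S) if np.any(assign == s)
]
print([m.shape for m in stream_models])
```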
Related papers
- PP-TIL: Personalized Planning for Autonomous Driving with Instance-based Transfer Imitation Learning [4.533437433261497]
We propose an instance-based transfer imitation learning approach for personalized motion planning.
We extract the style feature distribution from user demonstrations and use it to construct a regularization term that approximates the user's style.
Compared to the baseline methods, our approach mitigates the overfitting issue caused by sparse user data.
arXiv Detail & Related papers (2024-07-26T07:51:11Z)
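As a toy illustration of the style-regularization idea, the sketch below penalizes the gap between simple style features (mean speed and mean acceleration, an assumed feature choice) of a planned trajectory and those of the user's demonstrations; it is not PP-TIL's actual feature extractor or objective.

```python
# Hedged sketch of a style-regularization term: compare the plan's style
# features against the mean features of user demonstrations.
import numpy as np

def style_features(trajectory, dt=0.1):
    """trajectory: (T, 2) array of x,y positions sampled every dt seconds."""
    vel = np.diff(trajectory, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    return np.array([speed.mean(), np.linalg.norm(acc, axis=1).mean()])

def style_regularizer(plan, demos):
    """Squared distance between the plan's style features and the mean
    feature vector of the user's demonstrations."""
    demo_mean = np.mean([style_features(d) for d in demos], axis=0)
    return np.sum((style_features(plan) - demo_mean) ** 2)

rng = np.random.default_rng(0)
demos = [np.cumsum(rng.normal(size=(50, 2)), axis=0) for _ in range(5)]
plan = np.cumsum(rng.normal(size=(50, 2)), axis=0)
print(style_regularizer(plan, demos))
```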
- Decentralized Personalized Federated Learning [4.5836393132815045]
We focus on creating a collaboration graph that guides each client in selecting suitable collaborators for training personalized models.
Unlike traditional methods, our formulation identifies collaborators at a granular level by considering greedy relations of clients.
We achieve this through a bi-level optimization framework that employs a constrained algorithm.
arXiv Detail & Related papers (2024-06-10T17:58:48Z)
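A hedged sketch of collaborator selection: each client greedily keeps the peers whose updates look most similar to its own. Cosine similarity and the budget `m` are illustrative stand-ins for the paper's bi-level, constrained formulation.

```python
# Minimal sketch of greedy collaborator selection for a collaboration graph.
import numpy as np

def collaboration_graph(updates, m=2):
    """updates: (K, d) client updates; returns {client: [collaborators]}."""
    unit = updates / np.linalg.norm(updates, axis=1, keepdims=True)
    sim = unit @ unit.T                 # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)      # a client does not pick itself
    return {i: list(np.argsort(sim[i])[::-1][:m]) for i in range(len(updates))}

rng = np.random.default_rng(0)
updates = rng.normal(size=(6, 10))
print(collaboration_graph(updates))
```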
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
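The sketch below shows a plain client-local AMSGrad step driven by a client-specific learning rate; FedLALR's actual auto-tuning rule is not reproduced, so `client_lr` is just a placeholder each client would set for itself.

```python
# Hedged sketch of a client-local AMSGrad update with a per-client rate.
import numpy as np

def amsgrad_step(w, g, state, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad update; state holds (m, v, v_hat) for this client."""
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    v_hat = np.maximum(v_hat, v)        # AMSGrad: non-decreasing 2nd moment
    w = w - lr * m / (np.sqrt(v_hat) + eps)
    return w, (m, v, v_hat)

d = 5
w = np.zeros(d)
state = (np.zeros(d), np.zeros(d), np.zeros(d))
client_lr = 0.05                         # each client would tune its own
for _ in range(3):
    g = 2 * w - 1.0                      # gradient of sum((w - 0.5)^2)
    w, state = amsgrad_step(w, g, state, client_lr)
print(w)
```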
- FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts [48.78037006856208]
FedJETs is a novel solution that uses a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s).
Our approach can improve accuracy up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
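For intuition, here is a generic top-k Mixture-of-Experts routing step of the kind FedJETs builds on; the linear gate, `k=2`, and the toy experts are assumptions, not the paper's architecture.

```python
# Minimal sketch of MoE routing: score experts, query only the top k.
import numpy as np

def top_k_gate(x, gate_W, k=2):
    """Return indices and softmax weights of the k most relevant experts."""
    logits = gate_W @ x
    top = np.argsort(logits)[::-1][:k]
    z = np.exp(logits[top] - logits[top].max())
    return top, z / z.sum()

def moe_predict(x, experts, gate_W, k=2):
    top, w = top_k_gate(x, gate_W, k)
    # Weighted combination of only the selected experts' outputs.
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [lambda x, W=rng.normal(size=(3, d)): W @ x for _ in range(n_experts)]
gate_W = rng.normal(size=(n_experts, d))
x = rng.normal(size=d)
print(moe_predict(x, experts, gate_W))
```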
- User-Centric Federated Learning: Trading off Wireless Resources for Personalization [18.38078866145659]
In Federated Learning (FL) systems, statistical heterogeneity increases the algorithm convergence time and reduces the generalization performance.
To tackle the above problems without violating the privacy constraints that FL imposes, personalized FL methods have to couple statistically similar clients without directly accessing their data.
In this work, we design user-centric aggregation rules that are based on readily available gradient information and are capable of producing personalized models for each FL client.
Our algorithm outperforms popular personalized FL baselines in terms of average accuracy, worst node performance, and training communication overhead.
arXiv Detail & Related papers (2023-04-25T15:45:37Z)
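A minimal sketch of gradient-based user-centric aggregation weights: client j contributes more to user i's model when their gradients point in similar directions. The cosine-softmax form and the temperature are illustrative assumptions, not the paper's exact rule.

```python
# Hedged sketch: one row-stochastic weight matrix built from gradients.
import numpy as np

def aggregation_weights(grads, temp=1.0):
    """grads: (K, d), one local gradient per client.
    Returns a (K, K) matrix W; row i mixes client updates for user i."""
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sim = g @ g.T                       # cosine similarity between clients
    expo = np.exp(sim / temp)
    return expo / expo.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
grads = rng.normal(size=(5, 12))
W = aggregation_weights(grads)
personalized = W @ grads                # one aggregated update per user
print(W.round(2))
```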
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
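The core over-the-air trick can be shown in a few lines: if every client transmits its (pre-scaled) update on the same radio resource, the channel sums the signals, so the server obtains a noisy aggregate without receiving individual updates. Unit channel gains and Gaussian noise are simplifying assumptions.

```python
# Minimal sketch of analog over-the-air aggregation.
import numpy as np

def over_the_air_aggregate(updates, noise_std=0.01, rng=None):
    rng = rng or np.random.default_rng()
    superposed = updates.sum(axis=0)            # done "by the channel"
    noise = rng.normal(scale=noise_std, size=superposed.shape)
    return (superposed + noise) / len(updates)  # de-scale to an average

rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 6))
print(over_the_air_aggregate(updates, rng=rng))
print(updates.mean(axis=0))   # close to the noiseless average
```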
- Federated Pruning: Improving Neural Network Efficiency with Federated Learning [24.36174705715827]
We propose Federated Pruning to train a reduced model under the federated setting.
We explore different pruning schemes and provide empirical evidence of the effectiveness of our methods.
arXiv Detail & Related papers (2022-09-14T00:48:37Z)
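As one concrete instance of a pruning scheme inside a federated loop, the sketch below derives a global magnitude mask and applies client updates only to surviving weights; the threshold rule is an assumed example, not necessarily one of the schemes the paper evaluates.

```python
# Hedged sketch: magnitude mask at the server, masked client updates.
import numpy as np

def magnitude_mask(w, sparsity=0.5):
    """Keep the largest-|w| fraction (1 - sparsity) of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return (np.abs(w) >= thresh).astype(w.dtype)

rng = np.random.default_rng(0)
w_global = rng.normal(size=20)
mask = magnitude_mask(w_global, sparsity=0.5)
client_update = rng.normal(size=20)
w_global = w_global + mask * client_update   # only unpruned weights move
print(int(mask.sum()), "of", mask.size, "weights kept")
```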
- On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z)
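A minimal sketch of the standard clip-and-noise Gaussian mechanism commonly used for differentially private FL; the clip norm and noise multiplier are illustrative, and the paper's multi-base-station scheduling is not modeled here.

```python
# Hedged sketch: clip each client update, add calibrated Gaussian noise.
import numpy as np

def dp_aggregate(updates, clip=1.0, noise_mult=1.0, rng=None):
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip / np.linalg.norm(u)) for u in updates]
    total = np.sum(clipped, axis=0)
    total += rng.normal(scale=noise_mult * clip, size=total.shape)
    return total / len(updates)

rng = np.random.default_rng(0)
updates = rng.normal(size=(8, 10))
print(dp_aggregate(updates, rng=rng))
```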
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
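The shared-representation idea can be illustrated with a linear toy problem: alternate between clients refitting private heads and the server refitting one common feature map. Alternating least squares here is an assumed stand-in for the paper's actual procedure.

```python
# Hedged sketch: shared linear representation B plus per-client heads.
import numpy as np

rng = np.random.default_rng(0)
d, r, K, n = 10, 3, 4, 50
B_true = rng.normal(size=(d, r))                     # shared representation
heads_true = [rng.normal(size=r) for _ in range(K)]  # private head per client
data = []
for h in heads_true:
    X = rng.normal(size=(n, d))
    data.append((X, X @ B_true @ h + 0.01 * rng.normal(size=n)))

B_hat = rng.normal(size=(d, r))
for _ in range(25):
    # clients: refit personal heads for the current shared map
    heads_hat = [np.linalg.lstsq(X @ B_hat, y, rcond=None)[0] for X, y in data]
    # server: refit the shared map given all clients' heads
    A = np.vstack([np.kron(X, h) for (X, _), h in zip(data, heads_hat)])
    y_all = np.concatenate([y for _, y in data])
    B_hat = np.linalg.lstsq(A, y_all, rcond=None)[0].reshape(d, r)

resid = sum(np.linalg.norm(X @ B_hat @ h - y)
            for (X, y), h in zip(data, heads_hat))
print("total residual:", round(resid, 4))
```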
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose Dis-PFL, a novel personalized federated learning framework built on a decentralized (peer-to-peer) communication protocol.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
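A hedged sketch of personalized sparse masks: each client holds a binary mask and mixes with peers only on its active coordinates. Fixed random masks stand in for Dis-PFL's dynamic sparse training.

```python
# Minimal sketch: peer averaging restricted to one client's sparse mask.
import numpy as np

def make_mask(d, density, rng):
    mask = np.zeros(d)
    mask[rng.choice(d, int(density * d), replace=False)] = 1.0
    return mask

def peer_average(w_self, peer_weights, mask):
    """Average with peers only on this client's active coordinates."""
    stacked = np.vstack([w_self] + peer_weights)
    avg = stacked.mean(axis=0)
    return mask * avg + (1 - mask) * w_self   # untouched where mask is 0

rng = np.random.default_rng(0)
d = 12
mask = make_mask(d, density=0.5, rng=rng)
w = rng.normal(size=d)
peers = [rng.normal(size=d) for _ in range(3)]
print(peer_average(w, peers, mask))
```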
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.