QuPeL: Quantized Personalization with Applications to Federated Learning
- URL: http://arxiv.org/abs/2102.11786v1
- Date: Tue, 23 Feb 2021 16:43:51 GMT
- Title: QuPeL: Quantized Personalization with Applications to Federated Learning
- Authors: Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi
- Abstract summary: In this work, we introduce a textitquantized and textitpersonalized FL algorithm QuPeL that facilitates collective training with heterogeneous clients.
For personalization, we allow clients to learn textitcompressed personalized models with different quantization parameters depending on their resources.
Numerically, we show that optimizing over the quantization levels increases the performance and we validate that QuPeL outperforms both FedAvg and local training of clients in a heterogeneous setting.
- Score: 8.420943739336067
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditionally, federated learning (FL) aims to train a single global model
while collaboratively using multiple clients and a server. Two natural
challenges that FL algorithms face are heterogeneity in data across clients and
collaboration of clients with {\em diverse resources}. In this work, we
introduce a \textit{quantized} and \textit{personalized} FL algorithm QuPeL
that facilitates collective training with heterogeneous clients while
respecting resource diversity. For personalization, we allow clients to learn
\textit{compressed personalized models} with different quantization parameters
depending on their resources. Towards this, first we propose an algorithm for
learning quantized models through a relaxed optimization problem, where
quantization values are also optimized over. When each client participating in
the (federated) learning process has different requirements of the quantized
model (both in value and precision), we formulate a quantized personalization
framework by introducing a penalty term for local client objectives against a
globally trained model to encourage collaboration. We develop an alternating
proximal gradient update for solving this quantized personalization problem,
and we analyze its convergence properties. Numerically, we show that optimizing
over the quantization levels improves performance, and we validate that QuPeL
outperforms both FedAvg and local training of clients in a heterogeneous
setting.
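
The abstract describes two coupled pieces: a relaxed quantization objective in which the quantization values themselves are optimized, and a penalty term that ties each client's personalized model to the globally trained model, solved with alternating proximal gradient updates. The snippet below is a minimal NumPy sketch of one such client round under these assumptions; the function names, hyperparameters (lr, lambda_q, mu), and the closed-form proximal step are illustrative choices, not the authors' implementation.

```python
import numpy as np

def nearest_level(w, levels):
    """Map each weight to its nearest quantization level (hard quantization Q(w))."""
    idx = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx], idx

def client_update(w_personal, levels, w_global, local_loss_grad,
                  lr=0.1, lambda_q=1.0, mu=1.0, steps=10):
    """One client round of alternating proximal-gradient updates (sketch).

    Relaxed local objective:
        f_i(w) + (lambda_q / 2) * ||w - Q(w; levels)||^2 + (mu / 2) * ||w - w_global||^2
    where both the personalized weights w and the quantization levels are optimized.
    """
    for _ in range(steps):
        # Gradient step on the local loss f_i.
        v = w_personal - lr * local_loss_grad(w_personal)
        # Proximal step for the two quadratic penalties, with Q(v) held fixed at the
        # current hard quantization of v (closed form because both terms are quadratic).
        q, _ = nearest_level(v, levels)
        w_personal = (v + lr * (lambda_q * q + mu * w_global)) / (1.0 + lr * (lambda_q + mu))
        # Alternating step in the quantization levels: each level moves to the mean
        # of the weights currently assigned to it.
        _, idx = nearest_level(w_personal, levels)
        for k in range(len(levels)):
            assigned = w_personal[idx == k]
            if assigned.size:
                levels[k] = assigned.mean()
    return w_personal, levels

# Toy usage with a quadratic local loss whose optimum is client-specific.
rng = np.random.default_rng(0)
target = rng.normal(size=20)
local_grad = lambda w: w - target            # gradient of 0.5 * ||w - target||^2
w_i, levels_i = client_update(
    w_personal=rng.normal(size=20),
    levels=np.array([-0.5, 0.0, 0.5]),
    w_global=np.zeros(20),
    local_loss_grad=local_grad,
)
```

A server step, not shown here, would aggregate the client models between rounds to refresh w_global, in the spirit of FedAvg-style training.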
Related papers
- Personalized Hierarchical Split Federated Learning in Wireless Networks [24.664469755746463]
We propose a personalized hierarchical split federated learning (PHSFL) algorithm that is specially designed to achieve better personalization performance.
We first perform extensive theoretical analysis to understand the impact of model splitting and hierarchical model aggregations on the global model.
Once the global model is trained, we fine-tune each client to obtain the personalized models.
arXiv Detail & Related papers (2024-11-09T02:41:53Z)
- Personalized Quantum Federated Learning for Privacy Image Classification [52.04404538764307]
A personalized quantum federated learning algorithm is proposed to enhance the personalization of the client model in the case of an imbalanced distribution of images.
The experimental results indicate that the personalized quantum federated learning algorithm can obtain global and local models with excellent performance.
arXiv Detail & Related papers (2024-10-03T14:53:04Z)
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Personalized Federated Learning via Amortized Bayesian Meta-Learning [21.126405589760367]
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
arXiv Detail & Related papers (2023-07-05T11:58:58Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Personalizing or Not: Dynamically Personalized Federated Learning with Incentives [37.42347737911428]
We propose personalized federated learning (FL) for learning personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
arXiv Detail & Related papers (2022-08-12T09:51:20Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
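As a concrete reading of that summary, one can think of a representation shared by all clients with a small user-specific head fit locally. The sketch below illustrates this split with a linear representation and a least-squares head; both are illustrative assumptions, not the cited paper's actual architecture or training procedure.

```python
import numpy as np

def personalize(shared_B, client_X, client_y):
    """Fit a client-specific head w_i on top of a shared representation B (sketch).

    Prediction model: y ~ (X @ B) @ w_i, with B common to all clients and
    w_i learned from this client's data alone.
    """
    Z = client_X @ shared_B                       # client data mapped into the shared representation
    w_i, *_ = np.linalg.lstsq(Z, client_y, rcond=None)
    return w_i

rng = np.random.default_rng(1)
B = rng.normal(size=(50, 5))                      # shared representation (learned jointly across clients)
X_i, y_i = rng.normal(size=(30, 50)), rng.normal(size=30)
w_i = personalize(B, X_i, y_i)                    # personalized parameters for client i
```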
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Parameterized Knowledge Transfer for Personalized Federated Learning [11.223753730705374]
We propose a novel training framework to employ personalized models for different clients.
It is demonstrated that the proposed framework is the first federated learning paradigm that realizes personalized model training.
arXiv Detail & Related papers (2021-11-04T13:41:45Z)
- QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning [8.420943739336067]
Federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server.
We introduce a quantized and personalized FL algorithm QuPeD that facilitates collective (personalized model compression) training.
Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
arXiv Detail & Related papers (2021-07-29T10:55:45Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)