QuPeD: Quantized Personalization via Distillation with Applications to
Federated Learning
- URL: http://arxiv.org/abs/2107.13892v1
- Date: Thu, 29 Jul 2021 10:55:45 GMT
- Title: QuPeD: Quantized Personalization via Distillation with Applications to
Federated Learning
- Authors: Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi
- Abstract summary: federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server.
We introduce a quantized and personalized FL algorithm QuPeD that facilitates collective (personalized model compression) training.
Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
- Score: 8.420943739336067
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditionally, federated learning (FL) aims to train a single global model
while collaboratively using multiple clients and a server. Two natural
challenges that FL algorithms face are heterogeneity in data across clients and
collaboration of clients with {\em diverse resources}. In this work, we
introduce a \textit{quantized} and \textit{personalized} FL algorithm QuPeD
that facilitates collective (personalized model compression) training via
\textit{knowledge distillation} (KD) among clients who have access to
heterogeneous data and resources. For personalization, we allow clients to
learn \textit{compressed personalized models} with different quantization
parameters and model dimensions/structures. Towards this, first we propose an
algorithm for learning quantized models through a relaxed optimization problem,
where quantization values are also optimized over. When each client
participating in the (federated) learning process has different requirements
for the compressed model (both in model dimension and precision), we formulate
a compressed personalization framework by introducing knowledge distillation
loss for local client objectives collaborating through a global model. We
develop an alternating proximal gradient update for solving this compressed
personalization problem, and analyze its convergence properties. Numerically,
we validate that QuPeD outperforms competing personalized FL methods, FedAvg,
and local training of clients in various heterogeneous settings.
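The abstract bundles three mechanisms: a relaxed quantization penalty whose levels are themselves learned, a distillation term tying each personalized model to a global model, and an alternating proximal gradient update. Below is a minimal NumPy sketch of that alternating structure. The quadratic local loss, the l2 pull toward the global model (a stand-in for the KD loss, which the paper applies to model outputs rather than parameters), and all hyperparameters are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def nearest_centers(x, centers):
    """Assign every weight to its nearest quantization center."""
    idx = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers[idx], idx

def alternating_step(x, centers, w_global, grad_fn, lr=0.1, lam=0.1, mu=1.0):
    """One alternating update in the spirit of the paper's scheme:
    (1) gradient step on the local loss plus a pull toward the global
        model (an l2 stand-in for the knowledge-distillation loss),
    (2) proximal step for the relaxed penalty (mu/2)*||x - Q(x)||^2
        pulling weights toward their nearest centers,
    (3) re-fit each center to the mean of the weights assigned to it,
        i.e. the quantization values are themselves optimized over.
    grad_fn, lr, lam, mu are illustrative assumptions, not the paper's."""
    q, idx = nearest_centers(x, centers)
    x = x - lr * (grad_fn(x) + lam * (x - w_global))   # (1) gradient step
    x = (x + lr * mu * q) / (1.0 + lr * mu)            # (2) exact prox map
    for k in range(len(centers)):                      # (3) center re-fit
        if (idx == k).any():
            centers[k] = x[idx == k].mean()
    return x, centers

# Toy usage: quadratic local loss ||x - t||^2 / 2 and two quantization levels.
t = np.array([0.9, -1.1, 0.2, 1.0])
x, centers = np.zeros(4), np.array([-1.0, 1.0])
for _ in range(100):
    x, centers = alternating_step(x, centers, w_global=np.zeros(4),
                                  grad_fn=lambda x: x - t)
print(x, centers)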
Related papers
- FedSheafHN: Personalized Federated Learning on Graph-structured Data [22.825083541211168]
We propose a model called FedSheafHN, which embeds each client's local subgraph into a server-constructed collaboration graph.
Our model improves the integration and interpretation of complex client characteristics.
It also achieves fast model convergence and generalizes effectively to new clients.
arXiv Detail & Related papers (2024-05-25T04:51:41Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
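For contrast with FedAF's aggregation-free design, here is a minimal sketch of one round of the aggregate-then-adapt framework described above; the least-squares local objective and hyperparameters are illustrative stand-ins.

```python
import numpy as np

def aggregate_then_adapt_round(global_w, client_data, local_steps=5, lr=0.1):
    """One round of the traditional aggregate-then-adapt framework that
    FedAF does away with: each client adapts the broadcast global model
    on its local data, and the server averages the results weighted by
    dataset size. The least-squares loss is an illustrative stand-in."""
    updates, sizes = [], []
    for X, y in client_data:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * X.T @ (X @ w - y) / len(y)   # local gradient steps
        updates.append(w)
        sizes.append(len(y))
    total = float(sum(sizes))
    return sum(n * w for n, w in zip(sizes, updates)) / total
```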
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In many real-world applications, data samples are distributed across local devices.
In this paper, we focus on a special kind of non-i.i.d. scenario in which clients own incomplete sets of classes.
Our proposed algorithm named MAP could simultaneously achieve the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z)
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- FedDWA: Personalized Federated Learning with Dynamic Weight Adjustment [20.72576355616359]
We propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address the problem.
FedDWA computes personalized aggregation weights based on the models collected from clients.
We conduct extensive experiments using five real datasets and the results demonstrate that FedDWA can significantly reduce the communication traffic and achieve much higher model accuracy than the state-of-the-art approaches.
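A hypothetical sketch of per-client, model-based aggregation weights in the spirit of the summary; the softmax-over-distances rule below is an assumption for illustration, not FedDWA's actual weight computation.

```python
import numpy as np

def personalized_aggregate(models, i, temp=1.0):
    """Hypothetical per-client aggregation: client i weights the collected
    models by a softmax over negative parameter-space distances to its own
    model, then averages. FedDWA derives its dynamic weights differently;
    this only illustrates model-based, per-client aggregation weights."""
    d = np.array([np.linalg.norm(models[i] - m) for m in models])
    w = np.exp(-d / temp)
    w /= w.sum()
    return sum(wk * mk for wk, mk in zip(w, models))
```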
arXiv Detail & Related papers (2023-05-10T13:12:07Z)
- Personalizing or Not: Dynamically Personalized Federated Learning with Incentives [37.42347737911428]
We consider personalized federated learning (FL), in which clients learn personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
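The fallback rule the summary describes can be sketched directly; the validation-accuracy criterion is an assumed stand-in for whatever test DyPFL actually applies.

```python
def adopt_better_model(personal, global_model, val_accuracy):
    """Assumed decision rule: a client keeps its personalized model only
    when it beats the global model on held-out local data, matching the
    summary's 'adopt the global model when it performs better'."""
    if val_accuracy(personal) >= val_accuracy(global_model):
        return personal
    return global_model
```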
arXiv Detail & Related papers (2022-08-12T09:51:20Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure, with theoretical speedup guarantees, that simultaneously handles two such hurdles: straggling devices and heterogeneous client data.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
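A toy sketch of the shared-representation split described above, assuming a linear model and plain gradient updates; the paper's architecture and update rules will differ.

```python
import numpy as np

def alternating_round(phi, heads, client_data, lr=0.05, head_steps=10):
    """Representation-learning split assumed from the summary: many local
    updates of each client's personal head, then one averaged update of
    the shared representation phi. The linear model y ~ (X @ phi.T) @ head_i
    is a toy stand-in, not the paper's actual procedure."""
    grad_phi = np.zeros_like(phi)
    for i, (X, y) in enumerate(client_data):
        Z = X @ phi.T                                   # shared features
        for _ in range(head_steps):                     # fit personal head
            heads[i] -= lr * Z.T @ (Z @ heads[i] - y) / len(y)
        r = Z @ heads[i] - y                            # residual with fitted head
        grad_phi += np.outer(heads[i], X.T @ r / len(y)) / len(client_data)
    phi -= lr * grad_phi                                # shared-representation step
    return phi, heads
```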
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Parameterized Knowledge Transfer for Personalized Federated Learning [11.223753730705374]
We propose a novel training framework to employ personalized models for different clients.
It is demonstrated that the proposed framework is the first federated learning paradigm to realize personalized model training via parameterized knowledge transfer.
arXiv Detail & Related papers (2021-11-04T13:41:45Z)
- QuPeL: Quantized Personalization with Applications to Federated Learning [8.420943739336067]
In this work, we introduce a quantized and personalized FL algorithm QuPeL that facilitates collective training with heterogeneous clients.
For personalization, we allow clients to learn compressed personalized models with different quantization parameters depending on their resources.
Numerically, we show that optimizing over the quantization levels increases the performance and we validate that QuPeL outperforms both FedAvg and local training of clients in a heterogeneous setting.
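Why optimizing over quantization levels helps can be seen with a Lloyd-style (k-means) refinement, which QuPeL effectively folds into training rather than applying post hoc; the Gaussian weights and three-level grid below are illustrative assumptions.

```python
import numpy as np

def fit_levels(weights, levels, iters=20):
    """Lloyd/k-means refinement of quantization levels: alternately assign
    weights to the nearest level, then move each level to the mean of its
    cluster. Illustrates the gain from learned levels over a fixed grid."""
    for _ in range(iters):
        assign = np.argmin(np.abs(weights[:, None] - levels[None, :]), axis=1)
        for k in range(len(levels)):
            if (assign == k).any():
                levels[k] = weights[assign == k].mean()
    return levels

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)                 # toy "weights" to quantize
fixed = np.array([-1.0, 0.0, 1.0])          # fixed 3-level grid
learned = fit_levels(w.copy(), fixed.copy())
err = lambda lv: np.mean((w - lv[np.argmin(np.abs(w[:, None] - lv), axis=1)]) ** 2)
print(f"fixed grid MSE {err(fixed):.4f}  vs  learned levels MSE {err(learned):.4f}")
```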
arXiv Detail & Related papers (2021-02-23T16:43:51Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion of influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
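A simple leave-one-out proxy conveys the idea of parameter-space influence; this re-aggregation over a single round's updates is an assumed illustration, not the paper's estimator.

```python
import numpy as np

def client_influence(updates, sizes):
    """Influence proxy (assumed, not the paper's method): how far the
    aggregated model moves when client i's update is left out of the
    size-weighted average for one round."""
    sizes = np.asarray(sizes, float)
    full = sum(s * u for s, u in zip(sizes, updates)) / sizes.sum()
    influence = []
    for i in range(len(updates)):
        keep = [j for j in range(len(updates)) if j != i]
        rest = sum(sizes[j] * updates[j] for j in keep) / sizes[keep].sum()
        influence.append(np.linalg.norm(full - rest))
    return influence
```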
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
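The per-client weighting can be sketched with first-order quantities, as the title suggests: weight each candidate model by the validation-loss improvement it yields per unit of parameter movement. The clipping and normalization below are assumptions; the paper's exact rule may differ.

```python
import numpy as np

def first_order_weights(theta_i, others, val_loss):
    """Weight client j's model by how much it improves client i's own
    validation loss per unit of parameter movement, clipping harmful
    (negative) contributions to zero. val_loss is client i's validation
    objective; the exact normalization is an assumption."""
    base = val_loss(theta_i)
    w = np.array([
        max(0.0, base - val_loss(t)) / (np.linalg.norm(t - theta_i) + 1e-12)
        for t in others
    ])
    s = w.sum()
    return w / s if s > 0 else w
```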
arXiv Detail & Related papers (2020-12-15T19:30:29Z)