Fusion of Global and Local Knowledge for Personalized Federated Learning
- URL: http://arxiv.org/abs/2302.11051v1
- Date: Tue, 21 Feb 2023 23:09:45 GMT
- Title: Fusion of Global and Local Knowledge for Personalized Federated Learning
- Authors: Tiansheng Huang, Li Shen, Yan Sun, Weiwei Lin, Dacheng Tao
- Abstract summary: In this paper, we explore personalized models with low-rank and sparse decomposition.
We propose a two-stage proximal-based algorithm named Federated learning with mixed Sparse and Low-Rank representation (FedSLR).
Under proper assumptions, we show that the GKR trained by FedSLR can at least sub-linearly converge to a stationary point of the regularized problem.
- Score: 75.20751492913892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning, as a variant of federated learning, trains
customized models for clients using their heterogeneously distributed data.
However, it remains unclear how to design personalized models that better
represent shared global knowledge together with personalized patterns. To
bridge this gap, in this paper we explore personalized models with low-rank and
sparse decomposition. Specifically, we employ proper regularization to extract
a low-rank global knowledge representation (GKR), so as to distill global
knowledge into a compact representation. Subsequently, we employ a sparse
component over the obtained GKR to fuse the personalized pattern into the
global knowledge. As a solution, we propose a two-stage proximal-based
algorithm named \textbf{Fed}erated learning with mixed \textbf{S}parse and
\textbf{L}ow-\textbf{R}ank representation (FedSLR) to efficiently search for
the mixed models. Theoretically, under proper assumptions, we show that the GKR
trained by FedSLR can at least sub-linearly converge to a stationary point of
the regularized problem, and that the sparse component being fused can converge
to its stationary point under proper settings. Extensive experiments also
demonstrate the superior empirical performance of FedSLR. Moreover, FedSLR
reduces the number of parameters and lowers the down-link communication
complexity, both of which are desirable for federated learning algorithms. Source
code is available in \url{https://github.com/huangtiansheng/fedslr}.
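As a rough illustration of the low-rank plus sparse decomposition described above (a minimal sketch, not the authors' released code; the regularization weights and single-matrix setup are illustrative assumptions), one layer's weights can be split with the two proximal steps:

```python
# Hedged sketch of FedSLR's low-rank + sparse idea using NumPy.
import numpy as np

def prox_nuclear(W, lam):
    """Proximal step for the nuclear norm: singular-value soft-thresholding.
    Encourages a low-rank global knowledge representation (GKR)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def prox_l1(W, lam):
    """Proximal step for the l1 norm: entrywise soft-thresholding.
    Encourages a sparse personalized component."""
    return np.sign(W) * np.maximum(np.abs(W) - lam, 0.0)

# Toy two-stage update for one weight matrix:
rng = np.random.default_rng(0)
W_global = rng.normal(size=(64, 32))       # stand-in for a layer's weights
L = prox_nuclear(W_global, lam=0.5)        # stage 1: low-rank GKR
S = prox_l1(W_global - L, lam=0.1)         # stage 2: sparse personal residual
W_personal = L + S                         # fused personalized model
print(np.linalg.matrix_rank(L), np.mean(S == 0.0))
```

Singular-value soft-thresholding is exactly the proximal operator of the nuclear norm, which is what makes the GKR compact and cheap to send down-link; the sparse residual then carries the per-client pattern.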
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257]
Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model.
Personalized federated learning (PFL) seeks to address client data heterogeneity by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions.
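A minimal sketch of the feature-distribution-adaptation idea (the interpolation weight alpha and the nearest-class-mean rule below are illustrative assumptions, not pFedFDA's actual formulation):

```python
import numpy as np

def adapt_classifier(global_means, local_feats, local_labels, alpha=0.5):
    # Shift each class mean toward the client's local feature statistics.
    means = global_means.copy()
    for c in np.unique(local_labels):
        local_mean = local_feats[local_labels == c].mean(axis=0)
        means[c] = (1 - alpha) * global_means[c] + alpha * local_mean
    return means

def predict(means, feats):
    # Nearest-class-mean rule: a Gaussian classifier with identity covariance.
    dists = ((feats[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```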
arXiv Detail & Related papers (2024-11-01T03:03:52Z) - Towards Realistic Long-tailed Semi-supervised Learning in an Open World [0.0]
We construct a more Realistic Open-world Long-tailed Semi-supervised Learning (ROLSSL) setting where there is no premise on the distribution relationships between known and novel categories.
Under the proposed ROLSSL setting, we propose a simple yet potentially effective solution called dual-stage logit adjustments.
Experiments on datasets such as CIFAR100 and ImageNet100 have demonstrated performance improvements of up to 50.1%.
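As a hedged illustration of logit adjustment for long-tailed classes (the post-hoc form and temperature tau are assumptions; the paper's dual-stage schedule is not reproduced here):

```python
import torch

def adjust_logits(logits, class_counts, tau=1.0):
    # Post-hoc logit adjustment: subtracting tau * log(prior) handicaps
    # head classes and boosts tail classes.
    prior = class_counts.float() / class_counts.sum()
    return logits - tau * torch.log(prior + 1e-12)
```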
arXiv Detail & Related papers (2024-05-23T12:53:50Z) - FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning [9.22574528776347]
FedSelect is a novel PFL algorithm inspired by the iterative subnetwork discovery procedure used for the Lottery Ticket Hypothesis.
We show that FedSelect outperforms recent state-of-the-art PFL algorithms under challenging client data heterogeneity settings.
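A loose sketch of selecting which parameters to personalize, in the spirit of the lottery-ticket-style subnetwork discovery mentioned above (the drift criterion and fraction are illustrative assumptions):

```python
import torch

def personal_mask(local_w, global_w, frac=0.1):
    # Keep the top `frac` of weights with the largest local drift as
    # personal (excluded from aggregation); the rest remain shared.
    drift = (local_w - global_w).abs().flatten()
    k = max(1, int(frac * drift.numel()))
    thresh = torch.topk(drift, k).values.min()
    return (local_w - global_w).abs() >= thresh
```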
arXiv Detail & Related papers (2024-04-03T05:36:21Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
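One possible reading of trajectory-style regularization, sketched under the assumption that local updates are anchored to an extrapolation of the recent global trajectory (an interpretation for illustration, not FedPTR's stated objective):

```python
import torch

def trajectory_reg(local_w, global_now, global_prev, mu=0.1):
    # Penalize local weights for straying from the global model
    # extrapolated one step along its recent update trajectory.
    target = global_now + (global_now - global_prev)
    return mu * torch.sum((local_w - target) ** 2)
```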
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Exploiting Label Skews in Federated Learning with Model Concatenation [39.38427550571378]
Federated Learning (FL) has emerged as a promising solution to perform deep learning on different data owners without exchanging raw data.
Among different non-IID types, label skews have been challenging and common in image classification and other tasks.
We propose FedConcat, a simple and effective approach that concatenates these local models as the base of the global model.
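A minimal sketch of the concatenation idea (module names and the shared feature dimension are placeholders):

```python
import torch
import torch.nn as nn

class ConcatGlobalModel(nn.Module):
    # Frozen client feature extractors fused by feature concatenation,
    # with a single classifier head trained on top.
    def __init__(self, client_encoders, feat_dim, num_classes):
        super().__init__()
        self.encoders = nn.ModuleList(client_encoders)
        self.head = nn.Linear(feat_dim * len(client_encoders), num_classes)

    def forward(self, x):
        feats = [enc(x) for enc in self.encoders]  # one view per client model
        return self.head(torch.cat(feats, dim=-1))
```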
arXiv Detail & Related papers (2023-12-11T10:44:52Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL becomes a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
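A hedged sketch of exchanging only tunable soft prompts while backbones stay private (the prompt length, shapes, and embedding-level interface are assumptions):

```python
import torch
import torch.nn as nn

class SoftPromptedModel(nn.Module):
    # Only `prompt` is trained and exchanged between participants;
    # the backbone stays frozen and private.
    def __init__(self, frozen_backbone, embed_dim, prompt_len=8):
        super().__init__()
        self.backbone = frozen_backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.prompt = nn.Parameter(torch.zeros(prompt_len, embed_dim))

    def forward(self, token_embeds):  # (batch, seq_len, embed_dim)
        prompts = self.prompt.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        return self.backbone(torch.cat([prompts, token_embeds], dim=1))
```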
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class prototype similarity distillation in a federated framework to align the local and global models.
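A minimal sketch of aligning local and global logits by distillation (the class-prototype similarity weighting of FedCSD is omitted here, a simplifying assumption):

```python
import torch.nn.functional as F

def distill_loss(local_logits, global_logits, T=2.0):
    # KL divergence pulling temperature-softened local predictions toward
    # the global model's predictions on the same batch.
    p_global = F.softmax(global_logits / T, dim=-1)
    log_p_local = F.log_softmax(local_logits / T, dim=-1)
    return F.kl_div(log_p_local, p_global, reduction="batchmean") * (T * T)
```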
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - FedMix: Approximation of Mixup under Mean Augmented Federated Learning [60.503258658382]
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup.
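A small sketch of Mixup with mean-augmented data (FedMix actually optimizes an approximation of this loss; the direct mixing below is a simplification, and variable names are illustrative):

```python
def fedmix_batch(x, y_onehot, mean_x, mean_y, lam=0.9):
    # Mix each local sample with a batch-averaged sample received from
    # other clients, Mixup-style: lam * local + (1 - lam) * mean.
    x_mixed = lam * x + (1 - lam) * mean_x
    y_mixed = lam * y_onehot + (1 - lam) * mean_y
    return x_mixed, y_mixed
```

Sharing only batch averages, rather than raw samples, is what keeps the scheme compatible with federated learning's privacy constraints.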
arXiv Detail & Related papers (2021-07-01T06:14:51Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
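A minimal sketch of the shared-representation scheme with per-client heads (class names and the alternating schedule in the closing comment are illustrative):

```python
import torch.nn as nn

class RepAndHead(nn.Module):
    # Shared low-dimensional representation (aggregated by the server)
    # plus a lightweight head that stays local to each client.
    def __init__(self, shared_body, feat_dim, num_classes):
        super().__init__()
        self.body = shared_body
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.head(self.body(x))

# Per round (conceptually): with the body frozen, take many cheap head
# steps on local data; then update the body once and send only it back.
```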
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning [23.726336635748783]
Federated learning aims to collaboratively train a strong global model by accessing users' locally trained models but not their own data.
A crucial step is therefore to aggregate local models into a global model, which has been shown challenging when users have non-i.i.d. data.
We propose a novel aggregation algorithm named FedBE, which takes a Bayesian inference perspective by sampling higher-quality global models.
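A hedged sketch of sampling global models from a distribution fitted over client weights (the diagonal Gaussian and sample count are assumptions; FedBE additionally distills the resulting ensemble into a single model):

```python
import torch

def sample_global_models(client_weights, n_samples=5):
    # client_weights: (num_clients, num_params) stacked flat weight vectors.
    # Fit a diagonal Gaussian and draw extra candidate global models whose
    # predictions can be ensembled.
    mean = client_weights.mean(dim=0)
    std = client_weights.std(dim=0) + 1e-8
    return [mean + std * torch.randn_like(mean) for _ in range(n_samples)]
```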
arXiv Detail & Related papers (2020-09-04T01:18:25Z)