Achieving Personalized Federated Learning with Sparse Local Models
- URL: http://arxiv.org/abs/2201.11380v1
- Date: Thu, 27 Jan 2022 08:43:11 GMT
- Title: Achieving Personalized Federated Learning with Sparse Local Models
- Authors: Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, and
Dacheng Tao
- Abstract summary: Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
- Score: 75.76854544460981
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is vulnerable to heterogeneously distributed data,
since a common global model in FL may not adapt to the heterogeneous data
distribution of each user. To counter this issue, personalized FL (PFL) was
proposed to produce dedicated local models for each individual user. However,
PFL is far from its maturity, because existing PFL solutions either demonstrate
unsatisfactory generalization towards different model architectures or cost
enormous extra computation and memory. In this work, we propose federated
learning with personalized sparse mask (FedSpa), a novel PFL scheme that
employs personalized sparse masks to customize sparse local models on the edge.
Instead of training an intact (or dense) PFL model, FedSpa only maintains a
fixed number of active parameters throughout training (aka sparse-to-sparse
training), which enables users' models to achieve personalization with cheap
communication, computation, and memory cost. We theoretically show that the
iterates obtained by FedSpa converge to the local minimizer of the formulated
SPFL problem at a rate of $\mathcal{O}(\frac{1}{\sqrt{T}})$. Comprehensive
experiments demonstrate that FedSpa significantly saves communication and
computation costs while simultaneously achieving higher model accuracy and
faster convergence than several state-of-the-art PFL methods.
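The sparse-to-sparse idea is easy to picture in code. Below is a minimal PyTorch sketch of keeping a fixed number of active parameters via a personalized binary mask; the mask here is chosen by weight magnitude purely for illustration, and `topk_mask` / `apply_personal_mask` are hypothetical helpers, not FedSpa's actual mask-update rule.

```python
import torch

def topk_mask(weight: torch.Tensor, density: float) -> torch.Tensor:
    """Binary mask keeping the `density` fraction of largest-magnitude entries.

    Illustrative only: FedSpa's actual personalized mask search differs.
    """
    k = max(1, int(density * weight.numel()))
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

def apply_personal_mask(state_dict: dict, masks: dict) -> dict:
    """Sparsify an aggregated dense model with one client's personal masks."""
    return {name: w * masks[name] if name in masks else w
            for name, w in state_dict.items()}

# A client that keeps only 20% of a layer's weights active:
w = torch.randn(256, 128)
mask = topk_mask(w, density=0.2)
sparse_w = w * mask
print(int(mask.sum()), "of", w.numel(), "weights active")
```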
Related papers
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
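As a rough illustration of what "tunable soft prompts" means mechanically, the sketch below (names and dimensions are my own, not from the paper) prepends a small trainable embedding block to token embeddings; in a prompt-exchange FL scheme, only these few vectors would be communicated rather than the backbone weights.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt vectors prepended to token embeddings (illustrative)."""

    def __init__(self, n_tokens: int = 8, dim: int = 512):
        super().__init__()
        # Small trainable tensor; this is all a client would need to share.
        self.prompt = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        batch = token_embeds.size(0)
        expanded = self.prompt.expand(batch, -1, -1)  # (batch, n_tokens, dim)
        return torch.cat([expanded, token_embeds], dim=1)

sp = SoftPrompt()
out = sp(torch.randn(2, 16, 512))  # -> (2, 24, 512)
```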
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - pFedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA
Tuning [35.59830784463706]
Federated learning (FL) is an emerging machine learning paradigm in which a central server coordinates multiple participants (clients) collaboratively to train on decentralized data.
We propose a novel and efficient model-heterogeneous personalized Federated learning framework based on LoRA tuning (pFedLoRA)
Experiments on two benchmark datasets demonstrate that pFedLoRA outperforms six state-of-the-art baselines.
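For readers unfamiliar with LoRA, the sketch below shows the basic mechanism pFedLoRA builds on: a frozen base layer plus a small trainable low-rank update. The wrapper class and hyperparameters are illustrative; the paper's actual integration of LoRA modules into heterogeneous client models is more involved.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (B @ A).

    Only lora_a / lora_b need to be trained and communicated, which is why
    LoRA-style modules are attractive in federated settings.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the base weight stays fixed
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(128, 64), rank=4)
out = layer(torch.randn(2, 128))  # -> (2, 64)
```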
arXiv Detail & Related papers (2023-10-20T05:24:28Z) - DFedADMM: Dual Constraints Controlled Model Inconsistency for
Decentralized Federated Learning [52.83811558753284]
Decentralized federated learning (DFL) discards the central server and establishes a decentralized communication network.
Existing DFL methods still suffer from two major challenges: local inconsistency and local overfitting.
arXiv Detail & Related papers (2023-08-16T11:22:36Z) - Towards More Suitable Personalization in Federated Learning via
Decentralized Partial Model Training [67.67045085186797]
Almost all existing works face large communication burdens and the risk of disruption if the central FL server fails.
The method personalizes the "right" components in deep models by alternately updating the shared and personal parameters.
To further promote the shared-parameter aggregation process, DFed is proposed, which integrates local Sharpness Minimization.
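The alternating shared/personal update can be sketched as follows; the parameter split and the function name are hypothetical, and the actual algorithms (including their sharpness-minimizing shared updates) are specified in the paper.

```python
import torch
import torch.nn as nn

def alternating_local_round(model: nn.Module, loader, loss_fn,
                            personal, shared, lr: float = 0.01):
    """One hypothetical local round: step personal params, then shared params.

    Only `shared` would be exchanged with neighbors in a decentralized scheme.
    """
    opt_p = torch.optim.SGD(personal, lr=lr)
    opt_s = torch.optim.SGD(shared, lr=lr)
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        opt_p.step()  # update personal part on fresh gradients
        model.zero_grad()
        loss_fn(model(x), y).backward()
        opt_s.step()  # then update shared part

# Example split: treat the last layer as personal, the rest as shared.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
personal = list(model[-1].parameters())
shared = [p for m in model[:-1] for p in m.parameters()]
data = [(torch.randn(8, 20), torch.randint(0, 2, (8,)))]
alternating_local_round(model, data, nn.CrossEntropyLoss(), personal, shared)
```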
arXiv Detail & Related papers (2023-05-24T13:52:18Z) - Personalized Federated Learning on Long-Tailed Data via Adversarial
Feature Augmentation [24.679535905451758]
PFL aims to learn personalized models for each client based on the knowledge across all clients in a privacy-preserving manner.
Existing PFL methods assume that the underlying global data across all clients are uniformly distributed without considering the long-tail distribution.
We propose Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem in PFL.
arXiv Detail & Related papers (2023-03-27T13:00:20Z) - Hierarchical Personalized Federated Learning Over Massive Mobile Edge
Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z) - Improving the Model Consistency of Decentralized Federated Learning [68.2795379609854]
Decentralized federated learning (DFL) discards the central server, and each client only communicates with its neighbors in a decentralized communication network.
Existing DFL suffers from inconsistency among local clients, which results in inferior performance compared with centralized FL (CFL).
We propose DFedSAM-MGS, where $1-\lambda$ is the spectral gap of the gossip matrix and $Q$ is the number of Multiple Gossip Steps (MGS).
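To unpack that notation: in gossip-based DFL, clients repeatedly average parameters with neighbors through a mixing matrix $W$; consensus error contracts roughly like $\lambda^Q$ where $\lambda$ is the second-largest eigenvalue magnitude of $W$, so a larger spectral gap $1-\lambda$ or more gossip steps $Q$ means tighter agreement. A toy sketch (the ring topology and matrix are my own illustration):

```python
import torch

def gossip(params: torch.Tensor, W: torch.Tensor, Q: int) -> torch.Tensor:
    """Mix per-client parameter vectors Q times with a doubly stochastic W."""
    for _ in range(Q):
        params = W @ params  # each row becomes a weighted neighbor average
    return params

# Ring of 4 clients, each averaging with its two neighbors.
W = torch.tensor([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
x = torch.randn(4, 3)  # (n_clients, dim)
print(x.std(dim=0))                   # initial disagreement across clients
print(gossip(x, W, Q=5).std(dim=0))   # shrinks toward consensus
```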
arXiv Detail & Related papers (2023-02-08T14:37:34Z) - Sparse Federated Learning with Hierarchical Personalized Models [24.763028713043468]
Federated learning (FL) can achieve privacy-safe and reliable collaborative training without collecting users' private data.
We propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP).
A continuously differentiable approximated L1-norm is also used as the sparse constraint to reduce the communication cost.
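One standard way to build such a continuously differentiable L1 surrogate is $\sum_i \sqrt{w_i^2 + \epsilon}$, which approaches $\|w\|_1$ as $\epsilon \to 0$; the summary does not state which smoothing sFedHP uses, so treat this as an assumed example:

```python
import torch

def smooth_l1_penalty(params, eps: float = 1e-8) -> torch.Tensor:
    """Differentiable L1 surrogate: sum of sqrt(w^2 + eps) over all parameters.

    An assumed smoothing for illustration; sFedHP's exact choice may differ.
    """
    return sum((p.pow(2) + eps).sqrt().sum() for p in params)

w = torch.randn(10, requires_grad=True)
penalty = smooth_l1_penalty([w])
penalty.backward()  # gradient is defined everywhere, unlike |w| at zero
print(penalty.item(), w.abs().sum().item())  # close for small eps
```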
arXiv Detail & Related papers (2022-03-25T09:06:42Z) - Personalized Federated Learning with Clustered Generalization [16.178571176116073]
We study the recently emerging personalized federated learning (PFL), which aims to deal with the challenging problem of Non-I.I.D. data in the FL setting.
The key difference between PFL and conventional FL methods lies in the training target.
We propose a novel concept called clustered generalization to handle the challenge of statistical heterogeneity in FL.
arXiv Detail & Related papers (2021-06-24T14:17:00Z) - PFL-MoE: Personalized Federated Learning Based on Mixture of Experts [1.8757823231879849]
Federated learning (FL) avoids data sharing among training nodes so as to protect data privacy.
PFL-MoE is a generic approach and can be instantiated by integrating existing PFL algorithms.
We demonstrate the effectiveness of PFL-MoE by training the LeNet-5 and VGG-16 models on the Fashion-MNIST dataset.
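Mechanically, a mixture-of-experts personalization scheme gates between a shared global expert and a per-client personalized expert. The sketch below is a minimal illustration with made-up module names and linear experts; PFL-MoE's actual experts are CNNs such as LeNet-5 and VGG-16.

```python
import torch
import torch.nn as nn

class GatedMixture(nn.Module):
    """Per-example gate blending a shared global expert with a personal one.

    Illustrative stand-in for the PFL-MoE idea; names are hypothetical.
    """

    def __init__(self, global_model: nn.Module, personal_model: nn.Module,
                 in_dim: int):
        super().__init__()
        self.global_model = global_model      # aggregated across clients
        self.personal_model = personal_model  # kept local to one client
        self.gate = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)  # (batch, 1): weight given to the personal expert
        return g * self.personal_model(x) + (1 - g) * self.global_model(x)

moe = GatedMixture(nn.Linear(784, 10), nn.Linear(784, 10), in_dim=784)
logits = moe(torch.randn(2, 784))  # -> (2, 10)
```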
arXiv Detail & Related papers (2020-12-31T12:51:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.