GPFL: Simultaneously Learning Global and Personalized Feature
Information for Personalized Federated Learning
- URL: http://arxiv.org/abs/2308.10279v3
- Date: Sat, 14 Oct 2023 03:20:04 GMT
- Title: GPFL: Simultaneously Learning Global and Personalized Feature
Information for Personalized Federated Learning
- Authors: Jianqing Zhang, Yang Hua, Hao Wang, Tao Song, Zhengui Xue, Ruhui Ma,
Jian Cao, Haibing Guan
- Abstract summary: We propose a new pFL method, named GPFL, to simultaneously learn global and personalized feature information on each client.
GPFL mitigates overfitting and outperforms the baselines by up to 8.99% in accuracy.
- Score: 32.884949308979465
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) is popular for its privacy-preserving and
collaborative learning capabilities. Recently, personalized FL (pFL) has
received attention for its ability to address statistical heterogeneity and
achieve personalization in FL. However, from the perspective of feature
extraction, most existing pFL methods only focus on extracting global or
personalized feature information during local training, which fails to meet the
collaborative learning and personalization goals of pFL. To address this, we
propose a new pFL method, named GPFL, to simultaneously learn global and
personalized feature information on each client. We conduct extensive
experiments on six datasets in three statistically heterogeneous settings and
show the superiority of GPFL over ten state-of-the-art methods regarding
effectiveness, scalability, fairness, stability, and privacy. Besides, GPFL
mitigates overfitting and outperforms the baselines by up to 8.99% in accuracy.
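The abstract describes learning global and personalized feature information simultaneously on each client. The paper's actual architecture is not reproduced here; the following is a minimal hedged sketch of the general shared-plus-personalized pattern common in pFL, where a shared backbone and global head are averaged across clients each round while a personalized head stays local. All names (`backbone`, `global_head`, `personal_head`) are illustrative, not the authors' code.

```python
import numpy as np

# Illustrative sketch only (not GPFL's implementation): each client keeps a
# personalized head locally, while a shared backbone and global head are
# averaged across clients, so both global and personalized feature
# information are learned in every round.

rng = np.random.default_rng(0)

def fedavg(params_list):
    """Element-wise average of a list of parameter arrays (FedAvg)."""
    return sum(params_list) / len(params_list)

n_clients, dim, n_feat = 3, 8, 4
clients = [
    {
        "backbone": rng.normal(size=(dim, n_feat)),    # shared, averaged
        "global_head": rng.normal(size=(n_feat, 2)),   # shared, averaged
        "personal_head": rng.normal(size=(n_feat, 2)), # stays local
    }
    for _ in range(n_clients)
]

# One communication round: the server averages only the shared parts.
avg_backbone = fedavg([c["backbone"] for c in clients])
avg_global_head = fedavg([c["global_head"] for c in clients])
for c in clients:
    c["backbone"] = avg_backbone.copy()
    c["global_head"] = avg_global_head.copy()
    # personal_head is untouched, so personalization is preserved

# After the round, backbones agree but personalized heads still differ.
print(np.allclose(clients[0]["backbone"], clients[1]["backbone"]))            # True
print(np.allclose(clients[0]["personal_head"], clients[1]["personal_head"]))  # False
```

The design choice illustrated is the split between federated (averaged) and private (local) parameters; GPFL's specific mechanism for combining the two feature streams is described in the paper itself.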
Related papers
- Personalized Federated Learning with Adaptive Feature Aggregation and Knowledge Transfer [0.0]
Federated Learning (FL) is popular as a privacy-preserving machine learning paradigm for generating a single model on decentralized data.
We propose a new method, Personalized Federated Learning with Adaptive Feature Aggregation and Knowledge Transfer (FedAFK).
We conduct extensive experiments on three datasets in two widely-used heterogeneous settings and show the superior performance of our proposed method over thirteen state-of-the-art baselines.
arXiv Detail & Related papers (2024-10-19T11:32:39Z) - A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
With the advent of Foundation Models (FMs), however, the situation is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - Unlocking the Potential of Prompt-Tuning in Bridging Generalized and
Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
arXiv Detail & Related papers (2023-10-27T17:22:09Z) - PPFL: A Personalized Federated Learning Framework for Heterogeneous
Population [30.51508591732483]
We develop a flexible and interpretable personalized framework within the paradigm of Federated Learning, called PPFL.
By leveraging canonical models, it captures the heterogeneity as clients' preferences for these canonical models, expressed through membership vectors.
We conduct experiments on both pathological characteristics and practical datasets, and the results validate the effectiveness of PPFL.
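The PPFL summary above mentions canonical models and clients' membership preferences. One plausible reading, sketched below under that assumption (this is not the paper's code), is that a client's personalized parameters are a membership-weighted combination of a small set of canonical models.

```python
import numpy as np

# Hypothetical illustration of a membership-vector combination: K canonical
# models, and a client whose personalized model is their convex combination.
canonical = [np.full(4, float(k)) for k in range(3)]  # K = 3 canonical models
membership = np.array([0.2, 0.5, 0.3])                # client preference vector, sums to 1
personalized = sum(m * c for m, c in zip(membership, canonical))
print(personalized)  # [1.1 1.1 1.1]
```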
arXiv Detail & Related papers (2023-10-22T16:06:27Z) - PFL-GAN: When Client Heterogeneity Meets Generative Models in
Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation scheme.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
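The PFL-GAN summary describes learning client similarity and then aggregating with weights. A generic hedged sketch of that idea (not the paper's implementation) is to use cosine similarity between flattened client updates as per-client aggregation weights:

```python
import numpy as np

# Illustrative only: similarity-weighted personalized aggregation. Each
# client i receives an average of all updates, weighted by how similar each
# update is to its own (negative similarities clipped to zero).

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def personalized_aggregate(updates):
    """For each client i, average all updates weighted by similarity to i."""
    n = len(updates)
    out = []
    for i in range(n):
        w = np.array([max(cosine_sim(updates[i], updates[j]), 0.0)
                      for j in range(n)])
        w = w / w.sum()  # normalize weights to sum to 1
        out.append(sum(wj * uj for wj, uj in zip(w, updates)))
    return out

rng = np.random.default_rng(1)
base = rng.normal(size=16)
# Clients 0 and 1 are similar; client 2 points the opposite way.
updates = [base + 0.01 * rng.normal(size=16),
           base + 0.01 * rng.normal(size=16),
           -base + 0.01 * rng.normal(size=16)]
agg = personalized_aggregate(updates)
# Client 0's aggregate stays close to its own update, far from client 2's.
print(np.linalg.norm(agg[0] - updates[0]) < np.linalg.norm(agg[0] - updates[2]))  # True
```

The clipping of negative similarities is one of several reasonable choices here; the paper's actual similarity measure and aggregation rule may differ.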
arXiv Detail & Related papers (2023-08-23T22:38:35Z) - Personalized Federated Learning on Long-Tailed Data via Adversarial
Feature Augmentation [24.679535905451758]
PFL aims to learn personalized models for each client based on the knowledge across all clients in a privacy-preserving manner.
Existing PFL methods assume that the underlying global data across all clients are uniformly distributed without considering the long-tail distribution.
We propose Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem in PFL.
arXiv Detail & Related papers (2023-03-27T13:00:20Z) - Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
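The core idea named in the FedSpa summary, a personalized sparse mask, can be sketched generically as follows (FedSpa's actual scheme is more involved; this is an illustration, not its code): each client keeps a binary mask and uses only the unmasked subset of the shared dense weights.

```python
import numpy as np

# Illustrative sketch of a personalized sparse mask: keep only the
# largest-magnitude weights of a shared dense parameter tensor.

rng = np.random.default_rng(2)
dense = rng.normal(size=(6, 6))  # shared dense parameters

def top_k_mask(w, sparsity=0.5):
    """Binary mask keeping the largest-magnitude (1 - sparsity) fraction."""
    k = int(w.size * (1 - sparsity))
    thresh = np.sort(np.abs(w).ravel())[::-1][k - 1]  # k-th largest magnitude
    return (np.abs(w) >= thresh).astype(w.dtype)

mask = top_k_mask(dense, sparsity=0.5)  # client-specific in practice
sparse_local = dense * mask             # the client's personalized sparse model

print(int(mask.sum()))  # 18 of 36 weights kept
```

In a pFL setting each client would derive its own mask from its local data, which is what makes the resulting sparse models personalized.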
arXiv Detail & Related papers (2022-01-27T08:43:11Z) - FedNLP: A Research Platform for Federated Learning in Natural Language
Processing [55.01246123092445]
We present FedNLP, a research platform for federated learning in NLP.
FedNLP supports various popular task formulations in NLP such as text classification, sequence tagging, question answering, seq2seq generation, and language modeling.
Preliminary experiments with FedNLP reveal that there exists a large performance gap between learning on decentralized and centralized datasets.
arXiv Detail & Related papers (2021-04-18T11:04:49Z) - Towards Personalized Federated Learning [20.586573091790665]
We present a unique taxonomy dividing PFL techniques into data-based and model-based approaches.
We highlight their key ideas, and envision promising future trajectories of research towards new PFL architectural design.
arXiv Detail & Related papers (2021-03-01T02:45:19Z) - PFL-MoE: Personalized Federated Learning Based on Mixture of Experts [1.8757823231879849]
Federated learning (FL) avoids data sharing among training nodes so as to protect data privacy.
PFL-MoE is a generic approach and can be instantiated by integrating existing PFL algorithms.
We demonstrate the effectiveness of PFL-MoE by training the LeNet-5 and VGG-16 models on the Fashion-MNIST dataset.
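The PFL-MoE summary names a mixture-of-experts construction. A minimal hedged sketch of that pattern (not the paper's implementation) is a per-client gate that mixes the outputs of a shared global expert and a personalized local expert:

```python
import numpy as np

# Illustrative MoE-style combination for pFL: the gate produces two mixing
# weights that blend a global expert's output with a local expert's output.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_predict(x, global_w, local_w, gate_w):
    """Gate yields two weights summing to 1; output is the expert mix."""
    g = softmax(gate_w @ x)  # shape (2,)
    return g[0] * (global_w @ x) + g[1] * (local_w @ x)

rng = np.random.default_rng(3)
x = rng.normal(size=5)
global_w = rng.normal(size=(2, 5))  # shared expert (federated)
local_w = rng.normal(size=(2, 5))   # personalized expert (kept local)
gate_w = rng.normal(size=(2, 5))    # per-client gating network

y = moe_predict(x, global_w, local_w, gate_w)
print(y.shape)  # (2,)
```

Because the gate's weights form a convex combination, each output component lies between the two experts' outputs, which is what lets existing PFL algorithms plug in as the local expert.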
arXiv Detail & Related papers (2020-12-31T12:51:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.