PRIOR: Personalized Prior for Reactivating the Information Overlooked in
Federated Learning
- URL: http://arxiv.org/abs/2310.09183v2
- Date: Fri, 10 Nov 2023 09:53:32 GMT
- Title: PRIOR: Personalized Prior for Reactivating the Information Overlooked in
Federated Learning
- Authors: Mingjia Shi, Yuhao Zhou, Kai Wang, Huaizheng Zhang, Shudong Huang,
Qing Ye, Jiancheng Lv
- Abstract summary: We propose a novel scheme to inject personalized prior knowledge into a global model in each client.
At the heart of our proposed approach is a framework, PFL with Bregman Divergence (pFedBreD).
Our method achieves state-of-the-art performance on 5 datasets and outperforms other methods by up to 3.5% across 8 benchmarks.
- Score: 16.344719695572586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classical federated learning (FL) enables training machine learning
models without sharing data for privacy preservation, but heterogeneous data
characteristics degrade the performance of the localized model. Personalized FL
(PFL) addresses this by synthesizing personalized models from a global model
via training on local data. However, such a global model may overlook the
specific information of the sampled clients. In this paper, we propose a novel
scheme to inject personalized prior knowledge into the global model in each
client, which attempts to mitigate the incomplete-information problem
introduced in PFL. At the heart of our proposed approach is a framework, PFL
with Bregman Divergence (pFedBreD), which decouples the personalized prior from
the local objective function regularized by Bregman divergence for greater
adaptability in personalized scenarios. We also propose relaxed mirror descent
(RMD) to extract the prior explicitly, providing optional strategies.
Additionally, pFedBreD is backed by a convergence analysis. Comprehensive
experiments demonstrate that our method achieves state-of-the-art performance
on 5 datasets and outperforms other methods by up to 3.5% across 8 benchmarks.
Extensive analyses verify the robustness and necessity of the proposed designs.
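For concreteness, below is a minimal PyTorch sketch of such a Bregman-regularized local objective. It is an illustration under stated assumptions, not the authors' implementation: the mirror map is taken to be phi(x) = 0.5||x||^2, under which the Bregman divergence reduces to a squared Euclidean proximal term anchored at the personalized prior.

```python
# Sketch of a Bregman-regularized local objective (an assumption-laden
# illustration, not the authors' released code). With the mirror map
# phi(x) = 0.5 * ||x||^2, the Bregman divergence D_phi(theta, prior)
# reduces to the squared Euclidean distance 0.5 * ||theta - prior||^2.
import torch
import torch.nn.functional as F

def bregman_l2(params, prior_params):
    """D_phi(theta, prior) for phi = 0.5 * ||.||^2 (squared Euclidean case)."""
    return 0.5 * sum((p - q).pow(2).sum() for p, q in zip(params, prior_params))

def local_loss(model, prior_params, x, y, lam=0.1):
    """Data-fitting loss plus the decoupled personalized-prior regularizer."""
    data_loss = F.cross_entropy(model(x), y)
    return data_loss + lam * bregman_l2(model.parameters(), prior_params)
```

Other choices of mirror map yield other divergences, which is what gives the framework its adaptability across personalized scenarios.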
Related papers
- Personalized Federated Learning with Adaptive Feature Aggregation and Knowledge Transfer [0.0]
Federated Learning (FL) is popular as a privacy-preserving machine learning paradigm for generating a single model on decentralized data.
We propose a new method, Personalized Federated Learning with Adaptive Feature Aggregation and Knowledge Transfer (FedAFK).
We conduct extensive experiments on three datasets in two widely-used heterogeneous settings and show the superior performance of our proposed method over thirteen state-of-the-art baselines.
arXiv Detail & Related papers (2024-10-19T11:32:39Z)
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
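The summary above does not spell out FedMAP's formulation; the sketch below only illustrates the generic MAP-style inner objective that bi-level Bayesian PFL methods typically minimize per client. A Gaussian prior centered at the global parameters is an assumption, not FedMAP's exact design.

```python
# Generic MAP-style inner objective for a Bayesian PFL sketch (assumed form):
# negative log-likelihood plus a Gaussian negative log-prior centered at the
# global parameters; the outer level would adapt the prior itself.
import torch
import torch.nn.functional as F

def map_inner_loss(model, global_params, x, y, precision=1.0):
    nll = F.cross_entropy(model(x), y)            # -log p(D | theta)
    neg_log_prior = 0.5 * precision * sum(        # -log p(theta), Gaussian
        (p - g).pow(2).sum() for p, g in zip(model.parameters(), global_params))
    return nll + neg_log_prior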
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL becomes a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
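The prompt architecture is not specified in this summary; the following is a generic, hypothetical soft-prompt module of the kind that could be exchanged among participants instead of full model weights: a small matrix of learnable embeddings prepended to the input token embeddings.

```python
# Generic soft-prompt module (illustrative assumption, not the paper's exact
# design): a small matrix of learnable embeddings prepended to each input
# sequence. Only these few parameters would need to be exchanged.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_len: int = 20, embed_dim: int = 768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim)
        batch = token_embeds.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompts, token_embeds], dim=1)
```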
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
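As a rough sketch of the "learn similarity, then aggregate with weights" idea (cosine similarity between flattened client update vectors is an assumed choice here, not necessarily PFL-GAN's measure):

```python
# Sketch of similarity-weighted aggregation (assumed details: cosine
# similarity between flattened client updates as the similarity measure).
import torch

def weighted_aggregate(client_vecs: list[torch.Tensor], i: int) -> torch.Tensor:
    """Aggregate updates for client i, weighting peers by similarity to it."""
    sims = torch.stack([
        torch.cosine_similarity(client_vecs[i], v, dim=0) for v in client_vecs])
    weights = torch.softmax(sims, dim=0)          # normalize to mixing weights
    return sum(w * v for w, v in zip(weights, client_vecs))
```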
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- FedPerfix: Towards Partial Model Personalization of Vision Transformers in Federated Learning [9.950367271170592]
We investigate where and how to partially personalize a Vision Transformer (ViT) model.
Based on the insights that the self-attention layer and the classification head are the most sensitive parts of a ViT, we propose a novel approach called FedPerfix.
We evaluate the proposed approach on CIFAR-100, OrganAMNIST, and Office-Home datasets and demonstrate its effectiveness compared to several advanced PFL methods.
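One minimal way to realize "personalize the self-attention layers and the classification head, share the rest" is a name-based parameter split, as sketched below; the substring filters are illustrative assumptions, not FedPerfix's actual mechanism.

```python
# Illustrative name-based split of ViT parameters into shared vs. personalized
# (the substring filters are assumptions; the paper's mechanism may differ).
def split_parameters(model):
    personal, shared = {}, {}
    for name, param in model.named_parameters():
        if "attn" in name or "head" in name:  # self-attention + classifier head
            personal[name] = param            # kept local on each client
        else:
            shared[name] = param              # aggregated by the server
    return shared, personal
```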
arXiv Detail & Related papers (2023-08-17T19:22:30Z)
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems face large communication burdens and the risk of disruption if the central FL server fails.
It personalizes the "right" components in the deep models by alternately updating the shared and personal parameters.
To further promote the shared parameter aggregation process, we propose DFed, integrating local sharpness-aware minimization (SAM).
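Assuming the sharpness-aware component follows standard SAM, a local batch update could look like the two-step sketch below (a generic illustration, not the paper's exact variant; base_opt is assumed to be any optimizer built over model.parameters()).

```python
# Generic two-step SAM update for one local batch: ascend to a nearby
# worst-case point, take the gradient there, then descend from the original
# weights (standard SAM; not the paper's exact variant).
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    loss_fn(model(x), y).backward()                  # grads at current weights
    grad_norm = torch.sqrt(sum(p.grad.pow(2).sum()
                               for p in model.parameters() if p.grad is not None))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)   # ascend toward worst case
            p.add_(e)
            eps.append(e)
    model.zero_grad()
    loss_fn(model(x), y).backward()                  # grads at perturbed weights
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)                            # restore original weights
    base_opt.step()                                  # descend with SAM gradient
    model.zero_grad()
```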
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
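Fitting a per-client mixture and using its log-likelihood for novel-sample detection can be sketched with scikit-learn; the feature dimensionality, component count, and threshold below are placeholder assumptions.

```python
# Sketch: fit a Gaussian mixture to client features and flag low-likelihood
# inputs as novel (scikit-learn GMM; all settings here are placeholders).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_features = rng.normal(size=(500, 16))  # stand-in for real features

gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
gmm.fit(client_features)

scores = gmm.score_samples(client_features)   # per-sample log-likelihood
threshold = np.percentile(scores, 5)          # bottom 5% treated as novel
queries = rng.normal(3.0, 1.0, size=(10, 16))
is_novel = gmm.score_samples(queries) < threshold
```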
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
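A common way to realize a visual prompt is learnable pixel padding around the input image; the sketch below assumes that form and CIFAR-sized 32x32 inputs (pFedPT's exact prompt design may differ).

```python
# Learnable pixel-padding visual prompt (an assumed, common design; the
# paper's exact prompt architecture may differ). Prompt pixels train locally.
import torch
import torch.nn as nn

class PadPrompt(nn.Module):
    def __init__(self, pad: int = 4, channels: int = 3, size: int = 32):
        super().__init__()
        # One learnable value per padded pixel location, per channel.
        self.top = nn.Parameter(torch.zeros(1, channels, pad, size + 2 * pad))
        self.bottom = nn.Parameter(torch.zeros(1, channels, pad, size + 2 * pad))
        self.left = nn.Parameter(torch.zeros(1, channels, size, pad))
        self.right = nn.Parameter(torch.zeros(1, channels, size, pad))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, 32, 32), e.g. CIFAR images
        b = x.size(0)
        mid = torch.cat([self.left.expand(b, -1, -1, -1), x,
                         self.right.expand(b, -1, -1, -1)], dim=3)
        return torch.cat([self.top.expand(b, -1, -1, -1), mid,
                          self.bottom.expand(b, -1, -1, -1)], dim=2)
```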
arXiv Detail & Related papers (2023-03-15T15:02:15Z)
- Personalized Federated Learning with Hidden Information on Personalized Prior [18.8426865970643]
We propose pFedBreD, a framework for solving the problem we model using Bregman divergence regularization.
Our experiments show that our proposal significantly outperforms other PFL algorithms on multiple public benchmarks.
arXiv Detail & Related papers (2022-11-19T12:45:19Z)
- New Metrics to Evaluate the Performance and Fairness of Personalized Federated Learning [5.500172106704342]
In Federated Learning (FL), the clients learn a single global model (e.g., via FedAvg) through a central aggregator.
In this setting, the non-IID distribution of the data across clients restricts the global FL model from delivering good performance on the local data of each client.
Personalized FL aims to address this problem by finding a personalized model for each client.
arXiv Detail & Related papers (2021-07-28T05:30:17Z)