Bold but Cautious: Unlocking the Potential of Personalized Federated
Learning through Cautiously Aggressive Collaboration
- URL: http://arxiv.org/abs/2309.11103v1
- Date: Wed, 20 Sep 2023 07:17:28 GMT
- Title: Bold but Cautious: Unlocking the Potential of Personalized Federated
Learning through Cautiously Aggressive Collaboration
- Authors: Xinghao Wu, Xuefeng Liu, Jianwei Niu, Guogang Zhu, Shaojie Tang
- Abstract summary: A key question in personalized federated learning (PFL) is to decide which parameters of a client should be localized or shared with others.
This paper introduces a novel guideline for client collaboration in PFL.
We propose a new PFL method named FedCAC, which employs a quantitative metric to evaluate each parameter's sensitivity to non-IID data.
- Score: 13.857939196296742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning (PFL) reduces the impact of non-independent
and identically distributed (non-IID) data among clients by allowing each
client to train a personalized model when collaborating with others. A key
question in PFL is to decide which parameters of a client should be localized
or shared with others. In current mainstream approaches, all layers that are
sensitive to non-IID data (such as classifier layers) are generally
personalized. The reasoning behind this approach is understandable, as
localizing parameters that are easily influenced by non-IID data can prevent
the potential negative effect of collaboration. However, we believe that this
approach is too conservative for collaboration. For example, for a certain
client, even if its parameters are easily influenced by non-IID data, it can
still benefit by sharing these parameters with clients having similar data
distribution. This observation emphasizes the importance of considering not
only the sensitivity to non-IID data but also the similarity of data
distribution when determining which parameters should be localized in PFL. This
paper introduces a novel guideline for client collaboration in PFL. Unlike
existing approaches that prohibit all collaboration of sensitive parameters,
our guideline allows clients to share more parameters with others, leading to
improved model performance. Additionally, we propose a new PFL method named
FedCAC, which employs a quantitative metric to evaluate each parameter's
sensitivity to non-IID data and carefully selects collaborators based on this
evaluation. Experimental results demonstrate that FedCAC enables clients to
share more parameters with others, resulting in superior performance compared
to state-of-the-art methods, particularly in scenarios where clients have
diverse distributions.
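To make the two ingredients the abstract describes more concrete, here is a minimal sketch of a per-parameter sensitivity score and similarity-based collaborator selection. The normalized-deviation score, the cosine-similarity measure, and the threshold `tau` are illustrative assumptions, not FedCAC's exact formulation.

```python
import numpy as np

def parameter_sensitivity(local_params, global_params, eps=1e-12):
    """Per-parameter sensitivity to non-IID data, sketched as the
    normalized deviation of a client's locally trained parameters
    from the global model. (Illustrative proxy, not the paper's metric.)"""
    return np.abs(local_params - global_params) / (np.abs(global_params) + eps)

def select_collaborators(client_id, all_params, tau=0.5):
    """Pick clients whose models are close to this client's model, so even
    non-IID-sensitive parameters can still be shared with them."""
    me = all_params[client_id]
    sims = {}
    for cid, p in all_params.items():
        if cid == client_id:
            continue
        # cosine similarity between flattened parameter vectors
        sims[cid] = float(p @ me / (np.linalg.norm(p) * np.linalg.norm(me) + 1e-12))
    return [cid for cid, s in sims.items() if s >= tau]

# toy usage: 4 clients with 8-dimensional "models"
rng = np.random.default_rng(0)
params = {i: rng.normal(size=8) for i in range(4)}
global_avg = np.mean(list(params.values()), axis=0)
print(parameter_sensitivity(params[0], global_avg))
print(select_collaborators(0, params, tau=0.0))
```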
Related papers
- Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients [8.773068878015856]
Federated learning (FL) is an appealing paradigm that allows a group of machines (a.k.a. clients) to learn collectively while keeping their data local.
We consider an FL setting where some clients can be adversarial, and we derive conditions under which full collaboration fails.
arXiv Detail & Related papers (2024-09-30T14:31:19Z)
- The Diversity Bonus: Learning from Dissimilar Distributed Clients in Personalized Federated Learning [20.3260485904085]
We propose DiversiFed, which allows each client to learn from clients with diversified data distributions.
We show that DiversiFed can benefit from dissimilar clients and thus outperform state-of-the-art methods.
arXiv Detail & Related papers (2024-07-22T08:24:45Z)
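One plausible reading of the DiversiFed idea is a pull-push regularizer in model space: attract a client's personalized model toward similar clients and repel it from dissimilar ones. The loss form, the `sims` scores, and the margin below are assumptions for illustration; the paper's exact mechanism may differ.

```python
import torch

def diversity_regularizer(own, others, sims, margin=1.0):
    """Hypothetical pull-push penalty in model space. `sims[i]` in [-1, 1]
    scores distribution similarity to the i-th other client."""
    loss = torch.zeros(())
    for other, s in zip(others, sims):
        d = torch.norm(own - other)
        if s > 0:          # similar distribution: attract
            loss = loss + s * d**2
        else:              # dissimilar: repel up to a margin
            loss = loss + (-s) * torch.clamp(margin - d, min=0.0)**2
    return loss

# toy usage with flattened parameter vectors
own = torch.randn(10, requires_grad=True)
others = [torch.randn(10) for _ in range(3)]
print(diversity_regularizer(own, others, sims=[0.8, -0.5, 0.1]))
```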
- Decoupling General and Personalized Knowledge in Federated Learning via Additive and Low-Rank Decomposition [26.218506124446826]
A key strategy of personalized federated learning is to decouple general knowledge (shared among clients) from client-specific knowledge.
We introduce FedDecomp, a simple but effective PFL paradigm that employs additive parameter decomposition to address this issue.
Experimental results across multiple datasets and varying degrees of data heterogeneity demonstrate that FedDecomp outperforms state-of-the-art methods by up to 4.9%.
arXiv Detail & Related papers (2024-06-28T14:01:22Z)
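The additive decomposition FedDecomp describes can be sketched as a layer whose weight is the sum of a shared full-rank part (aggregated by the server) and a client-private low-rank residual (kept local). The `rank` hyperparameter and the initialization below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdditiveLowRankLinear(nn.Module):
    """Sketch of an additively decomposed layer: shared full-rank weight
    plus a client-private low-rank residual A @ B."""
    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out)             # uploaded & averaged
        self.A = nn.Parameter(torch.zeros(d_out, rank))  # stays on-device
        self.B = nn.Parameter(torch.randn(rank, d_in) * 0.01)

    def forward(self, x):
        personal = x @ (self.A @ self.B).t()  # low-rank personalized path
        return self.shared(x) + personal

layer = AdditiveLowRankLinear(16, 8, rank=2)
print(layer(torch.randn(4, 16)).shape)  # torch.Size([4, 8])
```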
- Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
arXiv Detail & Related papers (2023-12-23T03:31:46Z)
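An attention-style selection step can be sketched as a softmax over similarities between client updates, so that clients with similar data distributions receive larger collaboration weights. The cosine scoring and temperature are assumptions; FedACS's exact attention formulation may differ.

```python
import numpy as np

def attention_weights(updates, query_id, temperature=1.0):
    """Softmax weights over clients, scored by how closely each client's
    update direction matches the query client's update."""
    q = updates[query_id]
    scores = np.array([
        u @ q / (np.linalg.norm(u) * np.linalg.norm(q) + 1e-12)
        for u in updates
    ])
    exp = np.exp(scores / temperature)
    return exp / exp.sum()

def personalized_aggregate(updates, query_id):
    """Blend all updates using the attention weights of one client."""
    w = attention_weights(updates, query_id)
    return sum(wi * ui for wi, ui in zip(w, updates))

rng = np.random.default_rng(1)
updates = [rng.normal(size=6) for _ in range(5)]
print(attention_weights(updates, query_id=0))
print(personalized_aggregate(updates, query_id=0))
```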
- DCFL: Non-IID awareness Data Condensation aided Federated Learning [0.8158530638728501]
Federated learning is a decentralized learning paradigm wherein a central server trains a global model iteratively by utilizing clients who possess a certain amount of private datasets.
The challenge lies in the fact that client-side private data may not be independently and identically distributed.
We propose DCFL, which divides clients into groups by using the Centered Kernel Alignment (CKA) method, then applies dataset condensation methods with non-IID awareness on the client side.
arXiv Detail & Related papers (2023-12-21T13:04:24Z)
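DCFL's grouping step rests on Centered Kernel Alignment, which can be computed directly from client feature matrices. The linear-CKA formula below is standard; the greedy thresholded grouping (`threshold`) is an assumed heuristic, not necessarily the paper's procedure.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices (n_samples x n_features),
    used to score representational similarity between clients."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / (den + 1e-12)

def group_clients(features, threshold=0.8):
    """Greedy grouping sketch: join the first group whose representative
    matches above `threshold`, else start a new group."""
    groups = []
    for cid, f in enumerate(features):
        for g in groups:
            if linear_cka(features[g[0]], f) >= threshold:
                g.append(cid)
                break
        else:
            groups.append([cid])
    return groups

rng = np.random.default_rng(2)
feats = [rng.normal(size=(32, 8)) for _ in range(4)]
print(group_clients(feats, threshold=0.1))
```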
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation strategy.
Empirical results from rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
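The weighted collaborative aggregation step can be sketched for one client as a similarity-weighted blend of the other clients' generator parameters. The `sim_row` scores and the simple normalized blending rule are illustrative assumptions.

```python
import numpy as np

def weighted_collaborative_aggregation(generators, sim_row):
    """Blend clients' (flattened) generator parameters with weights
    proportional to learned client similarity. (Illustrative sketch.)"""
    w = np.asarray(sim_row, dtype=float)
    w = w / w.sum()  # normalize similarities into aggregation weights
    return sum(wi * g for wi, g in zip(w, generators))

# toy usage: 3 clients' flattened generator weights
rng = np.random.default_rng(3)
gens = [rng.normal(size=12) for _ in range(3)]
print(weighted_collaborative_aggregation(gens, sim_row=[0.5, 0.3, 0.2]))
```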
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
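The GMM ingredient can be illustrated standalone with scikit-learn: fit a mixture to a client's inputs and use per-sample log-likelihood for uncertainty-aware novel-sample detection. This shows only the density-modeling idea, not FedGMM's federated EM procedure; the threshold choice is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
client_data = rng.normal(loc=0.0, scale=1.0, size=(200, 5))

# Fit a Gaussian mixture to one client's input distribution.
gmm = GaussianMixture(n_components=3, random_state=0).fit(client_data)

new_samples = np.vstack([rng.normal(size=(3, 5)),            # in-distribution
                         rng.normal(loc=8.0, size=(3, 5))])  # novel
scores = gmm.score_samples(new_samples)  # per-sample log-likelihood

# Flag samples less likely than the 1st percentile of training data.
threshold = np.quantile(gmm.score_samples(client_data), 0.01)
print(scores < threshold)  # True -> flagged as novel/uncertain
```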
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties solely from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
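The paper's core observation, that simple linear models suffice, can be illustrated on purely synthetic data: a logistic regression trained on simulated aggregated updates recovers a planted binary client property. Everything below (dimensions, leak strength, the property itself) is fabricated for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_rounds, dim = 300, 50
property_bit = rng.integers(0, 2, size=n_rounds)            # target property
base = rng.normal(size=(n_rounds, dim))                     # benign variation
leak = 0.3 * property_bit[:, None] * rng.normal(size=(1, dim))
updates = base + leak                                       # "aggregated" updates

# A plain linear classifier is enough to pick up the leaked signal.
clf = LogisticRegression(max_iter=1000).fit(updates[:200], property_bit[:200])
print("held-out inference accuracy:", clf.score(updates[200:], property_bit[200:]))
```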
- FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC).
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local model parameters and the global model parameters.
Experimental results and analysis demonstrate that FedDC yields faster convergence and better performance on various image classification tasks.
arXiv Detail & Related papers (2022-03-22T14:06:26Z)
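A minimal sketch of the drift idea: each client keeps an auxiliary drift variable `h` and penalizes the gap between drift-corrected local parameters and the global ones, folding the residual gap back into `h` after training. The single quadratic penalty and step sizes are simplifications of the paper's full objective.

```python
import numpy as np

def feddc_local_step(theta_local, theta_global, h, grad_fn, lr=0.1, lam=0.1):
    """One sketched local update with a drift-correction penalty
    lam/2 * ||theta_local + h - theta_global||^2."""
    grad = grad_fn(theta_local)                       # task-loss gradient
    penalty_grad = lam * (theta_local + h - theta_global)
    return theta_local - lr * (grad + penalty_grad)

def feddc_update_drift(h, theta_local, theta_global):
    """After local training, fold the remaining gap into the drift."""
    return h + (theta_local - theta_global)

theta_g = np.zeros(4)
theta_l = np.ones(4)
h = np.zeros(4)
theta_l = feddc_local_step(theta_l, theta_g, h, grad_fn=lambda t: 2 * t)
h = feddc_update_drift(h, theta_l, theta_g)
print(theta_l, h)
```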
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify it over model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
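As a reference point for this notion of influence, here is the naive leave-one-out baseline: recompute the aggregate without each client and measure how far it moves. The paper's contribution is a cheaper estimator of such a quantity; the uniform FedAvg weighting below is an assumption.

```python
import numpy as np

def loo_influence(updates, weights=None):
    """Leave-one-out influence of each client on the aggregated
    parameters: distance the aggregate moves when that client is removed."""
    n = len(updates)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    full = sum(wi * u for wi, u in zip(w, updates))
    influence = []
    for i in range(n):
        rest_w = np.delete(w, i)
        rest_w = rest_w / rest_w.sum()  # renormalize remaining weights
        rest = sum(wi * u for wi, u in zip(rest_w, np.delete(updates, i, axis=0)))
        influence.append(float(np.linalg.norm(full - rest)))
    return influence

rng = np.random.default_rng(6)
updates = rng.normal(size=(5, 8))  # 5 clients, 8-dim updates
print(loo_influence(updates))
```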