FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning
- URL: http://arxiv.org/abs/2404.02478v1
- Date: Wed, 3 Apr 2024 05:36:21 GMT
- Title: FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning
- Authors: Rishub Tamirisa, Chulin Xie, Wenxuan Bao, Andy Zhou, Ron Arel, Aviv Shamsian
- Abstract summary: FedSelect is a novel PFL algorithm inspired by the iterative subnetwork discovery procedure used for the Lottery Ticket Hypothesis.
We show that FedSelect outperforms recent state-of-the-art PFL algorithms under challenging client data heterogeneity settings.
- Score: 9.22574528776347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard federated learning approaches suffer when client data distributions have sufficient heterogeneity. Recent methods addressed the client data heterogeneity issue via personalized federated learning (PFL) - a class of FL algorithms aiming to personalize learned global knowledge to better suit the clients' local data distributions. Existing PFL methods usually decouple global updates in deep neural networks by performing personalization on particular layers (i.e. classifier heads) and global aggregation for the rest of the network. However, preselecting network layers for personalization may result in suboptimal storage of global knowledge. In this work, we propose FedSelect, a novel PFL algorithm inspired by the iterative subnetwork discovery procedure used for the Lottery Ticket Hypothesis. FedSelect incrementally expands subnetworks to personalize client parameters, concurrently conducting global aggregations on the remaining parameters. This approach enables the personalization of both client parameters and subnetwork structure during the training process. Finally, we show that FedSelect outperforms recent state-of-the-art PFL algorithms under challenging client data heterogeneity settings and demonstrates robustness to various real-world distributional shifts. Our code is available at https://github.com/lapisrocks/fedselect.
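To make the described split concrete, the following is a minimal sketch (our own illustration, not the authors' implementation; all function and variable names are ours) of how a per-client binary mask can route each parameter entry to either local personalization or global aggregation, with Lottery-Ticket-style mask growth:

```python
import torch

def merge_params(local, global_, masks):
    """Compose a client's model: masked entries keep the personalized (local)
    value, unmasked entries take the globally aggregated value."""
    return {k: torch.where(masks[k], local[k], global_[k]) for k in local}

def aggregate_unmasked(client_params, client_masks, keys):
    """Server step: average each parameter entry over only the clients that
    did NOT personalize it."""
    agg = {}
    for k in keys:
        shared = torch.stack([torch.where(m[k], torch.zeros_like(p[k]), p[k])
                              for p, m in zip(client_params, client_masks)])
        counts = torch.stack([(~m[k]).float() for m in client_masks]).sum(0)
        agg[k] = shared.sum(0) / counts.clamp(min=1.0)
    return agg

def grow_mask(mask, scores, grow_frac=0.1):
    """Incrementally expand the personalized subnetwork: promote the
    top-scoring still-shared entries (scoring by update magnitude is an
    assumption on our part) into the personalized set."""
    blocked = torch.where(mask, torch.full_like(scores, -float("inf")), scores)
    k = max(1, int(grow_frac * mask.numel()))
    idx = blocked.flatten().topk(k).indices
    new_mask = mask.flatten().clone()
    new_mask[idx] = True
    return new_mask.view_as(mask)
```

In such a scheme, each round a client would train, grow its mask, and send only the unmasked entries back for aggregation; the masked remainder never leaves the client.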
Related papers
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
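As a rough illustration of similarity-weighted aggregation (a generic sketch, not PFL-GAN's actual procedure; the cosine-similarity weighting and softmax temperature are our assumptions):

```python
import torch
import torch.nn.functional as F

def personalized_aggregate(client_vecs, i, temperature=1.0):
    """Build client i's personalized aggregate as a similarity-weighted
    average over all clients' flattened parameter vectors."""
    target = client_vecs[i]
    sims = torch.stack([F.cosine_similarity(target, v, dim=0)
                        for v in client_vecs])
    weights = torch.softmax(sims / temperature, dim=0)  # more similar -> larger weight
    return sum(w * v for w, v in zip(weights, client_vecs))
```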
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, named FedCSD, a Class prototype Similarity Distillation in a federated framework to align the local and global models.
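A minimal sketch of the logit-alignment idea in a generic knowledge-distillation form (FedCSD's actual loss is built on class-prototype similarity, which we do not reproduce here):

```python
import torch.nn.functional as F

def logit_alignment_loss(local_logits, global_logits, labels, tau=2.0, alpha=0.5):
    """Supervised loss plus a KL term pulling the local model's softened
    logits toward those of the (frozen) global model."""
    ce = F.cross_entropy(local_logits, labels)
    kd = F.kl_div(F.log_softmax(local_logits / tau, dim=1),
                  F.softmax(global_logits.detach() / tau, dim=1),
                  reduction="batchmean") * tau * tau
    return (1 - alpha) * ce + alpha * kd
```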
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Take Your Pick: Enabling Effective Personalized Federated Learning within Low-dimensional Feature Space [22.433424519577642]
We propose a novel PFL framework named FedPick.
FedPick achieves PFL in the low-dimensional feature space by selecting task-relevant features adaptively for each client.
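One way to picture adaptive, per-client feature selection in a low-dimensional feature space (an illustrative gating sketch under our own naming, not FedPick's exact mechanism):

```python
import torch
import torch.nn as nn

class GatedHead(nn.Module):
    """Client-specific head that softly selects task-relevant features
    from a shared low-dimensional representation."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.gate_logits = nn.Parameter(torch.zeros(feat_dim))  # learned per client
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, features):
        gate = torch.sigmoid(self.gate_logits)  # in (0, 1): soft feature selection
        return self.classifier(features * gate)
```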
arXiv Detail & Related papers (2023-07-26T07:07:27Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data-uniform sampling strategy for federated learning (FedSampling).
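To illustrate data-uniform versus client-uniform sampling (a simplified sketch; the privacy-preserving estimation of client data sizes that FedSampling relies on is omitted), sampling clients in proportion to their local data size makes every training example equally likely to be used:

```python
import random

def sample_clients_data_uniform(data_sizes, num_sampled, rng=random):
    """Pick clients with probability proportional to local dataset size,
    so each underlying example has the same chance of contributing.
    data_sizes: dict mapping client id -> number of local examples."""
    clients = list(data_sizes)
    weights = [data_sizes[c] for c in clients]
    return rng.choices(clients, weights=weights, k=num_sampled)

# Client-uniform sampling, by contrast, over-represents data held by
# small clients: rng.sample(clients, k=num_sampled)
```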
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- FedSelect: Customized Selection of Parameters for Fine-Tuning during Personalized Federated Learning [0.873811641236639]
We propose a novel FL framework, FedSelect, that directly personalizes both client subnetwork structure and parameters.
We show that this method achieves promising results on CIFAR-10.
arXiv Detail & Related papers (2023-06-23T02:22:04Z)
- Personalized Federated Learning on Long-Tailed Data via Adversarial Feature Augmentation [24.679535905451758]
PFL aims to learn personalized models for each client based on the knowledge across all clients in a privacy-preserving manner.
Existing PFL methods assume that the underlying global data across all clients are uniformly distributed, overlooking long-tailed distributions.
We propose Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem in PFL.
arXiv Detail & Related papers (2023-03-27T13:00:20Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
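A simplified sketch of optimizing the aggregation weights on the server (the small server-side proxy dataset is our assumption; SmartFL restricts this optimization to the subspace spanned by the client models, which the convex combination below only loosely mirrors):

```python
import torch

def optimize_aggregation(client_vecs, proxy_loss_fn, steps=50, lr=0.1):
    """Learn convex combination weights over client parameter vectors by
    minimizing a loss on a small trusted server dataset.
    client_vecs: list of flattened client model tensors (the subspace basis).
    proxy_loss_fn: maps an aggregated parameter vector to a scalar loss."""
    stacked = torch.stack(client_vecs)           # (num_clients, dim)
    logits = torch.zeros(len(client_vecs), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        weights = torch.softmax(logits, dim=0)   # convex weights can downweight poisoned updates
        loss = proxy_loss_fn(weights @ stacked)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits.detach(), dim=0)
```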
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
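As a generic illustration of alleviating forgetting during local training (a proximal-style sketch of the idea, not FedReg's actual mechanism, which we do not reproduce):

```python
import torch

def proximal_retention(local_params, global_params, mu=0.01):
    """Penalty tying local weights to the global model so that long local
    training does not erase previously aggregated global knowledge."""
    return (mu / 2) * sum(torch.sum((p - g.detach()) ** 2)
                          for p, g in zip(local_params, global_params))
```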
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Personalized Federated Learning with Exact Stochastic Gradient Descent [15.666401346622575]
We present a new approach for personalized federated learning that achieves exact stochastic gradient descent (SGD) updates.
We propose a novel SGD-type scheme where, at each optimization round, randomly selected clients perform updates over their client-specific parameters towards optimizing the loss function.
This allows exact minimization: updates of the personalized parameters are performed by the clients, separately from those of the common ones.
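A schematic of the client-side half of this split (a minimal sketch under our own naming; the paper's exact update schedule is not reproduced):

```python
import torch

def client_round(shared, personal, data_loss, lr=0.01, local_steps=10):
    """A selected client optimizes only its client-specific parameters,
    leaving the shared parameters frozen for the round.
    shared, personal: lists of tensors; data_loss evaluates the loss of the
    combined model on this client's own data."""
    opt = torch.optim.SGD(personal, lr=lr)
    for _ in range(local_steps):
        loss = data_loss(shared, personal)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return personal
```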
arXiv Detail & Related papers (2022-02-20T16:11:20Z)
- Inference-Time Personalized Federated Learning [17.60724466773559]
Inference-Time PFL (IT-PFL) is the setting in which a model trained on a set of clients must later be evaluated on novel unlabeled clients at inference time.
We propose a novel approach to this problem, IT-PFL-HN, based on a hypernetwork module and an encoder module.
We find that IT-PFL-HN generalizes better than current FL and PFL methods, especially when the novel client has a large domain shift.
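A schematic of the encoder-plus-hypernetwork pipeline (an illustrative sketch; module sizes, architectures, and how the generated weights are consumed are our assumptions):

```python
import torch
import torch.nn as nn

class ClientEncoder(nn.Module):
    """Maps a client's unlabeled data to a descriptor by averaging
    per-example embeddings (permutation-invariant)."""
    def __init__(self, in_dim, embed_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, x):              # x: (num_examples, in_dim)
        return self.net(x).mean(dim=0)

class HyperNet(nn.Module):
    """Generates the weights of a linear classifier head from a client
    descriptor, so a novel client gets a personalized model with no
    client-side training."""
    def __init__(self, embed_dim, feat_dim, num_classes):
        super().__init__()
        self.w = nn.Linear(embed_dim, feat_dim * num_classes)
        self.b = nn.Linear(embed_dim, num_classes)
        self.feat_dim, self.num_classes = feat_dim, num_classes

    def forward(self, descriptor):
        W = self.w(descriptor).view(self.num_classes, self.feat_dim)
        return W, self.b(descriptor)
```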
arXiv Detail & Related papers (2021-11-16T10:57:20Z)