Take Your Pick: Enabling Effective Personalized Federated Learning
within Low-dimensional Feature Space
- URL: http://arxiv.org/abs/2307.13995v1
- Date: Wed, 26 Jul 2023 07:07:27 GMT
- Title: Take Your Pick: Enabling Effective Personalized Federated Learning
within Low-dimensional Feature Space
- Authors: Guogang Zhu, Xuefeng Liu, Shaojie Tang, Jianwei Niu, Xinghao Wu,
Jiaxing Shen
- Abstract summary: We propose a novel PFL framework named FedPick.
FedPick achieves PFL in the low-dimensional feature space by selecting task-relevant features adaptively for each client.
- Score: 22.433424519577642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning (PFL) is a popular framework that allows
clients to have different models to address application scenarios where
clients' data are in different domains. The typical model of a client in PFL
features a global encoder trained by all clients to extract universal features
from the raw data and personalized layers (e.g., a classifier) trained using
the client's local data. Nonetheless, due to differences between the data
distributions of different clients (i.e., domain gaps), the universal features
produced by the global encoder inevitably contain many components irrelevant
to a particular client's local task. Some recent PFL methods address this
problem by personalizing specific parameters within the encoder. However, these
methods encounter substantial challenges arising from the high dimensionality
and non-linearity of the neural network parameter space. In contrast, the feature
space exhibits a lower dimensionality, providing greater intuitiveness and
interpretability as compared to the parameter space. To this end, we propose a
novel PFL framework named FedPick. FedPick achieves PFL in the low-dimensional
feature space by selecting task-relevant features adaptively for each client
from the features generated by the global encoder based on its local data
distribution. It presents a more accessible and interpretable implementation of
PFL compared to those methods working in the parameter space. Extensive
experimental results show that FedPick can effectively select task-relevant
features for each client and improve model performance in cross-domain FL.
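The abstract leaves the selection mechanism unspecified; the sketch below illustrates one plausible reading, a client-local learned gate that keeps only the top-k task-relevant dimensions of the global encoder's output. The `FeatureGate` module, the top-k rule, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureGate(nn.Module):
    """Hypothetical per-client gate: scores each encoder feature and keeps
    only the top-k dimensions, sketching FedPick's idea of personalizing
    in the low-dimensional feature space rather than the parameter space."""

    def __init__(self, feat_dim: int, keep_ratio: float = 0.5):
        super().__init__()
        # One learnable relevance score per feature dimension, trained
        # only on the client's local data and never sent to the server.
        self.scores = nn.Parameter(torch.zeros(feat_dim))
        self.k = max(1, int(feat_dim * keep_ratio))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        weights = torch.sigmoid(self.scores)          # soft relevance in (0, 1)
        topk = torch.topk(weights, self.k).indices
        mask = torch.zeros_like(weights).scatter_(0, topk, 1.0)
        # Straight-through trick: hard mask forward, soft gradient backward.
        mask = mask + weights - weights.detach()
        return features * mask

# Usage: the encoder is shared globally; gate and classifier stay local.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
gate, classifier = FeatureGate(256), nn.Linear(256, 10)
logits = classifier(gate(encoder(torch.randn(8, 1, 28, 28))))
```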
Related papers
- Tackling Feature-Classifier Mismatch in Federated Learning via Prompt-Driven Feature Transformation [12.19025665853089]
In traditional Federated Learning approaches, the global model underperforms when faced with data heterogeneity.
We propose a new PFL framework called FedPFT to address the mismatch problem while enhancing the quality of the feature extractor.
Our experiments demonstrate that FedPFT outperforms state-of-the-art methods by up to 7.08%.
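Going only by this summary, a prompt-driven feature transformation could be sketched as client-local prompt tokens producing a residual correction that re-aligns encoder features with the shared classifier. The attention-based design and every name below are assumptions, not FedPFT's actual architecture.

```python
import torch
import torch.nn as nn

class PromptFeatureTransform(nn.Module):
    """Speculative sketch: learnable prompt tokens (kept client-local)
    attend to each feature vector and emit a residual correction,
    nudging features toward what the shared classifier expects."""

    def __init__(self, feat_dim: int, n_prompts: int = 4):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, feat_dim) * 0.02)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        q = feats.unsqueeze(1)                                   # (B, 1, D)
        kv = self.prompts.unsqueeze(0).expand(feats.size(0), -1, -1)
        delta, _ = self.attn(q, kv, kv)                          # prompt-conditioned
        return feats + delta.squeeze(1)                          # residual re-alignment

aligned = PromptFeatureTransform(feat_dim=256)(torch.randn(8, 256))
```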
arXiv Detail & Related papers (2024-07-23T02:52:52Z)
- FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning [9.22574528776347]
FedSelect is a novel PFL algorithm inspired by the iterative subnetwork discovery procedure used for the Lottery Ticket Hypothesis.
We show that FedSelect outperforms recent state-of-the-art PFL algorithms under challenging client data heterogeneity settings.
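The summary points to a Lottery-Ticket-style loop that gradually promotes parameters into a personal subnetwork. A minimal sketch of that idea follows; the drift criterion and the growth schedule are assumptions, not the paper's exact procedure.

```python
import torch

def grow_personal_mask(local_state, global_state, masks, grow_frac=0.05):
    """Each round, promote the parameters whose local values drifted
    farthest from the global model into the personal subnetwork
    (mask = 1 means 'personal': never overwritten by aggregation)."""
    for name, local_w in local_state.items():
        drift = (local_w - global_state[name]).abs()
        drift[masks[name].bool()] = float("-inf")        # skip already-personal
        n_grow = max(1, int(grow_frac * drift.numel()))
        idx = torch.topk(drift.view(-1), n_grow).indices
        masks[name].view(-1)[idx] = 1.0
    return masks

def merge(local_state, global_state, masks):
    """Personal entries keep local values; the rest follow the global model."""
    return {n: torch.where(masks[n].bool(), local_state[n], global_state[n])
            for n in local_state}
```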
arXiv Detail & Related papers (2024-04-03T05:36:21Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present FedIns, a novel FL algorithm that handles intra-client data heterogeneity by enabling instance-adaptive inference within the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., achieving a 6.64% improvement over the top-performing method with less than 15% of the communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data uniform sampling strategy for federated learning (FedSampling).
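A minimal sketch of data-uniform (rather than client-uniform) sampling: weight each client by its local data size, so every training sample is roughly equally likely to contribute in a round. The privacy-preserving size estimation described in the paper is omitted here.

```python
import random

def sample_clients_data_uniform(client_sizes, n_pick, rng=random):
    """Draw clients with probability proportional to their (estimated)
    local data size, approximating uniform sampling over all samples."""
    clients = list(client_sizes)
    weights = [client_sizes[c] for c in clients]
    return rng.choices(clients, weights=weights, k=n_pick)

# A client holding 10x more data is 10x more likely to be drawn.
picked = sample_clients_data_uniform({"a": 100, "b": 1000, "c": 10000}, n_pick=2)
```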
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- FedSelect: Customized Selection of Parameters for Fine-Tuning during Personalized Federated Learning [0.873811641236639]
We propose a novel FL framework, FedSelect, that directly personalizes both client subnetwork structure and parameters.
We show that this method achieves promising results on CIFAR-10.
arXiv Detail & Related papers (2023-06-23T02:22:04Z)
- Personalized Federated Learning on Long-Tailed Data via Adversarial Feature Augmentation [24.679535905451758]
PFL aims to learn personalized models for each client based on the knowledge across all clients in a privacy-preserving manner.
Existing PFL methods assume that the global data across all clients are uniformly distributed, without considering long-tailed distributions.
We propose Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem in PFL.
arXiv Detail & Related papers (2023-03-27T13:00:20Z)
- Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
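As summarized, each client keeps a private visual prompt that encodes its data distribution in pixel space while the backbone is shared. A hedged sketch with an additive prompt; the prompt form and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    """Client-local learnable perturbation added to every input image,
    so the shared backbone implicitly sees the client's distribution."""

    def __init__(self, channels: int = 3, size: int = 32):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, channels, size, size))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return images + self.prompt          # broadcast over the batch

# The backbone is aggregated by the server; the prompt never leaves the client.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
logits = backbone(VisualPrompt()(torch.randn(4, 3, 32, 32)))
```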
arXiv Detail & Related papers (2023-03-15T15:02:15Z)
- Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning (FU) aims to remove a specified target client's contribution from a federated learning (FL) model to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent.
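A rough sketch of the gradient-ascent step described above; restricting the ascent direction so the remaining clients are protected is one reading of "subspace based" and should be treated as an assumption.

```python
import torch

def unlearning_step(model, loss_fn, target_batch, protected_bases, lr=0.01):
    """Ascend the loss on the target client's data, after projecting the
    gradient out of the subspace spanned by the other clients' data
    (protected_bases: per-parameter matrices with orthonormal columns,
    or None), so their performance is preserved."""
    x, y = target_batch
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, basis in zip(model.parameters(), protected_bases):
            g = p.grad.flatten()
            if basis is not None:
                g = g - basis @ (basis.T @ g)   # remove protected directions
            p += lr * g.view_as(p)              # ascent, not descent
            p.grad = None
```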
arXiv Detail & Related papers (2023-02-24T04:29:44Z)
- Personalized Federated Learning with Multi-branch Architecture [0.0]
Federated learning (FL) enables multiple clients to collaboratively train models without requiring clients to reveal their raw data to each other.
We propose a new PFL method (pFedMB) using multi-branch architecture, which achieves personalization by splitting each layer of a neural network into multiple branches and assigning client-specific weights to each branch.
We experimentally show that pFedMB performs better than the state-of-the-art PFL methods using the CIFAR10 and CIFAR100 datasets.
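The branch-splitting idea is concrete enough to sketch: branch parameters are shared and aggregated globally, while the per-branch mixing weights stay on the client. The softmax mixing and all names below are assumptions.

```python
import torch
import torch.nn as nn

class MultiBranchLinear(nn.Module):
    """One pFedMB-style layer: several parallel branches (globally
    aggregated) combined with client-specific weights (trained locally
    and excluded from server aggregation)."""

    def __init__(self, in_dim: int, out_dim: int, n_branches: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(n_branches)])
        self.branch_logits = nn.Parameter(torch.zeros(n_branches))  # personal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.branch_logits, dim=0)
        return sum(wi * b(x) for wi, b in zip(w, self.branches))

out = MultiBranchLinear(128, 10)(torch.randn(4, 128))
```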
arXiv Detail & Related papers (2022-11-15T06:30:57Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
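One way to read "optimizing server-side aggregation via subspace training" is to learn a coefficient per client update against a small server-side proxy objective; the sketch below makes that concrete (the proxy loss, softmax parameterization, and optimizer are assumptions).

```python
import torch

def smart_aggregate(global_flat, client_flats, proxy_loss, steps=50, lr=0.1):
    """Learn per-client mixing coefficients by minimizing a proxy loss;
    the search space has only len(client_flats) dimensions, which also
    damps poisoned updates. Inputs are detached 1-D parameter vectors;
    proxy_loss maps a candidate vector to a differentiable scalar."""
    deltas = torch.stack([c - global_flat for c in client_flats])  # (K, dim)
    alpha = torch.zeros(len(client_flats), requires_grad=True)
    opt = torch.optim.Adam([alpha], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        candidate = global_flat + torch.softmax(alpha, 0) @ deltas
        proxy_loss(candidate).backward()
        opt.step()
    return (global_flat + torch.softmax(alpha, 0) @ deltas).detach()
```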
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
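With a Laplace approximation, each client's posterior is a Gaussian, so the server can fuse them in closed form as a product of Gaussians. A diagonal-covariance sketch (the paper's online update rule is not reproduced here):

```python
import torch

def aggregate_gaussian_posteriors(means, precisions):
    """Product of diagonal Gaussians N(m_k, P_k^-1): the global precision
    is the sum of client precisions, and the global mean is the
    precision-weighted average of client means."""
    prec = torch.stack(list(precisions)).sum(0)
    mean = torch.stack([p * m for p, m in zip(precisions, means)]).sum(0) / prec
    return mean, prec

# Client precisions could come from, e.g., a diagonal Fisher estimate.
mean, prec = aggregate_gaussian_posteriors(
    [torch.tensor([0.0, 1.0]), torch.tensor([2.0, 1.0])],
    [torch.tensor([1.0, 4.0]), torch.tensor([3.0, 4.0])])
```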
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.