FediOS: Decoupling Orthogonal Subspaces for Personalization in
Feature-skew Federated Learning
- URL: http://arxiv.org/abs/2311.18559v1
- Date: Thu, 30 Nov 2023 13:50:38 GMT
- Title: FediOS: Decoupling Orthogonal Subspaces for Personalization in
Feature-skew Federated Learning
- Authors: Lingzhi Gao, Zexi Li, Yang Lu, Chao Wu
- Abstract summary: In personalized federated learning (pFL), clients may have heterogeneous (also known as non-IID) data.
In FediOS, we reformulate the decoupling into two feature extractors (generic and personalized) and one shared prediction head.
In addition, a shared prediction head is trained to balance the importance of generic and personalized features during inference.
- Score: 6.076894295435773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning (pFL) enables collaborative training among
multiple clients to enhance the capability of customized local models. In pFL,
clients may have heterogeneous (also known as non-IID) data, which poses a key
challenge in how to decouple the data knowledge into generic knowledge for
global sharing and personalized knowledge for preserving local personalization.
Typical pFL methods focus on label distribution skew and adopt a decoupling
scheme in which the model is split into a common feature extractor and two
prediction heads (generic and personalized). However, such a decoupling scheme
cannot solve the essential problem of feature-skew heterogeneity, because a
common feature extractor cannot decouple the generic and personalized
features. Therefore, in this paper, we rethink the architecture decoupling
design for feature-skew pFL and propose an effective pFL method called FediOS.
In FediOS, we reformulate the decoupling into two feature extractors (generic
and personalized) and one shared prediction head. Clients use orthogonal
projections to map generic features into one common subspace and to scatter
personalized features into distinct subspaces, thereby decoupling the two. In
addition, the shared prediction head is trained to balance the importance of
generic and personalized features during inference.
Extensive experiments on four vision datasets demonstrate that our method
achieves state-of-the-art pFL performance under feature-skew heterogeneity.
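
To make the architecture concrete, here is a minimal NumPy sketch of the decoupling described in the abstract: generic features are projected onto one subspace shared by all clients, personalized features onto a client-specific subspace, and a single shared head consumes both streams. The dimensions, the random orthonormal bases, and the linear head are our own illustrative assumptions, not the paper's construction.

```python
import numpy as np

def orthonormal_basis(dim, k, seed):
    # Columns form an orthonormal basis of a k-dim subspace of R^dim (QR trick).
    g = np.random.default_rng(seed).normal(size=(dim, k))
    q, _ = np.linalg.qr(g)
    return q

def project(z, basis):
    # Orthogonal projection of feature vector z onto span(basis).
    return basis @ (basis.T @ z)

dim, k, n_classes = 16, 4, 10
P_common = orthonormal_basis(dim, k, seed=0)   # subspace shared by all clients
P_client = orthonormal_basis(dim, k, seed=7)   # this client's own subspace

rng = np.random.default_rng(1)
z_generic = rng.normal(size=dim)    # output of the generic feature extractor
z_personal = rng.normal(size=dim)   # output of the personalized feature extractor

z_g = project(z_generic, P_common)   # mapped into the common subspace
z_p = project(z_personal, P_client)  # scattered into a client-specific subspace

# The shared prediction head sees both streams and learns to balance them;
# a plain linear head stands in for whatever the paper actually uses.
W = rng.normal(size=(n_classes, 2 * dim)) * 0.01
logits = W @ np.concatenate([z_g, z_p])
print(logits.shape)  # (10,)
```

In the paper the per-client subspaces presumably need to be (near-)orthogonal to each other for the personalized features to stay decoupled across clients; independently drawn random bases are only a stand-in for that construction.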
Related papers
- Personalized Federated Learning with Adaptive Feature Aggregation and Knowledge Transfer [0.0]
Federated Learning (FL) is popular as a privacy-preserving machine learning paradigm for generating a single model on decentralized data.
We propose a new method, personalized Federated learning with Adaptive Feature Aggregation and Knowledge Transfer (FedAFK).
We conduct extensive experiments on three datasets in two widely-used heterogeneous settings and show the superior performance of our proposed method over thirteen state-of-the-art baselines.
arXiv Detail & Related papers (2024-10-19T11:32:39Z)
- Tackling Feature-Classifier Mismatch in Federated Learning via Prompt-Driven Feature Transformation [12.19025665853089]
In traditional Federated Learning approaches, the global model underperforms when faced with data heterogeneity.
We propose a new PFL framework called FedPFT to address the mismatch problem while enhancing the quality of the feature extractor.
Our experiments demonstrate that FedPFT outperforms state-of-the-art methods by up to 7.08%.
arXiv Detail & Related papers (2024-07-23T02:52:52Z)
- pFedAFM: Adaptive Feature Mixture for Batch-Level Personalization in Heterogeneous Federated Learning [34.01721941230425]
We propose a model-heterogeneous personalized Federated learning approach with Adaptive Feature Mixture (pFedAFM) for supervised learning tasks.
It significantly outperforms 7 state-of-the-art MHPFL methods, achieving up to 7.93% accuracy improvement.
arXiv Detail & Related papers (2024-04-27T09:52:59Z)
- FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity [82.5448598805968]
We present an effective and adaptable federated framework, FedP3, representing Federated Personalized and Privacy-friendly network Pruning.
We offer a theoretical interpretation of FedP3 and its locally differential-private variant, DP-FedP3, and theoretically validate their efficiencies.
arXiv Detail & Related papers (2024-04-15T14:14:05Z)
- Spectral Co-Distillation for Personalized Federated Learning [69.97016362754319]
We propose a novel distillation method based on model spectrum information to better capture generic versus personalized representations.
We also introduce a co-distillation framework that establishes a two-way bridge between generic and personalized model training.
We demonstrate the superior performance and efficacy of our proposed spectral co-distillation method, as well as our wait-free training protocol (a generic two-way distillation coupling is sketched after this entry).
arXiv Detail & Related papers (2024-01-29T16:01:38Z)
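
As a reading aid for the entry above, the following toy sketch shows a generic two-way distillation coupling, in which the generic and personalized models each gain a KL term pulling them toward the other. The paper's spectrum-based use of model information is not reproduced here; the logits and temperature are invented for illustration.

```python
import numpy as np

def softmax(z, temp=1.0):
    # Temperature-scaled softmax, stabilized by subtracting the max logit.
    e = np.exp((z - z.max()) / temp)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    # KL divergence between two discrete distributions.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy logits for one sample from the generic and the personalized model.
z_generic = np.array([2.0, 0.5, -1.0])
z_personal = np.array([1.5, 1.0, -0.5])

p_g = softmax(z_generic, temp=2.0)
p_p = softmax(z_personal, temp=2.0)

# Two-way bridge: each model's objective gains a term toward the other.
loss_personal = kl(p_g, p_p)  # added to the personalized model's objective
loss_generic = kl(p_p, p_g)   # added to the generic model's objective
print(loss_personal, loss_generic)
```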
- Fed-CO2: Cooperation of Online and Offline Models for Severe Data Heterogeneity in Federated Learning [14.914477928398133]
Federated Learning (FL) has emerged as a promising distributed learning paradigm.
The effectiveness of FL is highly dependent on the quality of the data that is being used for training.
We propose Fed-CO$_2$, a universal FL framework that handles both label distribution skew and feature skew.
arXiv Detail & Related papers (2023-12-21T15:12:12Z)
- Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
arXiv Detail & Related papers (2023-10-27T17:22:09Z)
- PPFL: A Personalized Federated Learning Framework for Heterogeneous Population [30.51508591732483]
We develop a flexible and interpretable personalized framework within the paradigm of Federated Learning, called PPFL.
By leveraging canonical models, it models the heterogeneity as clients' preferences for these canonical models, expressed as membership preferences (a toy rendering follows this entry).
We conduct experiments on both pathological characteristics and practical datasets, and the results validate the effectiveness of PPFL.
arXiv Detail & Related papers (2023-10-22T16:06:27Z)
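
Read literally, the PPFL entry above suggests that a client's personalized model is a preference-weighted combination of a few canonical models. The sketch below is our own toy rendering of that idea; the number of canonical models, the softmax parameterization of membership, and the linear-model setting are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 3, 5                          # K canonical models over d features
canonical = rng.normal(size=(K, d))  # rows: canonical (linear) models

def personalize(preferences, canonical):
    # Membership weights as a softmax over a client's preference scores.
    w = np.exp(preferences - preferences.max())
    w = w / w.sum()
    return w @ canonical             # client model: convex combination

client_model = personalize(rng.normal(size=K), canonical)
print(client_model.shape)  # (5,)
```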
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation (sketched after this entry).
Empirical results from rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
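
The "learn similarity, then aggregate with weights" step from the PFL-GAN entry can be illustrated generically. The sketch below uses cosine similarity between flattened client updates as a stand-in similarity measure; the paper's actual similarity learning and GAN sharing are not reproduced.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity of two flattened vectors, guarded against zero norms.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def weighted_aggregate_for(i, updates):
    # Weight each peer's update by its (clipped) similarity to client i.
    sims = np.array([max(cosine(updates[i], u), 0.0) for u in updates])
    weights = sims / sims.sum()
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)
updates = [rng.normal(size=8) for _ in range(4)]  # toy flattened model updates
personalized_update = weighted_aggregate_for(0, updates)
print(personalized_update.shape)  # (8,)
```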
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems have to face large communication burdens if the central FL server fails.
It personalizes the "right" components of the deep model by alternately updating the shared and personal parameters (see the alternating-update sketch after this entry).
To further promote aggregation of the shared parameters, we propose DFed, integrating local Sharpness Minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
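
The alternating update of shared and personal parameters mentioned in the entry above is a form of alternating minimization. Below is a toy regression sketch under our own assumptions (a linear model whose weights split into a shared and a personal part); it is not the paper's DFed algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, lr = rng.normal(size=4), 1.0, 0.05
shared = rng.normal(size=4)    # parameters aggregated with peers
personal = rng.normal(size=4)  # parameters kept local

def residual(shared, personal):
    # Prediction error of the combined linear model on one toy sample.
    return (shared + personal) @ x - y

for _ in range(200):
    # Alternate: update shared with personal frozen, then the reverse.
    shared -= lr * 2 * residual(shared, personal) * x
    personal -= lr * 2 * residual(shared, personal) * x

print(round(residual(shared, personal) ** 2, 6))  # loss ~ 0 after training
```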
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge (a toy mask sketch follows this entry).
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
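
A personalized sparse mask, as in the FedSpa entry above, can be pictured as a binary mask applied elementwise to globally shared weights. The top-k magnitude criterion and the 80% sparsity below are our own illustrative choices, not FedSpa's mask-optimization procedure.

```python
import numpy as np

def topk_mask(weights, sparsity):
    # Keep the largest-magnitude entries; zero out a `sparsity` fraction.
    k = int(round((1.0 - sparsity) * weights.size))
    mask = np.zeros_like(weights)
    keep = np.argsort(np.abs(weights).ravel())[-k:]
    mask.ravel()[keep] = 1.0
    return mask

rng = np.random.default_rng(0)
global_weights = rng.normal(size=(6, 6))  # weights shared via FL aggregation
client_mask = topk_mask(global_weights, sparsity=0.8)  # personalized mask
local_model = global_weights * client_mask  # client's sparse local model
print(int(client_mask.sum()))  # 7 of 36 weights kept
```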