Addressing Skewed Heterogeneity via Federated Prototype Rectification with Personalization
- URL: http://arxiv.org/abs/2408.07966v2
- Date: Fri, 23 Aug 2024 02:03:54 GMT
- Title: Addressing Skewed Heterogeneity via Federated Prototype Rectification with Personalization
- Authors: Shunxin Guo, Hongsong Wang, Shuxia Lin, Zhiqiang Kou, Xin Geng
- Abstract summary: Federated learning is an efficient framework designed to facilitate collaborative model training across multiple distributed devices.
A significant challenge of federated learning is data-level heterogeneity, i.e., skewed or long-tailed distribution of private data.
We propose a novel method, Federated Prototype Rectification with Personalization, which consists of two parts: Federated Personalization and Federated Prototype Rectification.
- Score: 35.48757125452761
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is an efficient framework designed to facilitate collaborative model training across multiple distributed devices while preserving user data privacy. A significant challenge of federated learning is data-level heterogeneity, i.e., a skewed or long-tailed distribution of private data. Although various methods have been proposed to address this challenge, most of them assume that the underlying global data is uniformly distributed across all clients. This paper investigates federated learning under data-level heterogeneity with a brief review and redefines a more practical and challenging setting called Skewed Heterogeneous Federated Learning (SHFL). Accordingly, we propose a novel Federated Prototype Rectification with Personalization method, which consists of two parts: Federated Personalization and Federated Prototype Rectification. The former aims to construct balanced decision boundaries between dominant and minority classes based on private data, while the latter exploits both inter-class discrimination and intra-class consistency to rectify empirical prototypes. Experiments on three popular benchmarks show that the proposed approach outperforms current state-of-the-art methods and achieves balanced performance in both personalization and generalization.
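The abstract describes two ideas that lend themselves to a short illustration: each client summarizes every class of its skewed private data with an empirical prototype, and the server rectifies those prototypes using inter-class discrimination and intra-class consistency. The sketch below is a minimal, hypothetical rendering of that flow, not the authors' implementation; the function names, the aggregation rule, and the `alpha` rectification strength are all assumptions.

```python
# Minimal, illustrative sketch (not the paper's released code) of per-class prototype
# computation on a client and a simple server-side rectification step combining
# intra-class consistency with inter-class discrimination. Names are hypothetical.
import numpy as np


def client_prototypes(features: np.ndarray, labels: np.ndarray) -> dict[int, np.ndarray]:
    """Mean feature vector per class observed on one client's (possibly skewed) data."""
    return {int(c): features[labels == c].mean(axis=0) for c in np.unique(labels)}


def rectify_prototypes(client_protos: list[dict[int, np.ndarray]],
                       alpha: float = 0.5) -> dict[int, np.ndarray]:
    """Aggregate per-class prototypes across clients, then nudge each class prototype
    toward its cross-client consensus (intra-class consistency) and away from the
    mean of all classes (inter-class discrimination)."""
    classes = sorted({c for protos in client_protos for c in protos})
    # Intra-class consistency: average the empirical prototypes reported for each class.
    consensus = {c: np.mean([p[c] for p in client_protos if c in p], axis=0)
                 for c in classes}
    global_mean = np.mean(list(consensus.values()), axis=0)
    rectified = {}
    for c in classes:
        # Inter-class discrimination: push each prototype away from the all-class mean.
        rectified[c] = consensus[c] + alpha * (consensus[c] - global_mean)
    return rectified
```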
Related papers
- Comparative Evaluation of Clustered Federated Learning Method [0.5242869847419834]
Clustered Federated Learning (CFL) aims to partition clients into groups whose data distributions are homogeneous.
In this paper, we explore the performance of two state-of-the-art CFL algorithms with respect to a proposed taxonomy of data heterogeneities in federated learning (FL).
Our objective is to provide a clearer understanding of the relationship between CFL performances and data heterogeneous scenarios.
arXiv Detail & Related papers (2024-10-18T07:01:56Z)
- Dynamic Heterogeneous Federated Learning with Multi-Level Prototypes [45.13348636579529]
We study a new task, Dynamic Heterogeneous Federated Learning (DHFL), which addresses the practical scenario where heterogeneous data distributions exist across clients and dynamic tasks arise within each client.
To mitigate concept drift, we construct prototypes and semantic prototypes to provide fruitful generalization knowledge and ensure the continuity of prototype spaces.
Extensive experiments show that the proposed method achieves state-of-the-art performance in various settings.
arXiv Detail & Related papers (2023-12-15T15:28:25Z)
- Federated Learning Empowered by Generative Content [55.576885852501775]
Federated learning (FL) enables leveraging distributed private data for model training in a privacy-preserving way.
We propose a novel FL framework termed FedGC, designed to mitigate data heterogeneity issues by diversifying private data with generative content.
We conduct a systematic empirical study on FedGC, covering diverse baselines, datasets, scenarios, and modalities.
arXiv Detail & Related papers (2023-12-10T07:38:56Z)
- Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
For the heterogeneity issue, we leverage irrelevant unlabeled public data for communication.
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z)
- FedABC: Targeting Fair Competition in Personalized Federated Learning [76.9646903596757]
Federated learning aims to collaboratively train models without accessing its clients' local private data.
We propose a novel and generic PFL framework termed Federated Averaging via Binary Classification, dubbed FedABC.
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate unfair competition between classes; a generic sketch of this idea appears after this list.
arXiv Detail & Related papers (2023-02-15T03:42:59Z)
- Exploiting Personalized Invariance for Better Out-of-distribution Generalization in Federated Learning [13.246981646250518]
This paper presents a general dual-regularized learning framework to exploit personalized invariance, in contrast to existing personalized federated learning methods.
We show that our method is superior to existing federated learning and invariant learning methods in diverse out-of-distribution and non-IID data cases.
arXiv Detail & Related papers (2022-11-21T08:17:03Z)
- Private Set Generation with Discriminative Information [63.851085173614]
Differentially private data generation is a promising solution to the data privacy challenge.
Existing private generative models struggle with the utility of their synthetic samples.
We introduce a simple yet effective method that greatly improves the sample utility of state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-07T10:02:55Z)
- Heterogeneous Target Speech Separation [52.05046029743995]
We introduce a new paradigm for single-channel target source separation where the sources of interest can be distinguished using non-mutually exclusive concepts.
Our proposed heterogeneous separation framework can seamlessly leverage datasets with large distribution shifts.
arXiv Detail & Related papers (2022-04-07T17:14:20Z)
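The FedABC entry above mentions a one-vs-all training strategy used in each client to curb unfair competition between dominant and minority classes. The following is a generic, hypothetical sketch of that idea in PyTorch, not the FedABC objective itself; the `OneVsAllHead` module and `one_vs_all_loss` helper are illustrative names under assumed shapes.

```python
# Illustrative sketch of a generic "one-vs-all" client objective: each class gets its
# own binary (class-vs-rest) logit instead of competing inside a single softmax.
# This is not the authors' FedABC loss; names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class OneVsAllHead(nn.Module):
    """One binary logit per class on top of a shared feature extractor."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        # Each output dimension is treated as an independent binary task.
        self.classifiers = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.classifiers(feats)  # raw logits, one per class


def one_vs_all_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy against one-hot targets: class c versus all other classes."""
    targets = torch.zeros_like(logits).scatter_(1, labels.unsqueeze(1), 1.0)
    return nn.functional.binary_cross_entropy_with_logits(logits, targets)
```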
This list is automatically generated from the titles and abstracts of the papers listed on this site.