Dynamic Heterogeneous Federated Learning with Multi-Level Prototypes
- URL: http://arxiv.org/abs/2312.09881v1
- Date: Fri, 15 Dec 2023 15:28:25 GMT
- Title: Dynamic Heterogeneous Federated Learning with Multi-Level Prototypes
- Authors: Shunxin Guo, Hongsong Wang, Xin Geng
- Abstract summary: We study a new task, Dynamic Heterogeneous Federated Learning (DHFL), which addresses the practical scenario where heterogeneous data distributions exist across clients and tasks evolve dynamically within each client.
To mitigate concept drift, we construct prototypes and semantic prototypes to provide rich generalization knowledge and ensure the continuity of prototype spaces.
Extensive experiments show that the proposed method achieves state-of-the-art performance in various settings.
- Score: 45.13348636579529
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning shows promise as a privacy-preserving collaborative
learning technique. Existing work on heterogeneous federated learning mainly focuses on
skewed label distributions across clients. However, most approaches suffer
from catastrophic forgetting and concept drift, especially when the global
distribution of all classes is extremely unbalanced and the data distribution
of each client dynamically evolves over time. In this paper, we study a new
task, Dynamic Heterogeneous Federated Learning (DHFL), which addresses
the practical scenario where heterogeneous data distributions exist among
different clients and tasks evolve dynamically within each client. Accordingly, we propose
a novel federated learning framework named Federated Multi-Level Prototypes
(FedMLP) and design federated multi-level regularizations. To mitigate concept
drift, we construct prototypes and semantic prototypes to provide rich
generalization knowledge and ensure the continuity of prototype spaces. To
maintain the model stability and consistency of convergence, three
regularizations are introduced as training losses, i.e., prototype-based
regularization, semantic prototype-based regularization, and federated
inter-task regularization. Extensive experiments show that the proposed method
achieves state-of-the-art performance in various settings.
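The abstract names the multi-level losses but not their exact forms. Below is a minimal PyTorch sketch of how prototype construction and the first two regularizers could look; every function name, the MSE and cosine-distance choices, and the `class_to_semantic` mapping are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    """Mean embedding per class in the current local task
    (rows for classes unseen in this task remain zero)."""
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    counts = torch.zeros(num_classes, device=features.device)
    protos.index_add_(0, labels, features)
    counts.index_add_(0, labels, torch.ones_like(labels, dtype=features.dtype))
    seen = counts > 0
    protos[seen] /= counts[seen].unsqueeze(1)
    return protos

def prototype_reg_loss(features, labels, global_protos):
    """Prototype-based regularization: pull each local embedding
    toward the globally aggregated prototype of its class."""
    return F.mse_loss(features, global_protos[labels])

def semantic_reg_loss(local_protos, semantic_protos, class_to_semantic):
    """Semantic prototype-based regularization: align class prototypes with
    coarser semantic-level prototypes so prototype spaces stay continuous
    across dynamic tasks (in practice, mask classes unseen in the task)."""
    targets = semantic_protos[class_to_semantic]  # [num_classes, dim]
    return (1.0 - F.cosine_similarity(local_protos, targets, dim=1)).mean()

# A client's training objective might then combine cross-entropy with the
# multi-level regularizers; lam1/lam2 are hypothetical hyperparameters:
# loss = ce_loss + lam1 * prototype_reg_loss(feats, labels, global_protos) \
#      + lam2 * semantic_reg_loss(local_protos, semantic_protos, mapping)
```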
Related papers
- STHFL: Spatio-Temporal Heterogeneous Federated Learning [39.32313754519315]
Federated learning is a framework that protects data privacy while allowing multiple devices to cooperatively train machine learning models.
Previous studies have proposed multiple approaches to address the challenges posed by non-IID data and inter-domain issues.
We propose a novel setting named Spatio-Temporal Heterogeneous Federated Learning (STHFL). Specifically, the Global-Local Dynamic Prototype (GLDP) framework is designed for STHFL.
arXiv Detail & Related papers (2025-01-10T08:15:02Z) - FedSA: A Unified Representation Learning via Semantic Anchors for Prototype-based Federated Learning [4.244188591221394]
We propose a novel framework named Federated Learning via Semantic Anchors (FedSA) to decouple the generation of prototypes from local representation learning.
FedSA significantly outperforms existing prototype-based FL methods on various classification tasks.
arXiv Detail & Related papers (2025-01-09T16:10:03Z) - Addressing Skewed Heterogeneity via Federated Prototype Rectification with Personalization [35.48757125452761]
Federated learning is an efficient framework designed to facilitate collaborative model training across multiple distributed devices.
A significant challenge of federated learning is data-level heterogeneity, i.e., skewed or long-tailed distributions of private data.
We propose a novel Federated Prototype Rectification with Personalization framework which consists of two parts: Federated Personalization and Federated Prototype Rectification.
arXiv Detail & Related papers (2024-08-15T06:26:46Z) - Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
arXiv Detail & Related papers (2024-05-26T13:11:55Z) - Adaptive Test-Time Personalization for Federated Learning [51.25437606915392]
We introduce a novel setting called test-time personalized federated learning (TTPFL).
In TTPFL, clients locally adapt a global model in an unsupervised way without relying on any labeled data at test time.
We propose a novel algorithm called ATP to adaptively learn the adaptation rates for each module in the model from distribution shifts among source domains.
arXiv Detail & Related papers (2023-10-28T20:42:47Z) - Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z) - Generalizable Heterogeneous Federated Cross-Correlation and Instance
Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel framework for federated correlation and similarity learning with non-target distillation.
For the heterogeneity issue, it leverages irrelevant unlabeled public data for communication.
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non Target Distillation; a minimal sketch of the non-target distillation idea follows this entry.
arXiv Detail & Related papers (2023-09-28T09:32:27Z) - Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
- Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
In this paper, a prototype-based federated learning framework is proposed that achieves better inference performance with only a few changes to the last global iteration of the typical federated learning process; a sketch of prototype aggregation and nearest-prototype inference follows this entry.
arXiv Detail & Related papers (2023-03-22T04:06:29Z) - FedCL: Federated Multi-Phase Curriculum Learning to Synchronously
- FedCL: Federated Multi-Phase Curriculum Learning to Synchronously Correlate User Heterogeneity [17.532659808426605]
Federated Learning (FL) is a decentralized method for training machine learning models.
In FL, a global model iteratively collects the parameters of local models without accessing their local data.
We propose an active and synchronous correlation approach to address the challenge of user heterogeneity in FL.
arXiv Detail & Related papers (2022-11-14T10:06:41Z) - Efficient Split-Mix Federated Learning for On-Demand and In-Situ
Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework in which multiple participants collaboratively learn without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.