FedProK: Trustworthy Federated Class-Incremental Learning via Prototypical Feature Knowledge Transfer
- URL: http://arxiv.org/abs/2405.02685v1
- Date: Sat, 4 May 2024 14:57:09 GMT
- Title: FedProK: Trustworthy Federated Class-Incremental Learning via Prototypical Feature Knowledge Transfer
- Authors: Xin Gao, Xin Yang, Hao Yu, Yan Kang, Tianrui Li
- Abstract summary: Federated Class-Incremental Learning (FCIL) focuses on continually transferring previous knowledge to learn new classes in dynamic Federated Learning (FL).
We propose FedProK (Federated Prototypical Feature Knowledge Transfer), which leverages prototypical features as a novel representation of knowledge to perform spatial-temporal knowledge transfer.
- Score: 22.713451501707908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Class-Incremental Learning (FCIL) focuses on continually transferring previous knowledge to learn new classes in dynamic Federated Learning (FL). However, existing methods do not consider the trustworthiness of FCIL, i.e., improving continual utility, privacy, and efficiency simultaneously, all of which are strongly affected by catastrophic forgetting and data heterogeneity among clients. To address this issue, we propose FedProK (Federated Prototypical Feature Knowledge Transfer), which leverages prototypical features as a novel representation of knowledge to perform spatial-temporal knowledge transfer. Specifically, FedProK consists of two components: (1) a feature translation procedure on the client side, which performs temporal knowledge transfer from the learned classes, and (2) prototypical knowledge fusion on the server side, which performs spatial knowledge transfer among clients. Extensive experiments conducted in both synchronous and asynchronous settings demonstrate that FedProK outperforms other state-of-the-art methods from all three perspectives of trustworthiness, validating its effectiveness in selectively transferring spatial-temporal knowledge.
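As a rough illustration of the idea, the sketch below shows what prototype-based knowledge representation and server-side fusion could look like: each client summarizes a class by the mean of its feature vectors, and the server fuses per-client prototypes of the same class. The function names and the sample-count weighting are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def client_prototypes(features_by_class):
    """Client side: summarize each learned class by the mean of its
    feature vectors, i.e., a prototypical feature."""
    return {c: feats.mean(axis=0) for c, feats in features_by_class.items()}

def server_fuse(prototype_sets, sample_counts):
    """Server side: fuse per-client prototypes of the same class into one
    global prototype, weighting each client by its sample count for that
    class (spatial knowledge transfer among clients). The weighting scheme
    here is an illustrative assumption."""
    fused = {}
    all_classes = {c for protos in prototype_sets for c in protos}
    for c in all_classes:
        vecs, weights = [], []
        for protos, counts in zip(prototype_sets, sample_counts):
            if c in protos:
                vecs.append(protos[c])
                weights.append(counts[c])
        fused[c] = np.average(np.stack(vecs), axis=0, weights=weights)
    return fused

# Toy usage: two clients with overlapping classes and 8-dim features.
rng = np.random.default_rng(0)
client_a = client_prototypes({0: rng.normal(size=(20, 8)), 1: rng.normal(size=(10, 8))})
client_b = client_prototypes({1: rng.normal(size=(15, 8)), 2: rng.normal(size=(5, 8))})
global_protos = server_fuse([client_a, client_b],
                            [{0: 20, 1: 10}, {1: 15, 2: 5}])
print(sorted(global_protos))  # -> [0, 1, 2]
```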
Related papers
- Evidential Federated Learning for Skin Lesion Image Classification [9.112380151690862]
FedEvPrompt is a federated learning approach that integrates principles of evidential deep learning, prompt tuning, and knowledge distillation.
It is optimized within a round-based learning paradigm, where each round involves training local models and then sharing attention maps with all federation clients.
In conclusion, FedEvPrompt offers a promising approach for federated learning, effectively addressing challenges such as data heterogeneity, imbalance, privacy preservation, and knowledge sharing.
arXiv Detail & Related papers (2024-11-15T09:34:28Z) - Personalized Federated Continual Learning via Multi-granularity Prompt [33.84680453375976]
We propose a novel concept called multi-granularity prompt, i.e., a coarse-grained global prompt and a fine-grained local prompt, used to personalize the generalized representation.
Through the exclusive fusion of coarse-grained knowledge, we achieve the transmission and refinement of common knowledge among clients, further enhancing personalization performance.
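A minimal sketch of the general idea, assuming prompts are learnable vectors prepended to token embeddings; the module name, prompt lengths, and initialization are invented for illustration and this is not the paper's exact design:

```python
import torch

class MultiGranularityPrompt(torch.nn.Module):
    """A shared coarse-grained global prompt (fused across clients on the
    server) plus a per-client fine-grained local prompt (kept private),
    both prepended to the input token embeddings."""

    def __init__(self, dim, n_global=4, n_local=4):
        super().__init__()
        self.global_prompt = torch.nn.Parameter(torch.randn(n_global, dim) * 0.02)
        self.local_prompt = torch.nn.Parameter(torch.randn(n_local, dim) * 0.02)

    def forward(self, token_embeddings):  # (batch, seq_len, dim)
        b = token_embeddings.size(0)
        g = self.global_prompt.unsqueeze(0).expand(b, -1, -1)
        l = self.local_prompt.unsqueeze(0).expand(b, -1, -1)
        return torch.cat([g, l, token_embeddings], dim=1)

x = torch.randn(2, 16, 32)
print(MultiGranularityPrompt(32)(x).shape)  # -> torch.Size([2, 24, 32])
```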
arXiv Detail & Related papers (2024-06-27T13:41:37Z) - Open Continual Feature Selection via Granular-Ball Knowledge Transfer [16.48797678104989]
We propose a novel framework for continual feature selection (CFS) in data preprocessing.
The proposed CFS method combines the strengths of continual learning (CL) with granular-ball computing (GBC).
We show that our method is superior in terms of both effectiveness and efficiency compared to state-of-the-art feature selection methods.
arXiv Detail & Related papers (2024-03-15T12:43:03Z) - Federated Continual Learning via Knowledge Fusion: A Survey [33.74289759536269]
Federated Continual Learning (FCL) is an emerging paradigm to address model learning in both federated and continual learning environments.
The key objective of FCL is to fuse heterogeneous knowledge from different clients and retain knowledge of previous tasks while learning on new ones.
In this work, we first delineate federated learning and continual learning, and then discuss their integration, i.e., FCL, with particular attention to FCL via knowledge fusion.
arXiv Detail & Related papers (2023-12-27T08:47:39Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
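For context, the sketch below shows a plain AMSGrad update maintained independently on each client, so each client's second-moment statistics auto-tune its effective per-coordinate step size; the exact FedLALR scheduling rule differs, and this is only a hedged approximation:

```python
import numpy as np

class ClientAMSGrad:
    """AMSGrad kept independently on each client: the per-client
    second-moment statistics auto-tune the effective per-coordinate step
    size (no bias correction, as in the original AMSGrad analysis)."""

    def __init__(self, dim, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)       # first moment
        self.v = np.zeros(dim)       # second moment
        self.v_hat = np.zeros(dim)   # running max keeps step sizes non-increasing

    def step(self, params, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

def fedavg(client_params):
    """Server side: average the locally updated models."""
    return np.mean(np.stack(client_params), axis=0)
```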
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - CLIP-based Synergistic Knowledge Transfer for Text-based Person Retrieval [66.93563107820687]
We introduce a CLIP-based Synergistic Knowledge Transfer (CSKT) approach for Text-based Person Retrieval (TPR).
To explore CLIP's knowledge on the input side, we first propose a Bidirectional Prompts Transferring (BPT) module constructed from text-to-image and image-to-text bidirectional prompts and coupling projections.
CSKT outperforms state-of-the-art approaches across three benchmark datasets while its trainable parameters account for merely 7.4% of the entire model.
arXiv Detail & Related papers (2023-09-18T05:38:49Z) - Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher [52.2926020848095]
Federated learning is vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
This paper proposes a selective knowledge sharing mechanism for federated distillation (FD), termed Selective-FD.
arXiv Detail & Related papers (2023-04-04T12:04:19Z) - Communication-Efficient and Privacy-Preserving Feature-based Federated Transfer Learning [11.758703301702012]
Federated learning has attracted growing interest as it preserves the clients' privacy.
Due to the limited radio spectrum, the communication efficiency of federated learning via wireless links is critical.
We propose feature-based federated transfer learning as an innovative approach to reduce the uplink payload by more than five orders of magnitude.
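A back-of-the-envelope sketch of why uploading extracted features instead of full gradients shrinks the uplink; all numbers here are made up for illustration, and the five-orders-of-magnitude figure comes from the paper's own scheme, not from this toy calculation:

```python
# Toy payload comparison: uploading extracted features vs. raw fp32 gradients.
# All numbers are invented for illustration; actual savings depend on the
# model, feature dimension, and the paper's specific scheme.
model_params = 25_000_000               # e.g., a ResNet-scale model
grad_payload = model_params * 4         # bytes/round if fp32 gradients are sent

n_samples, feat_dim = 100, 512          # a small batch of extracted features
feature_payload = n_samples * feat_dim * 4

print(f"gradient upload : {grad_payload / 1e6:.1f} MB")
print(f"feature upload  : {feature_payload / 1e6:.3f} MB")
print(f"reduction       : {grad_payload / feature_payload:.0f}x")
```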
arXiv Detail & Related papers (2022-09-12T16:48:52Z) - Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601]
We introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, and yet data-hungry, rare classes.
Experiments show that our approach can achieve significant performance boosts on rare classes while maintaining robust common class performance.
arXiv Detail & Related papers (2021-12-13T15:48:59Z) - Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736]
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
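The decomposition can be sketched roughly as follows: a masked shared base plus the client's own sparse task-adaptive parameters and attention-weighted sparse parameters selectively received from peers. This simplifies the paper's formulation, and the helper names are invented:

```python
import numpy as np

def compose_client_weights(global_base, client_mask, own_adaptive,
                           peer_adaptives, attention):
    """Compose a client's weights from (1) the masked global base,
    (2) its own sparse task-adaptive parameters, and (3) attention-weighted
    sparse parameters selectively received from other clients."""
    theta = global_base * client_mask + own_adaptive
    for a_j, alpha_j in zip(peer_adaptives, attention):
        theta = theta + alpha_j * a_j
    return theta

# Toy usage with a 64-dim weight vector and one peer.
rng = np.random.default_rng(1)
sparse = lambda: np.where(rng.random(64) > 0.9, rng.normal(size=64), 0.0)
theta = compose_client_weights(
    global_base=rng.normal(size=64),
    client_mask=(rng.random(64) > 0.3).astype(float),
    own_adaptive=sparse(),
    peer_adaptives=[sparse()],
    attention=[0.1],
)
print(theta.shape)  # -> (64,)
```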
arXiv Detail & Related papers (2020-03-06T13:33:48Z) - Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach [55.83558520598304]
We propose a new solution that reuses experiences and transfers value functions among multiple students via model distillation.
We also describe how to design an efficient communication protocol to exploit heterogeneous knowledge.
Our proposed framework, namely Learning and Teaching Categorical Reinforcement, shows promising performance in stabilizing and accelerating learning progress.
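One plausible form of such a distillation objective, sketched under the assumption that value functions are represented as categorical distributions over value atoms (the paper's exact protocol may differ):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits):
    """Cross-entropy of the student's categorical value distribution
    against the teacher's: the student is pulled toward the teacher's
    distribution over value atoms."""
    p_teacher = softmax(teacher_logits)
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

# Toy usage: one state, a 3-atom categorical value distribution.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.5, 0.7, -0.5]])
print(distill_loss(teacher, student))
```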
arXiv Detail & Related papers (2020-02-06T11:31:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.