Rehearsal-Free Continual Federated Learning with Synergistic Regularization
- URL: http://arxiv.org/abs/2412.13779v1
- Date: Wed, 18 Dec 2024 12:16:41 GMT
- Title: Rehearsal-Free Continual Federated Learning with Synergistic Regularization
- Authors: Yichen Li, Yuying Wang, Tianzhe Xiao, Haozhao Wang, Yining Qi, Ruixuan Li
- Abstract summary: Continual Federated Learning (CFL) allows distributed devices to collaboratively learn novel concepts from continuously shifting training data. We propose a simple yet effective regularization algorithm for CFL named FedSSI, which tailors synaptic intelligence to CFL with heterogeneous data settings.
- Score: 14.258111055761479
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual Federated Learning (CFL) allows distributed devices to collaboratively learn novel concepts from continuously shifting training data while avoiding knowledge forgetting of previously seen tasks. To tackle this challenge, most current CFL approaches rely on extensive rehearsal of previous data. Despite its effectiveness, rehearsal comes at a cost to memory and may also violate data privacy. Considering these drawbacks, we seek to apply regularization techniques to CFL for their cost-efficient properties: they require neither sample caching nor rehearsal. Specifically, we first apply traditional regularization techniques to CFL and observe that existing techniques, especially synaptic intelligence, can achieve promising results under homogeneous data distributions but fail when the data is heterogeneous. Based on this observation, we propose a simple yet effective regularization algorithm for CFL named FedSSI, which tailors synaptic intelligence to CFL with heterogeneous data settings. FedSSI not only reduces computational overhead without rehearsal but also addresses the data heterogeneity issue. Extensive experiments show that FedSSI achieves superior performance compared to state-of-the-art methods.
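The regularizer FedSSI builds on is synaptic intelligence (SI). For concreteness, below is a minimal PyTorch sketch of classic SI (Zenke et al., 2017): an online importance estimate is accumulated from the path integral of gradient times parameter change during training, then used to penalize drift from the weights consolidated at each task boundary. Class and hyperparameter names are illustrative, and FedSSI's heterogeneity-aware modifications are not shown here.

```python
import torch

class SynapticIntelligence:
    """Minimal SI regularizer: per-parameter importance from the path
    integral of -grad * parameter update, consolidated at task boundaries."""

    def __init__(self, model, strength=1.0, damping=0.1):
        self.model = model
        self.strength = strength   # regularization coefficient (lambda)
        self.damping = damping     # xi, keeps the denominator away from zero
        params = dict(model.named_parameters())
        self.w = {n: torch.zeros_like(p) for n, p in params.items()}
        self.omega = {n: torch.zeros_like(p) for n, p in params.items()}
        self.anchor = {n: p.detach().clone() for n, p in params.items()}
        self.prev = {n: p.detach().clone() for n, p in params.items()}

    def accumulate(self):
        """Call after each optimizer step: w += -grad * delta_theta."""
        for n, p in self.model.named_parameters():
            if p.grad is not None:
                delta = p.detach() - self.prev[n]
                self.w[n] -= p.grad.detach() * delta
                self.prev[n] = p.detach().clone()

    def consolidate(self):
        """Call at a task boundary: fold w into omega and reset anchors."""
        for n, p in self.model.named_parameters():
            change = p.detach() - self.anchor[n]
            self.omega[n] += self.w[n] / (change.pow(2) + self.damping)
            self.anchor[n] = p.detach().clone()
            self.w[n].zero_()

    def penalty(self):
        """Surrogate loss added to the task loss while training a new task."""
        loss = 0.0
        for n, p in self.model.named_parameters():
            loss = loss + (self.omega[n] * (p - self.anchor[n]).pow(2)).sum()
        return self.strength * loss
```

During local training one would add `si.penalty()` to the task loss, call `si.accumulate()` after each optimizer step, and call `si.consolidate()` when a task ends.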
Related papers
- FedBiP: Heterogeneous One-Shot Federated Learning with Personalized Latent Diffusion Models [37.76576626976729]
One-Shot Federated Learning (OSFL), a special decentralized machine learning paradigm, has recently gained significant attention.
Current methods face challenges due to client data heterogeneity and limited data quantity when applied to real-world OSFL systems.
We propose Federated Bi-Level Personalization (FedBiP), which personalizes the pretrained Latent Diffusion Model (LDM) at both the instance level and the concept level.
arXiv Detail & Related papers (2024-10-07T07:45:18Z)
- Contrastive Federated Learning with Tabular Data Silos [14.430230014234189]
We propose Contrastive Federated Learning with Tabular Data Silos (CFL) as a solution for learning from vertically partitioned data silos.
CFL handles data silos with sample misalignment without sharing original or representative data, thereby preserving privacy.
arXiv Detail & Related papers (2024-09-10T00:24:59Z)
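The summary does not spell out the loss; contrastive methods of this kind typically build on an InfoNCE-style objective. A hedged sketch in which two silos encode their own columns of the same aligned samples and matching rows act as positives (all names illustrative):

```python
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.5):
    """InfoNCE loss: z_a[i] and z_b[i] are two parties' embeddings of sample i.
    Only embeddings cross the silo boundary, never raw tabular features."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)       # diagonal entries are positives
```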
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and the accompanying analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
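Neither the exact sparsifier nor the privacy mechanism is given in the summary; a common building block for such schemes is top-k sparsification of the transmitted update, optionally with Gaussian noise for differential privacy. A minimal sketch under those assumptions (names and constants illustrative):

```python
import torch

def sparsify_top_k(update: torch.Tensor, k_frac: float = 0.1,
                   noise_std: float = 0.0) -> torch.Tensor:
    """Keep the k largest-magnitude entries of an update, zero the rest.
    `noise_std > 0` adds Gaussian noise as a stand-in for a DP mechanism."""
    flat = update.flatten()
    k = max(1, int(k_frac * flat.numel()))
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx] + noise_std * torch.randn(k)
    return out.view_as(update)
```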
- Federated Learning with Reduced Information Leakage and Computation [17.069452700698047]
Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.
This paper introduces Upcycled-FL, a strategy that applies a first-order approximation at every even round of the model update.
Under this strategy, half of the FL updates incur no information leakage and require much less computational and transmission costs.
arXiv Detail & Related papers (2023-10-10T06:22:06Z)
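As summarized above, even rounds reuse information from earlier rounds instead of fresh client training, so half of the updates leak no new client information and cost little. Below is a hedged sketch in which the even-round update is a simple extrapolation of the previous global movement; the paper derives an exact first-order approximation, so the form and coefficient here are stand-ins.

```python
import numpy as np

def fedavg(updates):
    """Uniformly average client model vectors (weights omitted for brevity)."""
    return np.mean(updates, axis=0)

def upcycled_step(x_t, x_prev, round_idx, client_train_fns, beta=0.5):
    """Hedged sketch of the even-round reuse idea.

    Odd rounds: clients train on x_t and the server averages the results.
    Even rounds: the server extrapolates from the two latest global models,
    so clients neither compute nor transmit anything that round.
    """
    if round_idx % 2 == 1:
        updates = [train(x_t) for train in client_train_fns]
        return fedavg(updates)
    return x_t + beta * (x_t - x_prev)   # reuse the previous round's movement
```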
- CyclicFL: A Cyclic Model Pre-Training Approach to Efficient Federated Learning [33.250038477336425]
Federated learning (FL) has been proposed to enable distributed learning on Artificial Intelligence Internet of Things (AIoT) devices with high-level data privacy guarantees.
Existing FL methods suffer from both slow convergence and poor accuracy, especially in non-IID scenarios.
We propose a novel method named CyclicFL, which can quickly derive effective initial models to guide the SGD processes.
arXiv Detail & Related papers (2023-01-28T13:28:34Z)
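At the level of the abstract, CyclicFL pre-trains the model by passing it from client to client for brief local SGD before ordinary federated training starts, which quickly yields an effective initial model. A hedged sketch follows; the route length, client selection, and step counts are illustrative, and `client.local_sgd` is a hypothetical client method.

```python
import random

def cyclic_pretrain(model, clients, cycles=3, local_steps=50):
    """Hedged sketch of cyclic pre-training: the model is handed off
    sequentially between clients for a few local SGD steps each."""
    for _ in range(cycles):
        route = random.sample(clients, k=min(5, len(clients)))  # short client chain
        for client in route:
            # hypothetical client API: run `steps` SGD steps on local data
            model = client.local_sgd(model, steps=local_steps)
    return model  # initialization for the subsequent standard FL phase
```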
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- MaxMatch: Semi-Supervised Learning with Worst-Case Consistency [149.03760479533855]
We propose a worst-case consistency regularization technique for semi-supervised learning (SSL).
We present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately.
Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants.
arXiv Detail & Related papers (2022-09-26T12:04:49Z)
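The objective described above minimizes the largest inconsistency between an unlabeled sample and its augmented variants. A hedged sketch of such a worst-case consistency term is below; the KL divergence, the stop-gradient anchor, and `augment` (a user-supplied stochastic transform) are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def worst_case_consistency(model, x_unlabeled, augment, num_views=4):
    """Penalize the LARGEST divergence between a sample's anchor prediction
    and the predictions on several augmented views (per-sample max)."""
    with torch.no_grad():
        p_ref = F.softmax(model(x_unlabeled), dim=1)        # anchor prediction
    divergences = []
    for _ in range(num_views):
        log_q = F.log_softmax(model(augment(x_unlabeled)), dim=1)
        kl = F.kl_div(log_q, p_ref, reduction="none").sum(dim=1)
        divergences.append(kl)
    return torch.stack(divergences).max(dim=0).values.mean()
```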
- Low-Latency Cooperative Spectrum Sensing via Truncated Vertical Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence of T-VFL is characterized via mathematical analysis and validated by simulation results.
arXiv Detail & Related papers (2022-08-07T10:39:27Z)
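The framework above is vertical FL: each SU holds a different slice of the features for the same sensing events. The truncation that gives T-VFL its latency savings is not detailed in this summary, so the hedged sketch below shows only a standard VFL forward pass; module names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class VerticalFLHead(nn.Module):
    """Each party encodes only its local feature slice; the fusion center
    concatenates the uploaded embeddings and classifies. Raw features
    never leave their owners."""
    def __init__(self, num_parties=4, feat_dim=16, emb_dim=8, num_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Linear(feat_dim, emb_dim) for _ in range(num_parties)
        )
        self.classifier = nn.Linear(num_parties * emb_dim, num_classes)

    def forward(self, party_features):
        # party_features: list of (batch, feat_dim) tensors, one per SU
        embeddings = [enc(x) for enc, x in zip(self.encoders, party_features)]
        return self.classifier(torch.cat(embeddings, dim=1))
```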
- Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay [52.251188477192336]
Few-shot class-incremental learning (FSCIL) aims to enable a deep learning system to incrementally learn new classes with limited data.
We show through empirical results that adopting data replay is surprisingly favorable.
We propose data-free replay, which synthesizes replay data with a generator without accessing real data.
arXiv Detail & Related papers (2022-07-22T17:30:51Z)
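Mechanically, data-free replay distills old-class knowledge through samples synthesized by a generator and soft-labeled by the frozen previous model, so no real data is stored. A hedged sketch of that distillation term; the paper's entropy regularization and generator training are omitted, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def data_free_replay_loss(generator, old_model, new_model, batch=64, z_dim=100):
    """Distill the frozen old model into the new one on generated samples."""
    z = torch.randn(batch, z_dim)
    fake = generator(z)                               # synthetic replay samples
    with torch.no_grad():
        teacher = F.softmax(old_model(fake), dim=1)   # old-task soft labels
    student = F.log_softmax(new_model(fake), dim=1)
    return F.kl_div(student, teacher, reduction="batchmean")
```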
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Towards Heterogeneous Clients with Elastic Federated Learning [45.2715985913761]
Federated learning involves training machine learning models over devices or data silos, such as edge processors or data warehouses, while keeping the data local.
We propose Elastic Federated Learning (EFL), an unbiased algorithm to tackle the heterogeneity in the system.
It is an efficient and effective algorithm that compresses both upstream and downstream communications.
arXiv Detail & Related papers (2021-06-17T12:30:40Z)
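EFL is described as compressing both upstream and downstream communications, but the summary does not name the compressor. The hedged sketch below uses symmetric 8-bit quantization as a stand-in that works in either direction; the scheme and constants are assumptions.

```python
import torch

def quantize_8bit(t: torch.Tensor):
    """Symmetric per-tensor 8-bit quantization of a model update."""
    scale = t.abs().max().clamp(min=1e-12) / 127.0
    q = torch.clamp((t / scale).round(), -127, 127).to(torch.int8)
    return q, scale   # transmit q (1 byte per entry) plus one scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale
```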