Unleashing the Power of Continual Learning on Non-Centralized Devices: A Survey
- URL: http://arxiv.org/abs/2412.13840v1
- Date: Wed, 18 Dec 2024 13:33:28 GMT
- Title: Unleashing the Power of Continual Learning on Non-Centralized Devices: A Survey
- Authors: Yichen Li, Haozhao Wang, Wenchao Xu, Tianzhe Xiao, Hong Liu, Minzhu Tu, Yuying Wang, Xin Yang, Rui Zhang, Shui Yu, Song Guo, Ruixuan Li
- Abstract summary: Non-Centralized Continual Learning (NCCL) has become an emerging paradigm for enabling distributed devices to handle streaming data from a joint non-stationary environment.
This survey focuses on the development of the non-centralized continual learning algorithms and the real-world deployment across distributed devices.
- Abstract: Non-Centralized Continual Learning (NCCL) has become an emerging paradigm for enabling distributed devices such as vehicles and servers to handle streaming data from a joint non-stationary environment. To achieve high reliability and scalability when deploying this paradigm in distributed systems, it is essential to overcome challenges stemming from both spatial and temporal dimensions, manifesting as distribution shifts, catastrophic forgetting, heterogeneity, and privacy issues. This survey provides a comprehensive examination of the development of non-centralized continual learning algorithms and their real-world deployment across distributed devices. We begin with an introduction to the background and fundamentals of non-centralized learning and continual learning. We then review existing solutions at three levels to show how current techniques alleviate catastrophic forgetting and distribution shift. Additionally, we delve into the various types of heterogeneity issues, security and privacy attributes, and real-world applications across three prevalent scenarios. Furthermore, we establish a large-scale benchmark to revisit this problem and analyze the performance of state-of-the-art NCCL approaches. Finally, we discuss important challenges and future research directions in NCCL.
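The spatial/temporal pairing the abstract describes (aggregation across devices, forgetting over time) can be illustrated with a minimal sketch. This is not taken from the survey: it combines a FedAvg-style round with a per-client replay buffer, one common way to mitigate catastrophic forgetting. All names (`Client`, `federated_round`, the linear model) are hypothetical.

```python
import random

def sgd_step(weights, batch, lr=0.1):
    """One pass of single-sample gradient steps for a linear model
    y = w . x under squared error."""
    for x, y in batch:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights

class Client:
    def __init__(self, dim, buffer_size=32):
        self.buffer = []              # replay buffer of past samples
        self.buffer_size = buffer_size
        self.dim = dim

    def local_update(self, global_weights, stream_batch):
        # Mix the new streaming batch with replayed past samples so the
        # update does not overwrite knowledge learned on earlier data.
        replay = random.sample(self.buffer,
                               min(len(self.buffer), len(stream_batch)))
        weights = sgd_step(list(global_weights), stream_batch + replay)
        self.buffer.extend(stream_batch)
        self.buffer = self.buffer[-self.buffer_size:]
        return weights

def federated_round(global_weights, clients, batches):
    # FedAvg-style aggregation: average the clients' locally updated weights.
    updates = [c.local_update(global_weights, b)
               for c, b in zip(clients, batches)]
    return [sum(ws) / len(ws) for ws in zip(*updates)]
```

In a real NCCL system, the averaging step would be replaced by a server- or gossip-based aggregator, and the replay buffer by any of the rehearsal, regularization, or architecture-level techniques the survey categorizes.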
Related papers
- Federated Continual Learning: Concepts, Challenges, and Solutions [3.379574469735166]
Federated Continual Learning (FCL) has emerged as a robust solution for collaborative model training in dynamic environments.
This survey focuses on key challenges such as heterogeneity, model stability, communication overhead, and privacy preservation.
arXiv Detail & Related papers (2025-02-10T21:51:02Z) - Distributed Learning and Inference Systems: A Networking Perspective [0.0]
This work proposes a novel framework, Data and Dynamics-Aware Inference and Training Networks (DA-ITN)
The different components of DA-ITN and their functions are explored, and the associated challenges and research areas are highlighted.
arXiv Detail & Related papers (2025-01-09T15:48:29Z) - Online Continual Learning: A Systematic Literature Review of Approaches, Challenges, and Benchmarks [1.3631535881390204]
Online Continual Learning (OCL) is a critical area in machine learning.
This study conducts the first comprehensive Systematic Literature Review on OCL.
arXiv Detail & Related papers (2025-01-09T01:03:14Z) - Position Paper: Assessing Robustness, Privacy, and Fairness in Federated Learning Integrated with Foundation Models [39.86957940261993]
Integration of Foundation Models (FMs) into Federated Learning (FL) introduces novel issues in terms of robustness, privacy, and fairness.
We analyze the trade-offs involved, uncover the threats and issues introduced by this integration, and propose a set of criteria and strategies for navigating these challenges.
arXiv Detail & Related papers (2024-02-02T19:26:00Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Learning from Heterogeneous Data Based on Social Interactions over Graphs [58.34060409467834]
This work proposes a decentralized architecture, where individual agents aim at solving a classification problem while observing streaming features of different dimensions.
We show that the proposed strategy enables the agents to learn consistently under this highly heterogeneous setting.
arXiv Detail & Related papers (2021-12-17T12:47:18Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Federated Learning for Intrusion Detection System: Concepts, Challenges and Future Directions [0.20236506875465865]
Intrusion detection systems play a significant role in ensuring security and privacy of smart devices.
This paper presents an extensive and exhaustive review of the use of FL in intrusion detection systems.
arXiv Detail & Related papers (2021-06-16T13:13:04Z) - Reinforcement Learning for Datacenter Congestion Control [50.225885814524304]
Successful congestion control algorithms can dramatically improve latency and overall network throughput.
To date, no such learning-based algorithms have shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
arXiv Detail & Related papers (2021-02-18T13:49:28Z) - Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
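The momentum-based decentralized training summarized above can be sketched generically. Note this is a plain gossip-plus-local-momentum loop, not the paper's quasi-global momentum algorithm; the function name, topology, and per-node quadratic objectives are all hypothetical.

```python
def gossip_momentum_step(params, momenta, grads, neighbors, lr=0.05, beta=0.9):
    """One decentralized round: each node takes a local momentum-SGD step,
    then averages its parameters with its neighbors (gossip).
    params/momenta/grads: per-node lists of weights; neighbors: adjacency list."""
    # Local heavy-ball momentum update on each node
    for i in range(len(params)):
        momenta[i] = [beta * m + g for m, g in zip(momenta[i], grads[i])]
        params[i] = [w - lr * m for w, m in zip(params[i], momenta[i])]
    # Gossip averaging: each node averages with its neighborhood
    new_params = []
    for i, nbrs in enumerate(neighbors):
        group = [params[i]] + [params[j] for j in nbrs]
        new_params.append([sum(ws) / len(ws) for ws in zip(*group)])
    return new_params, momenta
```

Heterogeneity can be simulated by giving each node a different local objective; gossip averaging then pulls the nodes toward a consensus minimizer of the average objective.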
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.