Continual Learning with Neuromorphic Computing: Foundations, Methods, and Emerging Applications
- URL: http://arxiv.org/abs/2410.09218v3
- Date: Sat, 19 Jul 2025 03:19:37 GMT
- Title: Continual Learning with Neuromorphic Computing: Foundations, Methods, and Emerging Applications
- Authors: Mishal Fatima Minhas, Rachmad Vidya Wicaksana Putra, Falah Awwad, Osman Hasan, Muhammad Shafique
- Abstract summary: Neuromorphic Continual Learning (NCL) appears as an emerging solution by leveraging the principles of Spiking Neural Networks (SNNs). This survey covers several hybrid approaches that combine supervised and unsupervised learning paradigms. It also covers optimization techniques including SNN operations reduction, weight quantization, and knowledge distillation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The challenging deployment of compute- and memory-intensive methods from Deep Neural Network (DNN)-based Continual Learning (CL) underscores the critical need for a paradigm shift towards more efficient approaches. Neuromorphic Continual Learning (NCL) emerges as a solution by leveraging the principles of Spiking Neural Networks (SNNs), which enable efficient CL algorithms to execute in dynamically changing environments on resource-constrained computing systems. Motivated by the need for a holistic study of NCL, this survey first provides a detailed background on CL, encompassing the desiderata, settings, metrics, scenario taxonomy, the Online Continual Learning (OCL) paradigm, and recent DNN-based methods that address catastrophic forgetting (CF). It then analyzes these methods with respect to the CL desiderata, computational and memory costs, and network complexity, emphasizing the need for energy-efficient CL. Afterward, it provides a background on low-power neuromorphic systems, including encoding techniques, neuronal dynamics, network architectures, learning rules, hardware processors, software and hardware frameworks, datasets, benchmarks, and evaluation metrics. The survey then comprehensively reviews and analyzes the state of the art in NCL, providing the key ideas, implementation frameworks, and performance assessments. It covers several hybrid approaches that combine supervised and unsupervised learning paradigms, as well as optimization techniques including SNN operations reduction, weight quantization, and knowledge distillation. It then discusses the progress of real-world NCL applications. Finally, it offers a future perspective on the open research challenges for NCL, with the aim of being useful to the wider neuromorphic AI research community and inspiring future research in bio-plausible OCL.
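In their simplest form, the spike encoding and neuronal dynamics referenced above reduce to rate coding and leaky integrate-and-fire (LIF) updates. A minimal Python sketch with illustrative constants (not taken from the survey):

```python
import numpy as np

# Hypothetical parameters for a leaky integrate-and-fire (LIF) neuron;
# actual values vary across the neuromorphic systems surveyed.
TAU_M = 20.0      # membrane time constant (ms)
V_THRESH = 0.5    # firing threshold
V_RESET = 0.0     # reset potential
DT = 1.0          # simulation step (ms)

def lif_step(v, input_current):
    """One Euler step of LIF membrane dynamics; returns (new_v, spiked)."""
    v = v + (DT / TAU_M) * (-v + input_current)
    if v >= V_THRESH:
        return V_RESET, True
    return v, False

def rate_encode(pixel, n_steps=100, rng=None):
    """Rate coding: pixel intensity in [0, 1] -> Bernoulli spike train."""
    rng = rng or np.random.default_rng(0)
    return rng.random(n_steps) < pixel

# Drive one neuron with a rate-encoded input of intensity 0.8.
v, spikes = 0.0, 0
for s in rate_encode(0.8):
    v, fired = lif_step(v, float(s))
    spikes += fired
print(f"output spikes over 100 steps: {spikes}")
```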
Related papers
- Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey [69.45421620616486]
This work presents the first structured taxonomy and analysis of discrete tokenization methods designed for large language models (LLMs). We categorize 8 representative VQ variants that span classical and modern paradigms and analyze their algorithmic principles, training dynamics, and integration challenges with LLM pipelines. We identify key challenges including codebook collapse, unstable gradient estimation, and modality-specific encoding constraints.
arXiv Detail & Related papers (2025-07-21T10:52:14Z) - Semi-parametric Memory Consolidation: Towards Brain-like Deep Continual Learning [59.35015431695172]
We propose a novel biomimetic continual learning framework that integrates semi-parametric memory and the wake-sleep consolidation mechanism.
For the first time, our method enables deep neural networks to retain high performance on novel tasks while maintaining prior knowledge in challenging real-world continual learning scenarios.
arXiv Detail & Related papers (2025-04-20T19:53:13Z) - Three-Factor Learning in Spiking Neural Networks: An Overview of Methods and Trends from a Machine Learning Perspective [0.07499722271664144]
Three-factor learning rules in Spiking Neural Networks (SNNs) have emerged as a crucial extension to traditional Hebbian learning.
These mechanisms enhance biological plausibility and facilitate improved credit assignment in artificial neural systems.
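In compact form, pre- and postsynaptic activity (factors one and two) build an eligibility trace, and a third, modulatory signal (e.g., reward or surprise) gates when that trace is committed to the weights. A minimal sketch with illustrative shapes and constants, not taken from the cited overview:

```python
import numpy as np

# Minimal three-factor update: a Hebbian eligibility trace gated by a
# global modulatory signal (e.g., reward). Shapes and constants are
# illustrative, not from the cited paper.
rng = np.random.default_rng(1)
n_pre, n_post = 8, 4
w = rng.normal(0.0, 0.1, size=(n_post, n_pre))
trace = np.zeros_like(w)
LR, TRACE_DECAY = 0.01, 0.9

def three_factor_step(pre_spikes, post_spikes, modulator):
    """Factors 1-2 (pre/post activity) build the trace; factor 3 gates it."""
    global trace, w
    trace = TRACE_DECAY * trace + np.outer(post_spikes, pre_spikes)
    w += LR * modulator * trace

pre = (rng.random(n_pre) < 0.3).astype(float)
post = (rng.random(n_post) < 0.3).astype(float)
three_factor_step(pre, post, modulator=+1.0)  # rewarded step
```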
arXiv Detail & Related papers (2025-04-06T08:10:16Z) - Online Continual Learning: A Systematic Literature Review of Approaches, Challenges, and Benchmarks [1.3631535881390204]
Online Continual Learning (OCL) is a critical area in machine learning. This study conducts the first comprehensive Systematic Literature Review on OCL.
arXiv Detail & Related papers (2025-01-09T01:03:14Z) - Similarity-based context aware continual learning for spiking neural networks [12.259720271932661]
We propose a Similarity-based Context Aware Spiking Neural Network (SCA-SNN) continual learning algorithm.
Based on contextual similarity across tasks, the SCA-SNN model can adaptively reuse neurons from previous tasks that are beneficial for new tasks.
Our algorithm has the capability to adaptively select similar groups of neurons for related tasks, offering a promising approach to enhancing the biological interpretability of efficient continual learning.
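As a rough illustration of similarity-gated reuse (not SCA-SNN's actual rule), one could compare the new task's mean feature vector with stored per-task prototypes and reuse a past task's neuron group only when the similarity clears a threshold:

```python
import numpy as np

# Illustrative task-similarity gate: compare the new task's mean feature
# vector with stored task prototypes and reuse the neuron group assigned
# to the most similar past task. All names and sizes are hypothetical.
def select_reusable_group(new_task_features, prototypes, groups, threshold=0.7):
    mu = new_task_features.mean(axis=0)
    sims = [mu @ p / (np.linalg.norm(mu) * np.linalg.norm(p)) for p in prototypes]
    best = int(np.argmax(sims))
    return groups[best] if sims[best] >= threshold else None  # else grow new neurons

rng = np.random.default_rng(6)
protos = [rng.random(32) for _ in range(3)]   # one prototype per past task
groups = [list(range(i * 10, (i + 1) * 10)) for i in range(3)]
reused = select_reusable_group(rng.random((50, 32)), protos, groups)
```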
arXiv Detail & Related papers (2024-10-28T09:38:57Z) - Continual Learning with Hebbian Plasticity in Sparse and Predictive Coding Networks: A Survey and Perspective [1.3986052523534573]
An emerging class of neuromorphic continual learning systems must learn to integrate new information on the fly.
This survey covers a number of recent works in the field of neuromorphic continual learning based on state-of-the-art Sparse and Predictive Coding technology.
It is hoped that this survey will contribute towards future research in the field of neuromorphic continual learning.
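As a generic illustration of Hebbian plasticity under sparse activity (not any specific surveyed model), a k-winners-take-all layer with an Oja-style stabilizing term can be sketched as follows:

```python
import numpy as np

# Toy sparse-coding layer: k-winners-take-all activity followed by a
# Hebbian weight update with Oja's stabilizing decay. Sizes illustrative.
rng = np.random.default_rng(2)
n_in, n_hidden, K = 16, 8, 2
W = rng.normal(0.0, 0.1, size=(n_hidden, n_in))

def hebbian_sparse_step(x, lr=0.05):
    global W
    h = W @ x
    active = np.argsort(h)[-K:]           # keep only the k strongest units
    y = np.zeros(n_hidden)
    y[active] = h[active]
    # Hebb term plus Oja-style decay keeps weights bounded.
    W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)

hebbian_sparse_step(rng.random(n_in))
```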
arXiv Detail & Related papers (2024-07-24T14:20:59Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - EchoSpike Predictive Plasticity: An Online Local Learning Rule for Spiking Neural Networks [4.644628459389789]
Spiking Neural Networks (SNNs) are attractive due to their potential in applications requiring low power and memory.
"EchoSpike Predictive Plasticity" (ESPP) learning rule is a pioneering online local learning rule.
ESPP represents a significant advancement in developing biologically plausible self-supervised learning models for neuromorphic computing at the edge.
arXiv Detail & Related papers (2024-05-22T20:20:43Z) - A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research aims to bridge this gap by introducing a comprehensive and overarching framework that encompasses and reconciles these existing methodologies.
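Of the method families such a framework must reconcile, memory replay is the easiest to sketch: a reservoir-sampled buffer maintains a uniform sample of the stream, and old examples are mixed into each new-task batch. A minimal sketch with an illustrative capacity (not tied to the cited paper):

```python
import random

# Minimal reservoir-sampling replay buffer, one of the memory-replay
# mechanisms such frameworks unify. Capacity and sampling are illustrative.
class ReplayBuffer:
    def __init__(self, capacity=200, seed=0):
        self.capacity, self.buffer = capacity, []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Reservoir sampling keeps a uniform sample over the whole stream."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = ReplayBuffer()
for i in range(1000):
    buf.add((f"x_{i}", i % 10))   # (input, label) pairs from the stream
mixed_batch = buf.sample(32)      # replayed alongside new-task data
```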
arXiv Detail & Related papers (2024-03-20T02:21:44Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
The self-organizing map (SOM) is a neural model often used for clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show an almost twofold increase in accuracy.
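The vanilla SOM update that the continual SOM generalizes is compact: find the best-matching unit (BMU) for an input, then pull the BMU and its grid neighbors toward that input. A plain online sketch with illustrative sizes (the paper's continual-learning machinery is omitted):

```python
import numpy as np

# A plain online SOM update step; the continual SOM in the paper adds
# low-memory continual-learning mechanisms on top, omitted here.
rng = np.random.default_rng(3)
GRID, DIM = 10, 784                     # 10x10 map over flattened MNIST-like inputs
weights = rng.random((GRID, GRID, DIM))
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij"), -1)

def som_step(x, lr=0.1, sigma=1.5):
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
    grid_d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
    h = np.exp(-grid_d2 / (2 * sigma**2))                   # neighborhood kernel
    weights[:] += lr * h[..., None] * (x - weights)

som_step(rng.random(DIM))
```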
arXiv Detail & Related papers (2024-02-19T19:11:22Z) - Metalearning Continual Learning Algorithms [42.710124929514066]
We propose Automated Continual Learning (ACL) to train self-referential neural networks to metalearn their own continual learning algorithms. ACL encodes continual learning (CL) desiderata -- good performance on both old and new tasks -- into its metalearning objectives. Our experiments demonstrate that ACL effectively resolves "in-context catastrophic forgetting," a problem that naive in-context learning algorithms suffer from.
arXiv Detail & Related papers (2023-12-01T01:25:04Z) - Adaptive Reorganization of Neural Pathways for Continual Learning with Spiking Neural Networks [9.889775504641925]
We propose a brain-inspired continual learning algorithm with adaptive reorganization of neural pathways.
The proposed model demonstrates consistent superiority in performance, energy consumption, and memory capacity on diverse continual learning tasks.
arXiv Detail & Related papers (2023-09-18T07:56:40Z) - Enhancing Efficient Continual Learning with Dynamic Structure Development of Spiking Neural Networks [6.407825206595442]
Children possess the ability to learn multiple cognitive tasks sequentially.
Existing continual learning frameworks are usually applicable to Deep Neural Networks (DNNs).
We propose Dynamic Structure Development of Spiking Neural Networks (DSD-SNN) for efficient and adaptive continual learning.
arXiv Detail & Related papers (2023-08-09T07:36:40Z) - A Survey on In-context Learning [77.78614055956365]
In-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP).
We first present a formal definition of ICL and clarify its correlation to related studies.
We then organize and discuss advanced techniques, including training strategies, prompt designing strategies, and related analysis.
arXiv Detail & Related papers (2022-12-31T15:57:09Z) - The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning in equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z) - Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation that ensures a certain fairness across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z) - Continual Learning for Recurrent Neural Networks: a Review and Empirical Evaluation [12.27992745065497]
Continual Learning with recurrent neural networks could pave the way for a large number of applications where incoming data is non-stationary.
We organize the literature on CL for sequential data processing by providing a categorization of the contributions and a review of the benchmarks.
We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.
arXiv Detail & Related papers (2021-03-12T19:25:28Z) - Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
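Rate-based ANN-to-SNN conversion typically rests on scaling each layer so that ReLU activations map onto integrate-and-fire spike rates. The sketch below shows that generic recipe (data-based weight normalization plus soft-reset IF neurons) under illustrative values; it is not the paper's exact framework:

```python
import numpy as np

# Generic rate-based ANN-to-SNN conversion sketch: scale each layer by
# observed maximum activations, then read out integrate-and-fire rates.
def normalize_layer(w, b, max_act_prev, max_act_this):
    """Data-based weight normalization using recorded max activations."""
    return w * (max_act_prev / max_act_this), b / max_act_this

def if_layer_rates(w, b, x, t_steps=200):
    """Approximate post-conversion spike rates for a constant-rate input."""
    v = np.zeros(w.shape[0])
    spikes = np.zeros(w.shape[0])
    for _ in range(t_steps):
        v += w @ x + b                 # integrate input each timestep
        fired = v >= 1.0
        spikes += fired
        v[fired] -= 1.0                # soft reset preserves residual charge
    return spikes / t_steps            # spike rate ~ ReLU activation

rng = np.random.default_rng(4)
w, b = rng.normal(0, 0.5, (4, 8)), np.zeros(4)
w, b = normalize_layer(w, b, max_act_prev=1.0, max_act_this=2.0)
print(if_layer_rates(w, b, rng.random(8)))
```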
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
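Elastic weight consolidation, the weight-importance method named above, penalizes drift from the old-task parameters in proportion to a diagonal Fisher estimate of their importance. A minimal sketch of the penalty term (optimizer plumbing omitted, values illustrative):

```python
import numpy as np

# Minimal elastic weight consolidation (EWC) penalty: quadratic anchoring
# of parameters to their old-task values, weighted by Fisher importance.
def ewc_penalty(params, old_params, fisher, lam=100.0):
    """L_total = L_new + (lam/2) * sum_i F_i * (theta_i - theta*_i)^2"""
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

rng = np.random.default_rng(5)
theta_star = rng.normal(size=10)            # parameters after the old task
fisher = rng.random(10)                     # diagonal Fisher estimate
theta = theta_star + 0.1 * rng.normal(size=10)
print(ewc_penalty(theta, theta_star, fisher))
```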
arXiv Detail & Related papers (2020-06-22T10:05:12Z) - Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking neural networks are cognitive algorithms that mimic the operational principles of neurons and synapses.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)