Continual Competitive Memory: A Neural System for Online Task-Free
Lifelong Learning
- URL: http://arxiv.org/abs/2106.13300v1
- Date: Thu, 24 Jun 2021 20:12:17 GMT
- Title: Continual Competitive Memory: A Neural System for Online Task-Free
Lifelong Learning
- Authors: Alexander G. Ororbia
- Abstract summary: We propose a novel form of unsupervised learning, continual competitive memory (CCM).
The resulting neural system is shown to offer an effective approach for combating catastrophic forgetting in online continual classification problems.
We demonstrate that the proposed CCM system not only outperforms other competitive learning neural models but also yields performance that is competitive with several modern, state-of-the-art lifelong learning approaches.
- Score: 91.3755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we propose a novel form of unsupervised learning, continual
competitive memory (CCM), as well as a computational framework to unify related
neural models that operate under the principles of competition. The resulting
neural system is shown to offer an effective approach for combating
catastrophic forgetting in online continual classification problems. We
demonstrate that the proposed CCM system not only outperforms other competitive
learning neural models but also yields performance that is competitive with
several modern, state-of-the-art lifelong learning approaches on benchmarks
such as Split MNIST and Split NotMNIST. CCM yields a promising path forward for
acquiring representations that are robust to interference from data streams,
especially when the task is unknown to the model and must be inferred without
external guidance.
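For readers new to competitive learning, here is the classic winner-take-all update that the competition principle behind CCM builds on. This is a minimal NumPy sketch, not the authors' CCM system; the memory size, input dimension, and learning rate are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(10, 784))                 # 10 memory units, 784-dim inputs (illustrative sizes)
M /= np.linalg.norm(M, axis=1, keepdims=True)  # unit-normalize each memory vector

def wta_update(M, x, lr=0.05):
    """One winner-take-all step: only the best-matching unit moves toward x."""
    j = int(np.argmax(M @ x))      # competition: select the single winner
    M[j] += lr * (x - M[j])        # the winner adapts toward the input
    M[j] /= np.linalg.norm(M[j])   # re-normalize the winner
    return j

x = rng.normal(size=784)
x /= np.linalg.norm(x)
winner = wta_update(M, x)
```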
Related papers
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
The self-organizing map (SOM) is a neural model often used for clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show nearly a twofold increase in accuracy.
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
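For reference, the classic online SOM update with a Gaussian neighborhood, as a minimal NumPy sketch. The continual SOM above adds task-free, low-memory online mechanisms that this baseline does not capture; the grid size and hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)  # 8x8 map coordinates
W = rng.normal(size=(64, 784))  # one prototype vector per map node

def som_step(W, x, lr=0.1, sigma=1.5):
    """Classic online SOM update: best-matching unit plus Gaussian neighborhood."""
    bmu = int(np.argmin(np.sum((W - x) ** 2, axis=1)))  # best-matching unit
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)        # squared grid distance to the BMU
    h = np.exp(-d2 / (2.0 * sigma ** 2))                # neighborhood strengths
    W += lr * h[:, None] * (x - W)                      # pull the neighborhood toward x
    return bmu

bmu = som_step(W, rng.normal(size=784))
```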
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error, without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
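A minimal sketch of the shared-backbone, multiple-prediction-heads pattern described above. Averaging the head outputs is our assumed ensemble rule, not necessarily MEMTL's; all shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W_shared = rng.normal(scale=0.1, size=(32, 16))                  # shared backbone weights
heads = [rng.normal(scale=0.1, size=(16, 4)) for _ in range(3)]  # three prediction heads

def forward(x):
    """Shared features feed every head; the ensemble averages head outputs."""
    h = np.tanh(x @ W_shared)                  # shared representation
    outs = np.stack([h @ Wh for Wh in heads])  # per-head predictions
    return outs.mean(axis=0)                   # simple averaging ensemble (an assumption)

y = forward(rng.normal(size=(5, 32)))          # batch of 5 inputs -> (5, 4)
```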
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free class-incremental learning (CIL) approach that learns continually via the synergy between two complementary learning subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
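The entry above is terse, so the following is only a generic illustration of pairing a stable (frozen) subnetwork with a plastic one; the mixing rule and update scheme below are assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
W_stable = rng.normal(scale=0.1, size=(32, 10))   # frozen after earlier tasks
W_plastic = rng.normal(scale=0.1, size=(32, 10))  # trained on the current task
ALPHA = 0.5                                       # mixing weight (an assumption)

def predict(x):
    """Blend the frozen pathway with the plastic one."""
    return ALPHA * (x @ W_stable) + (1.0 - ALPHA) * (x @ W_plastic)

def plastic_step(x, target, lr=0.01):
    """Squared-error gradient step applied to the plastic subnetwork only."""
    err = predict(x) - target                               # prediction error, shape (10,)
    W_plastic[...] -= lr * (1.0 - ALPHA) * np.outer(x, err)

plastic_step(rng.normal(size=32), np.eye(10)[3])
```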
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from the qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm to verify that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where the learning dynamics are not known.
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
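A rough illustration of the sampling-based verification idea above: draw points on a candidate set's boundary and check that the estimated learning dynamics point inward everywhere. The dynamics and the disk-shaped candidate set are toy stand-ins.

```python
import numpy as np

def flow(z):
    """Toy stand-in for learning dynamics: a damped rotation toward the origin."""
    x, y = z
    return np.array([-0.5 * x - y, x - 0.5 * y])

def looks_trapping(radius=1.0, n=1000, seed=0):
    """Sample the boundary of a disk; require the flow to point inward at every sample."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    pts = radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    # Inward means a negative dot product with the outward normal (the point itself).
    return all(np.dot(flow(p), p) < 0 for p in pts)

print(looks_trapping())  # True for this toy system
```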
- Composite FORCE learning of chaotic echo state networks for time-series prediction [7.650966670809372]
This paper proposes a composite FORCE learning method to train echo state networks (ESNs) whose initial activity is spontaneously chaotic.
Numerical results show that it significantly improves learning and prediction performance compared with existing methods.
arXiv Detail & Related papers (2022-07-06T03:44:09Z)
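For context, a minimal FORCE update in the style of Sussillo and Abbott's original recursive-least-squares method, driving a chaotic reservoir toward a target signal; the composite adaptation that gives the paper its name is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
W_rec = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))  # gain > 1: spontaneously chaotic reservoir
w_fb = rng.uniform(-1.0, 1.0, size=N)                    # feedback weights for the readout
w_out = np.zeros(N)                                      # trainable linear readout
P = np.eye(N)                                            # RLS inverse-correlation matrix
r = np.tanh(rng.normal(size=N))                          # reservoir state

def force_step(target, dt=0.1):
    """One FORCE step: advance the reservoir, then correct the readout via RLS."""
    global r
    z = w_out @ r                                          # readout, fed back into the network
    r = (1 - dt) * r + dt * np.tanh(W_rec @ r + w_fb * z)  # leaky reservoir with feedback
    z = w_out @ r
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)         # RLS gain vector
    P[...] -= np.outer(k, Pr)       # rank-1 update of the inverse correlation
    w_out[...] -= (z - target) * k  # shrink the instantaneous error
    return z

for t in range(500):
    force_step(np.sin(0.1 * t))
```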
- Mixture-of-Variational-Experts for Continual Learning [0.0]
We propose an optimality principle that facilitates a trade-off between learning and forgetting.
We propose a neural network layer for continual learning, called Mixture-of-Variational-Experts (MoVE).
Our experiments on variants of the MNIST and CIFAR10 datasets demonstrate the competitive performance of MoVE layers.
arXiv Detail & Related papers (2021-10-25T06:32:06Z)
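A generic gated mixture-of-experts layer, sketched in NumPy. MoVE's variational experts and its learning-versus-forgetting objective go beyond this snippet; the expert count and shapes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
E, D_in, D_out = 4, 32, 16
experts = rng.normal(scale=0.1, size=(E, D_in, D_out))  # one weight matrix per expert
W_gate = rng.normal(scale=0.1, size=(D_in, E))          # gating network (linear here)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def mixture_layer(x):
    """The gate decides how much each expert contributes to the output."""
    g = softmax(x @ W_gate)                        # (E,) mixture weights
    return np.einsum('e,eio,i->o', g, experts, x)  # weighted sum of expert outputs

y = mixture_layer(rng.normal(size=D_in))           # -> (D_out,)
```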
- Improving Music Performance Assessment with Contrastive Learning [78.8942067357231]
This study investigates contrastive learning as a potential method to improve existing music performance assessment (MPA) systems.
We introduce a weighted contrastive loss suitable for regression tasks, applied to a convolutional neural network.
Our results show that contrastive methods can match and exceed state-of-the-art performance on MPA regression tasks.
arXiv Detail & Related papers (2021-08-03T19:24:25Z)
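One plausible form of a contrastive loss adapted to regression: an InfoNCE-style objective in which pair weights decay with label distance, so samples with similar scores are pulled together. The Gaussian label kernel is our assumption; the paper's exact weighting may differ.

```python
import numpy as np

def weighted_contrastive_loss(Z, y, tau=0.1, sigma=0.5):
    """Weight each pair by a Gaussian kernel on label distance (an assumption)."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit-normalize embeddings
    sim = Z @ Z.T / tau                               # temperature-scaled cosine similarity
    np.fill_diagonal(sim, -1e9)                       # exclude self-pairs from the softmax
    w = np.exp(-((y[:, None] - y[None, :]) ** 2) / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)                          # no self-weight
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -(w * log_p).sum() / w.sum()

rng = np.random.default_rng(0)
loss = weighted_contrastive_loss(rng.normal(size=(8, 16)), rng.uniform(size=8))
```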
- Nested Mixture of Experts: Cooperative and Competitive Learning of Hybrid Dynamical System [2.055949720959582]
We propose a nested mixture of experts (NMOE) for representing and learning hybrid dynamical systems.
An NMOE combines both white-box and black-box models while optimizing the bias-variance trade-off.
An NMOE provides a structured method for incorporating various types of prior knowledge by training the associative experts cooperatively or competitively.
arXiv Detail & Related papers (2020-11-20T19:36:45Z)
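A toy version of gating a white-box (physics) expert against a black-box (learned) expert for one-step dynamics prediction. NMOE's cooperative and competitive training schemes are not captured here; the oscillator prior and sigmoid gate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W_bb = rng.normal(scale=0.1, size=(2, 2))  # black-box expert: a tiny linear model
w_gate = np.zeros(2)                       # gate parameters (zeros -> gate of 0.5)

def white_box(x, dt=0.05):
    """Known physics prior: one Euler step of an undamped oscillator."""
    A = np.array([[1.0, dt], [-dt, 1.0]])
    return A @ x

def nmoe_predict(x):
    """A sigmoid gate blends the physics prior with the learned expert."""
    g = 1.0 / (1.0 + np.exp(-(w_gate @ x)))  # gate in [0, 1]
    return g * white_box(x) + (1.0 - g) * (W_bb @ x)

x_next = nmoe_predict(np.array([1.0, 0.0]))
```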
- Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z)
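For context, the elastic weight consolidation (EWC) penalty that the entry refers to, in minimal NumPy form: a quadratic pull toward the previous task's weights, scaled by a diagonal Fisher-information estimate.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=100.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

rng = np.random.default_rng(0)
theta_star = rng.normal(size=1000)                 # weights after the previous task
fisher = rng.uniform(size=1000)                    # diagonal Fisher estimate
theta = theta_star + 0.01 * rng.normal(size=1000)  # current weights
penalty = ewc_penalty(theta, theta_star, fisher)   # added to the new task's loss
```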
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.