Neuro-mimetic Task-free Unsupervised Online Learning with Continual
Self-Organizing Maps
- URL: http://arxiv.org/abs/2402.12465v1
- Date: Mon, 19 Feb 2024 19:11:22 GMT
- Title: Neuro-mimetic Task-free Unsupervised Online Learning with Continual
Self-Organizing Maps
- Authors: Hitesh Vaidya, Travis Desell, Ankur Mali, Alexander Ororbia
- Abstract summary: The self-organizing map (SOM) is a neural model often used in clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST show nearly a twofold increase in accuracy.
- Score: 56.827895559823126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An intelligent system capable of continual learning is one that can process
and extract knowledge from potentially infinitely long streams of pattern
vectors. The major challenge that makes crafting such a system difficult is
known as catastrophic forgetting - an agent, such as one based on artificial
neural networks (ANNs), struggles to retain previously acquired knowledge when
learning from new samples. Furthermore, ensuring that knowledge is preserved
for previous tasks becomes more challenging when input is not supplemented with
task boundary information. Although forgetting in the context of ANNs has been
studied extensively, there still exists far less work investigating it in terms
of unsupervised architectures such as the venerable self-organizing map (SOM),
a neural model often used in clustering and dimensionality reduction. While the
internal mechanisms of SOMs could, in principle, yield sparse representations
that improve memory retention, we observe that, when a fixed-size SOM processes
continuous data streams, it experiences concept drift. In light of this, we
propose a generalization of the SOM, the continual SOM (CSOM), which is capable
of online unsupervised learning under a low memory budget. Our results on
benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST show nearly a
twofold increase in accuracy, and the CSOM achieves a state-of-the-art result
on CIFAR-10 when tested in the (online) unsupervised class-incremental learning setting.
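For context, the sketch below outlines the classical online SOM update that the abstract refers to: find the best-matching unit (BMU) for each incoming vector, then pull the BMU and its lattice neighbors toward that vector with a decaying learning rate and neighborhood radius. This is a minimal baseline sketch, not the CSOM itself; the paper's continual extensions (e.g., its low-memory-budget mechanisms) are not reproduced here, and the grid size, schedules, and constants are illustrative assumptions.

```python
import numpy as np

class OnlineSOM:
    """Classical self-organizing map trained one sample at a time.

    Baseline sketch only: the CSOM proposed in the paper generalizes this
    update rule, and its specific modifications are not reproduced here.
    """

    def __init__(self, grid_h=10, grid_w=10, dim=784, lr=0.5, sigma=2.0, seed=0):
        rng = np.random.default_rng(seed)
        # Prototype (weight) vector for each unit on a grid_h x grid_w lattice.
        self.weights = rng.uniform(0.0, 1.0, size=(grid_h * grid_w, dim))
        # Fixed 2-D coordinates of each unit on the map lattice.
        self.coords = np.array([(i, j) for i in range(grid_h)
                                for j in range(grid_w)], dtype=float)
        self.lr, self.sigma = lr, sigma

    def step(self, x, t, tau=500.0):
        """One online update for input vector x at time step t."""
        # 1) Best-matching unit (BMU): the unit whose prototype is closest to x.
        bmu = int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))
        # 2) Decay the learning rate and neighborhood radius over time
        #    (an illustrative exponential schedule, not the paper's).
        lr_t = self.lr * np.exp(-t / tau)
        sigma_t = max(self.sigma * np.exp(-t / tau), 1e-3)  # floor for stability
        # 3) Gaussian neighborhood on the lattice, centered at the BMU.
        d2 = np.sum((self.coords - self.coords[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma_t ** 2))
        # 4) Move every prototype toward x, weighted by its neighborhood value.
        self.weights += lr_t * h[:, None] * (x - self.weights)
        return bmu
```

Run on a class-incremental stream (e.g., MNIST digits arriving class by class), the globally decaying learning rate and shared neighborhood let later classes overwrite prototypes learned for earlier ones; this is the drift and forgetting behavior the abstract describes and the CSOM is designed to counter.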
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; and (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Unsupervised Continual Anomaly Detection with Contrastively-learned Prompt [80.43623986759691]
We introduce a novel Unsupervised Continual Anomaly Detection framework called UCAD.
The framework equips unsupervised anomaly detection (UAD) with continual learning capability through contrastively-learned prompts.
We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation.
arXiv Detail & Related papers (2024-01-02T03:37:11Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to activate and select only sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Dendritic Self-Organizing Maps for Continual Learning [0.0]
We propose a novel algorithm inspired by biological neurons, termed the Dendritic Self-Organizing Map (DendSOM).
DendSOM consists of a single layer of SOMs, each of which extracts patterns from a specific region of the input space (a rough sketch of this patch-wise arrangement appears after this list).
It outperforms classical SOMs and several state-of-the-art continual learning algorithms on benchmark datasets.
arXiv Detail & Related papers (2021-10-18T14:47:19Z)
- Fast & Slow Learning: Incorporating Synthetic Gradients in Neural Memory Controllers [41.59845953349713]
We propose to decouple the learning process of the neural memory network (NMN) controllers to allow them to achieve flexible, rapid adaptation in the presence of new information.
This trait is highly beneficial for meta-learning tasks where the memory controllers must quickly grasp abstract concepts in the target domain, and adapt stored knowledge.
arXiv Detail & Related papers (2020-11-10T22:44:27Z)
- Neuromodulated Neural Architectures with Local Error Signals for Memory-Constrained Online Continual Learning [4.2903672492917755]
We develop a biologically inspired, lightweight neural network architecture that incorporates local learning and neuromodulation.
We demonstrate the efficacy of our approach in both single-task and continual learning settings.
arXiv Detail & Related papers (2020-07-16T07:41:23Z)
- Triple Memory Networks: a Brain-Inspired Method for Continual Learning [35.40452724755021]
A neural network adjusts its parameters when learning a new task, but then fails to perform the old tasks well.
The brain has a powerful ability to continually learn new experience without catastrophic interference.
Inspired by such brain strategy, we propose a novel approach named triple memory networks (TMNs) for continual learning.
arXiv Detail & Related papers (2020-03-06T11:35:24Z)
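As noted in the DendSOM entry above, the fragment below sketches one plausible reading of "a single layer of SOMs, each extracting patterns from a specific region of the input": a bank of small SOMs, each receiving one fixed image patch. It reuses the OnlineSOM class from the sketch following the abstract; the non-overlapping tiling, patch size, and per-patch BMU output are illustrative assumptions, not details taken from that paper.

```python
class PatchSOMLayer:
    """A bank of small SOMs, one per fixed image patch.

    Illustrative arrangement only; it is not claimed to match the actual
    DendSOM construction. Assumes the OnlineSOM class from the earlier
    sketch is already defined.
    """

    def __init__(self, img_side=28, patch=7, **som_kwargs):
        self.patch = patch
        self.n_per_side = img_side // patch
        # One small SOM per non-overlapping patch; each sees only its region.
        self.soms = [OnlineSOM(dim=patch * patch, seed=k, **som_kwargs)
                     for k in range(self.n_per_side ** 2)]

    def _patches(self, img):
        """Yield each non-overlapping patch of a 2-D image, flattened."""
        p, n = self.patch, self.n_per_side
        for i in range(n):
            for j in range(n):
                yield img[i * p:(i + 1) * p, j * p:(j + 1) * p].reshape(-1)

    def step(self, img, t):
        """Update every patch-SOM on its region and return the per-patch BMUs."""
        return [som.step(x, t)
                for som, x in zip(self.soms, self._patches(img))]
```

For a 28x28 image with 7x7 patches this gives sixteen small SOMs; the list of sixteen BMU indices then acts as a sparse, localized code for the input.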