Dendritic Self-Organizing Maps for Continual Learning
- URL: http://arxiv.org/abs/2110.13611v1
- Date: Mon, 18 Oct 2021 14:47:19 GMT
- Title: Dendritic Self-Organizing Maps for Continual Learning
- Authors: Kosmas Pinitas, Spyridon Chavlis, Panayiota Poirazi
- Abstract summary: We propose a novel algorithm inspired by biological neurons, termed Dendritic-Self-Organizing Map (DendSOM).
DendSOM consists of a single layer of SOMs, which extract patterns from specific regions of the input space.
It outperforms classical SOMs and several state-of-the-art continual learning algorithms on benchmark datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current deep learning architectures show remarkable performance when trained
in large-scale, controlled datasets. However, the predictive ability of these
architectures significantly decreases when learning new classes incrementally.
This is due to their inclination to forget the knowledge acquired from
previously seen data, a phenomenon termed catastrophic forgetting. On the other
hand, Self-Organizing Maps (SOMs) can model the input space utilizing
constrained k-means and thus maintain past knowledge. Here, we propose a novel
algorithm inspired by biological neurons, termed Dendritic-Self-Organizing Map
(DendSOM). DendSOM consists of a single layer of SOMs, which extract patterns
from specific regions of the input space accompanied by a set of hit matrices,
one per SOM, which estimate the association between units and labels. The
best-matching unit of an input pattern is selected using the maximum cosine
similarity rule, while the point-wise mutual information is employed for class
inference. DendSOM performs unsupervised feature extraction as it does not use
labels for targeted updating of the weights. It outperforms classical SOMs and
several state-of-the-art continual learning algorithms on benchmark datasets,
such as the Split-MNIST and Split-CIFAR-10. We propose that the incorporation
of neuronal properties in SOMs may help remedy catastrophic forgetting.
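The abstract is specific enough to sketch the mechanics end to end: one SOM per input region, one hit matrix per SOM, BMU selection by maximum cosine similarity, and class inference by summing pointwise mutual information (PMI) across maps. The NumPy sketch below is an illustration of that description rather than the authors' code; the class name, the patch handling, and the omission of the neighborhood update are assumptions.

```python
import numpy as np

class DendSOM:
    """Minimal sketch of a DendSOM-style model (names and details assumed).

    Each SOM sees one patch ("region") of the input. The neighborhood
    update of a full SOM is omitted for brevity; only the BMU is moved.
    """

    def __init__(self, n_soms, patch_dim, n_units, n_classes, lr=0.1):
        rng = np.random.default_rng(0)
        self.weights = rng.normal(size=(n_soms, n_units, patch_dim))
        self.hits = np.zeros((n_soms, n_units, n_classes))  # one hit matrix per SOM
        self.lr = lr

    def _bmu(self, s, patch):
        # maximum cosine similarity rule for BMU selection
        w = self.weights[s]
        cos = w @ patch / (np.linalg.norm(w, axis=1) * np.linalg.norm(patch) + 1e-12)
        return int(np.argmax(cos))

    def partial_fit(self, patches, label):
        """patches: (n_soms, patch_dim); the label only updates hit counts."""
        for s, patch in enumerate(patches):
            u = self._bmu(s, patch)
            # unsupervised weight update: labels never steer the weights
            self.weights[s, u] += self.lr * (patch - self.weights[s, u])
            self.hits[s, u, label] += 1  # unit-label association

    def predict(self, patches):
        """Class inference: sum PMI(bmu, class) over all SOMs, take argmax."""
        scores = np.zeros(self.hits.shape[2])
        for s, patch in enumerate(patches):
            u = self._bmu(s, patch)
            total = self.hits[s].sum() + 1e-12
            p_uc = self.hits[s, u] / total            # P(unit, class)
            p_u = self.hits[s, u].sum() / total       # P(unit)
            p_c = self.hits[s].sum(axis=0) / total    # P(class)
            scores += np.log((p_uc + 1e-12) / (p_u * p_c + 1e-12))
        return int(np.argmax(scores))
```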
Related papers
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps
Self-organizing map (SOM) is a neural model often used in clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show nearly a two-fold increase in accuracy.
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data; a minimal number of available labeled data points are then assigned to key best-matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
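The two-stage recipe in the entry above (unsupervised SOM training, then attaching a handful of labels to key BMUs) can be sketched compactly. The NumPy sketch below is a generic reading of that recipe, assuming the SOM prototypes are already trained; it uses weight-space distance as a stand-in for the paper's topological projections, and all names are illustrative.

```python
import numpy as np

def label_key_bmus(prototypes, labeled_x, labeled_y):
    """Attach the few available labels to their best-matching units.

    prototypes: (n_units, dim) weights of a SOM already trained on
    unlabeled data. Real schemes may vote when labels collide; this
    sketch simply keeps the last label seen per unit.
    """
    bmu_labels = {}
    for x, y in zip(labeled_x, labeled_y):
        u = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
        bmu_labels[u] = y
    return bmu_labels

def predict(prototypes, bmu_labels, x):
    """Classify x via its BMU, falling back to the nearest labeled unit."""
    u = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    if u in bmu_labels:
        return bmu_labels[u]
    labeled = np.array(sorted(bmu_labels))
    d = np.linalg.norm(prototypes[labeled] - prototypes[u], axis=1)
    return bmu_labels[int(labeled[np.argmin(d)])]
```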
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition
Action labels are available only on a source dataset but unavailable on a target dataset during the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
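The entry above describes pretext tasks built by segmenting and permuting a skeleton sequence and asking a classifier to recover the permutation; the same idea applies to permuting human body parts along the joint axis. A minimal sketch of one such sample generator follows; the segment count, permutation vocabulary, and function name are assumptions, not the paper's configuration.

```python
import numpy as np
from itertools import permutations

def permutation_pretext_sample(seq, n_segments=3, rng=None):
    """Make one self-supervised sample: shuffle temporal segments and
    return the shuffled sequence plus the permutation's class index.

    seq: (T, joints, coords) skeleton sequence.
    """
    rng = rng or np.random.default_rng()
    vocab = {p: i for i, p in enumerate(permutations(range(n_segments)))}
    segments = np.array_split(seq, n_segments)       # temporal segments
    perm = tuple(int(i) for i in rng.permutation(n_segments))
    shuffled = np.concatenate([segments[i] for i in perm])
    return shuffled, vocab[perm]                     # input, pretext label
```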
- Improvements to Supervised EM Learning of Shared Kernel Models by Feature Space Partitioning
This paper addresses the lack of rigour in the derivation of the EM training algorithm and the computational complexity of the technique.
We first present a detailed derivation of EM for the Gaussian shared kernel model PRBF classifier.
To reduce complexity of the resulting SKEM algorithm, we partition the feature space into $R$ non-overlapping subsets of variables.
arXiv Detail & Related papers (2022-05-31T09:18:58Z)
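The complexity-reduction step named in the entry above, partitioning the feature space into $R$ non-overlapping subsets of variables, is simple to illustrate. The sketch below shows only that partitioning step; the per-subset EM fits and their recombination, which are the paper's actual contribution, are not reproduced.

```python
import numpy as np

def partition_features(X, R, rng=None):
    """Split the columns of X into R non-overlapping variable subsets."""
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(X.shape[1])
    blocks = np.array_split(idx, R)        # R disjoint column-index sets
    return [X[:, b] for b in blocks], blocks
```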
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay
A lifelong learning agent is able to continually learn from potentially infinite streams of sensory pattern data.
One long-standing difficulty in building agents that adapt is that neural systems struggle to retain previously acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
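As a rough illustration of generative replay in a SOM setting, the sketch below draws jittered pseudo-samples from the map's own prototypes so they can be interleaved with new-task data. This is a generic stand-in for the paper's internally-induced replay mechanism, not its actual procedure.

```python
import numpy as np

def replay_batch(prototypes, batch_size, noise=0.05, rng=None):
    """Draw pseudo-samples from a SOM's own weights to rehearse old tasks.

    The trained prototypes act as a crude generator with small jitter;
    the resulting batch is mixed with minibatches from the new task.
    """
    rng = rng or np.random.default_rng()
    picks = rng.integers(0, len(prototypes), size=batch_size)
    samples = prototypes[picks]
    return samples + noise * rng.standard_normal(samples.shape)
```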
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Learning Self-Expression Metrics for Scalable and Inductive Subspace Clustering
Subspace clustering has established itself as a state-of-the-art approach to clustering high-dimensional data.
We propose a novel metric learning approach to learn instead a subspace affinity function using a siamese neural network architecture.
Our model benefits from a constant number of parameters and a constant-size memory footprint, allowing it to scale to considerably larger datasets.
arXiv Detail & Related papers (2020-09-27T15:40:12Z)
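A siamese affinity function of the kind the entry above describes can be sketched in a few lines of PyTorch: a shared encoder embeds two points, and a sigmoid over their interaction scores how likely they are to share a subspace. The parameter count is independent of the number of data points, matching the entry's scalability claim; layer sizes and the interaction form are placeholder assumptions.

```python
import torch
import torch.nn as nn

class SiameseAffinity(nn.Module):
    """Hypothetical siamese affinity for subspace clustering (sketch).

    A shared encoder embeds both inputs; the affinity is a sigmoid over
    their dot product, trained so same-subspace pairs score near 1.
    """
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        return torch.sigmoid((z1 * z2).sum(dim=-1))  # affinity in (0, 1)
```
- Improving Self-Organizing Maps with Unsupervised Feature Extraction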
The Self-Organizing Map (SOM) is a brain-inspired neural model that is very promising for unsupervised learning.
We propose in this work to improve the SOM performance by using extracted features instead of raw data.
We improve the SOM classification by +6.09% and reach state-of-the-art performance on unsupervised image classification.
arXiv Detail & Related papers (2020-09-04T13:19:24Z)
- Supervised Topological Maps
Controlling the internal representation space of a neural network is a desirable feature because it makes it possible to generate new data in a supervised manner.
We will show how this can be achieved while building a low-dimensional mapping of the input stream, by deriving a generalized algorithm starting from Self-Organizing Maps (SOMs).
arXiv Detail & Related papers (2020-08-14T14:30:16Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
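The growth mechanism the entry above outlines (hidden state separated into modules, with new modules iteratively added to capture progressively longer dependencies) can be sketched as follows in PyTorch; the cell type, the freezing policy, and the grow() method are illustrative assumptions. Note that the concatenated output widens after each grow(), so any downstream readout must be resized accordingly.

```python
import torch
import torch.nn as nn

class IncrementalMultiScaleRNN(nn.Module):
    """Sketch of an incrementally grown recurrent net.

    The hidden state is split across modules; grow() appends a fresh
    module, optionally freezing earlier ones so previously learned
    dynamics are preserved.
    """
    def __init__(self, input_dim, module_dim):
        super().__init__()
        self.input_dim, self.module_dim = input_dim, module_dim
        self.cells = nn.ModuleList([nn.GRUCell(input_dim, module_dim)])

    def grow(self, freeze_old=True):
        if freeze_old:
            for p in self.cells.parameters():
                p.requires_grad = False
        self.cells.append(nn.GRUCell(self.input_dim, self.module_dim))

    def forward(self, x_seq):  # x_seq: (T, batch, input_dim)
        h = [x_seq.new_zeros(x_seq.size(1), self.module_dim) for _ in self.cells]
        outs = []
        for x in x_seq:
            h = [cell(x, hi) for cell, hi in zip(self.cells, h)]
            outs.append(torch.cat(h, dim=-1))  # concatenated module states
        return torch.stack(outs)  # (T, batch, module_dim * n_modules)
```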
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.