Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory
- URL: http://arxiv.org/abs/2006.16800v1
- Date: Mon, 29 Jun 2020 08:35:49 GMT
- Title: Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory
- Authors: Antonio Carta, Alessandro Sperduti, Davide Bacciu
- Abstract summary: We propose a novel incrementally trained recurrent architecture that explicitly targets multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
- Score: 79.42778415729475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The effectiveness of recurrent neural networks can be largely
influenced by their ability to store, in their dynamic memory, information
extracted from input sequences at different frequencies and timescales. Such a
feature can be introduced into a neural architecture through an appropriate
modularization of the dynamic memory. In this paper we propose a novel
incrementally trained recurrent architecture that explicitly targets
multi-scale learning. First, we show how to extend the architecture of a
simple RNN by separating its hidden state into different modules, each
subsampling the network's hidden activations at a different frequency. Then,
we discuss a training algorithm in which new modules are iteratively added to
the model to learn progressively longer dependencies. Each new module works at
a slower frequency than the previous ones and is initialized to encode the
subsampled sequence of hidden activations. Experimental results on synthetic
and real-world datasets for speech recognition and handwritten character
recognition show that the modular architecture and the incremental training
algorithm improve the ability of recurrent neural networks to capture
long-term dependencies.
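The abstract describes two technical ingredients: a hidden state partitioned into modules that subsample the hidden activations at different frequencies, and a stage-wise schedule that appends progressively slower modules. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation: the power-of-two update schedule, the connectivity (each module reads the input plus the state of the next-faster module), the readout rebuilt at every stage, and the names MultiScaleRNN, add_scale, and incremental_train are all assumptions, and the paper's autoencoder-based initialization of new modules is omitted.

```python
# Minimal sketch of a multi-scale modular RNN with incremental training.
# All design choices below are illustrative assumptions based on the abstract.
import torch
import torch.nn as nn


class MultiScaleRNN(nn.Module):
    """Hidden state split into modules; module m is updated every 2**m steps."""

    def __init__(self, input_size, module_size):
        super().__init__()
        self.input_size = input_size
        self.module_size = module_size
        self.cells = nn.ModuleList()   # one RNN cell per timescale, added incrementally
        self.add_scale()               # start with the fastest module

    def add_scale(self):
        """Append a new module that updates at half the frequency of the previous one."""
        m = len(self.cells)
        # Module m reads the raw input plus the state of the next-faster module (if any).
        in_size = self.input_size + (self.module_size if m > 0 else 0)
        self.cells.append(nn.RNNCell(in_size, self.module_size))

    def forward(self, x):
        # x: (seq_len, batch, input_size)
        seq_len, batch, _ = x.shape
        states = [x.new_zeros(batch, self.module_size) for _ in self.cells]
        outputs = []
        for t in range(seq_len):
            for m, cell in enumerate(self.cells):
                if t % (2 ** m) == 0:  # module m subsamples: it only updates every 2**m steps
                    inp = x[t] if m == 0 else torch.cat([x[t], states[m - 1]], dim=-1)
                    states[m] = cell(inp, states[m])
            outputs.append(torch.cat(states, dim=-1))
        return torch.stack(outputs)    # (seq_len, batch, module_size * num_modules)


def incremental_train(model, loader, num_scales, epochs_per_scale, num_classes):
    """Add one slower module per stage, then train the whole model on the task loss."""
    loss_fn = nn.CrossEntropyLoss()
    for stage in range(num_scales):
        if stage > 0:
            model.add_scale()          # grow the dynamic memory with a slower module
        # Rebuild the readout so it matches the enlarged hidden state.
        readout = nn.Linear(model.module_size * len(model.cells), num_classes)
        opt = torch.optim.Adam(list(model.parameters()) + list(readout.parameters()), lr=1e-3)
        for _ in range(epochs_per_scale):
            for x, y in loader:        # x: (seq_len, batch, input_size), y: (batch,)
                opt.zero_grad()
                logits = readout(model(x)[-1])   # classify from the last time step
                loss_fn(logits, y).backward()
                opt.step()
    return model, readout
```

In this sketch, module m is stepped only every 2**m time steps, so later modules see a subsampled, slower view of the sequence, and the incremental loop re-optimizes the whole model after each new module is added.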
Related papers
- Modular Growth of Hierarchical Networks: Efficient, General, and Robust Curriculum Learning [0.0]
We show that for a given classical, non-modular recurrent neural network (RNN), an equivalent modular network will perform better across multiple metrics.
We demonstrate that the inductive bias introduced by the modular topology is strong enough for the network to perform well even when the connectivity within modules is fixed.
Our findings suggest that gradual modular growth of RNNs could provide advantages for learning increasingly complex tasks on evolutionary timescales.
arXiv Detail & Related papers (2024-06-10T13:44:07Z)
- Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning [38.09011520275557]
Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones.
We propose a new kind of connectionist model by tailoring neural unit dynamics that adapt the behavior of neural networks for CIL.
arXiv Detail & Related papers (2024-06-04T15:47:03Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Oscillatory Fourier Neural Network: A Compact and Efficient Architecture for Sequential Processing [16.69710555668727]
We propose a novel neuron model that has a cosine activation with a time-varying component for sequential processing.
The proposed neuron provides an efficient building block for projecting sequential inputs into the spectral domain.
Applying the proposed model to sentiment analysis on the IMDB dataset reaches 89.4% test accuracy within 5 epochs.
arXiv Detail & Related papers (2021-09-14T19:08:07Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Separation of Memory and Processing in Dual Recurrent Neural Networks [0.0]
We explore a neural network architecture that stacks a recurrent layer and a feedforward layer that is also connected to the input.
When noise is introduced into the activation function of the recurrent units, these neurons are forced into a binary activation regime that makes the networks behave much as finite automata.
arXiv Detail & Related papers (2020-05-17T11:38:42Z)
- Encoding-based Memory Modules for Recurrent Neural Networks [79.42778415729475]
We study the memorization subtask from the point of view of the design and training of recurrent neural networks.
We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences (see the sketch after this list).
arXiv Detail & Related papers (2020-01-31T11:14:27Z)
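The last entry above, the Linear Memory Network, relies on a linear autoencoder for sequences as its memorization component. The sketch below illustrates that general idea only: a purely linear recurrence is trained so that the current input and the previous state can be reconstructed from the current state. The matrix names A, B, C, D and the gradient-based training loop are illustrative assumptions, not the paper's actual construction.

```python
# Illustrative sketch of a "linear autoencoder for sequences": a linear
# recurrence whose state allows reconstruction of the input history.
import torch
import torch.nn as nn


class LinearSequenceAutoencoder(nn.Module):
    def __init__(self, input_size, state_size):
        super().__init__()
        self.A = nn.Linear(input_size, state_size, bias=False)   # input  -> state
        self.B = nn.Linear(state_size, state_size, bias=False)   # state  -> state
        self.C = nn.Linear(state_size, input_size, bias=False)   # state  -> reconstructed input
        self.D = nn.Linear(state_size, state_size, bias=False)   # state  -> reconstructed previous state

    def forward(self, x):
        # x: (seq_len, batch, input_size); returns the mean reconstruction loss.
        h = x.new_zeros(x.size(1), self.B.in_features)
        loss = 0.0
        for t in range(x.size(0)):
            h_prev, h = h, self.A(x[t]) + self.B(h)               # linear encoding step
            loss = loss + ((self.C(h) - x[t]) ** 2).mean() \
                        + ((self.D(h) - h_prev) ** 2).mean()      # decode x_t and h_{t-1}
        return loss / x.size(0)
```

This matches the role described in the main abstract, where each newly added memory module is initialized to encode the subsampled sequence of hidden activations from the previous stage.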