Reservoir memory machines
- URL: http://arxiv.org/abs/2003.04793v1
- Date: Wed, 12 Feb 2020 01:45:00 GMT
- Title: Reservoir memory machines
- Authors: Benjamin Paassen and Alexander Schulz
- Abstract summary: We propose reservoir memory machines, which are able to solve some of the benchmark tests for Neural Turing Machines.
Our model can also be seen as an extension of echo state networks with an external memory, enabling arbitrarily long storage without interference.
- Score: 79.79659145328856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Neural Turing Machines have gathered attention by joining
the flexibility of neural networks with the computational capabilities of
Turing machines. However, Neural Turing Machines are notoriously hard to train,
which limits their applicability. We propose reservoir memory machines, which
are still able to solve some of the benchmark tests for Neural Turing Machines,
but are much faster to train, requiring only an alignment algorithm and linear
regression. Our model can also be seen as an extension of echo state networks
with an external memory, enabling arbitrarily long storage without
interference.
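Since the abstract describes the model as an echo state network whose readout is fitted by linear regression, the following minimal numpy sketch shows that reservoir-plus-readout part. The external memory and the alignment algorithm that turn this into a reservoir memory machine are the paper's contribution and are not reproduced here; the sizes, spectral radius and ridge parameter are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, n_out = 3, 200, 2                      # illustrative sizes
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))        # fixed random input weights
    W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed random reservoir weights
    W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1 (echo state property)

    def run_reservoir(U):
        # Drive the reservoir with an input sequence U of shape (T, n_in); return all states (T, n_res).
        h = np.zeros(n_res)
        states = []
        for u in U:
            h = np.tanh(W_in @ u + W_res @ h)
            states.append(h)
        return np.array(states)

    # Readout training: ridge regression from reservoir states to targets (placeholder data,
    # washout period omitted for brevity).
    U_train = rng.standard_normal((500, n_in))
    Y_train = rng.standard_normal((500, n_out))
    H = run_reservoir(U_train)
    ridge = 1e-6
    W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ Y_train)
    Y_pred = H @ W_out                                   # readout predictions

The abstract indicates that, even with the external memory added, training still reduces to an alignment algorithm followed by this kind of linear regression.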
Related papers
- Token Turing Machines [53.22971546637947]
The Token Turing Machine (TTM) is a sequential, autoregressive Transformer model with memory for real-world sequential visual understanding.
Our model is inspired by the seminal Neural Turing Machine, and has an external memory consisting of a set of tokens which summarise the previous history.
arXiv Detail & Related papers (2022-11-16T18:59:18Z)
- Reservoir Stack Machines [77.12475691708838]
Memory-augmented neural networks equip a recurrent neural network with an explicit memory to support tasks that require information storage.
We introduce the reservoir stack machine, a model which can provably recognize all deterministic context-free languages.
Our results show that the reservoir stack machine achieves zero error, even on test sequences longer than the training data.
arXiv Detail & Related papers (2021-05-04T16:50:40Z)
- Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs [64.56890245622822]
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
arXiv Detail & Related papers (2020-09-24T01:39:16Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- A provably stable neural network Turing Machine [13.615420026818038]
We introduce a neural stack architecture, including a differentiable parametrized stack operator that approximates stack push and pop operations.
Using the neural stack with a recurrent neural network, we introduce a neural network Pushdown Automaton (nnPDA) and prove that an nnPDA with a finite/bounded number of neurons and time steps can simulate any PDA.
We further prove that a differentiable neural network Turing Machine (nnTM) with bounded neurons can simulate a Turing Machine (TM) in real time.
arXiv Detail & Related papers (2020-06-05T19:45:49Z)
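The neural stack entry above rests on making push and pop differentiable. A common construction (in the spirit of Grefenstette et al.'s neural stack, and not necessarily the exact operator of that paper) stores a continuous "strength" with every pushed vector and lets scalar push/pop signals in [0, 1] modulate those strengths; a minimal numpy sketch:

    import numpy as np

    def soft_stack_step(values, strengths, push, pop, v_new):
        # values: list of stored vectors (bottom to top); strengths: matching list of floats.
        # push, pop: scalars in [0, 1] emitted by a controller; v_new: vector to (softly) push.
        # Pop: remove up to `pop` units of strength, starting from the top.
        new_strengths, remaining = [], pop
        for s in reversed(strengths):
            removed = min(s, remaining)
            remaining -= removed
            new_strengths.append(s - removed)
        new_strengths = list(reversed(new_strengths))
        # Push: append the new vector with strength `push`.
        values = values + [v_new]
        new_strengths.append(push)
        # Read: strength-weighted sum of the topmost unit of total strength.
        read, budget = np.zeros_like(v_new), 1.0
        for v, s in zip(reversed(values), reversed(new_strengths)):
            w = min(s, budget)
            read = read + w * v
            budget -= w
            if budget <= 0.0:
                break
        return values, new_strengths, read

    # Example: a controller would emit push/pop at every step; here they are fixed for illustration.
    vals, strs, r = soft_stack_step([], [], push=1.0, pop=0.0, v_new=np.array([1.0, 0.0]))
    vals, strs, r = soft_stack_step(vals, strs, push=0.3, pop=0.0, v_new=np.array([0.0, 1.0]))

Because every operation is a weighted sum of stored vectors, gradients can flow through the stack into a controller network, which is what makes end-to-end training possible.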
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston house-price prediction and of training a two-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
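"One-step" above refers to the analog crosspoint array producing the regression solution in a single physical operation. The hardware mechanism is not modelled here, but the digital counterpart of the quantity being computed is the closed-form least-squares solution of the normal equations; a rough numpy illustration with made-up data:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 5))                                  # hypothetical design matrix (samples x features)
    y = X @ rng.standard_normal(5) + 0.01 * rng.standard_normal(100)   # noisy targets

    # Closed-form least squares: w = (X^T X)^{-1} X^T y, obtained without iterative training.
    w = np.linalg.solve(X.T @ X, X.T @ y)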
- On the computational power and complexity of Spiking Neural Networks [0.0]
We introduce spiking neural networks as a machine model where, in contrast to the familiar Turing machine, information and the manipulation thereof are co-located in the machine.
We introduce canonical problems, define hierarchies of complexity classes and provide some first completeness results.
arXiv Detail & Related papers (2020-01-23T10:40:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.