Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs
- URL: http://arxiv.org/abs/2009.11443v1
- Date: Thu, 24 Sep 2020 01:39:16 GMT
- Title: Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs
- Authors: Hung Le and Svetha Venkatesh
- Abstract summary: Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
- Score: 64.56890245622822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Neural Networks are uniquely adroit at machine learning by
processing data through a network of artificial neurons. The inter-neuronal
connection weights represent the learnt Neural Program that instructs the
network on how to compute the data. However, without an external memory to
store Neural Programs, they are restricted to only one, overwriting learnt
programs when trained on new data. This is functionally equivalent to a
special-purpose computer. Here we design Neurocoder, an entirely new class of
general-purpose conditional computational machines in which the neural network
"codes" itself in a data-responsive way by composing relevant programs from a
set of shareable, modular programs. This can be considered analogous to
building Lego structures from simple Lego bricks. Notably, our bricks change
their shape through learning. External memory is used to create, store and
retrieve modular programs. Like today's stored-program computers, Neurocoder
can now access diverse programs to process different data. Unlike manually
crafted computer programs, Neurocoder creates programs through training.
Integrating Neurocoder into current neural architectures, we demonstrate new
capacity to learn modular programs, handle severe pattern shifts and remember
old programs as new ones are learnt, and show substantial performance
improvement in solving object recognition, playing video games and continual
learning tasks. Such integration with Neurocoder increases the computation
capability of any current neural network and endows it with entirely new
capacity to reuse simple programs to build complex ones. For the first time a
Neural Program is treated as a datum in memory, paving the way for modular,
recursive and procedural neural programming.
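To make the stored-program idea concrete, below is a minimal NumPy sketch of one way a network could compose an input-specific weight matrix from a bank of stored modular programs via soft attention. This is our reading of the abstract, not the authors' implementation: the names (ProgramMemory, compose) and the key-matching softmax retrieval rule are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' implementation) of the stored-program
# idea: an external memory holds modular "programs" (weight matrices), and
# a controller composes an input-specific program via soft attention.

class ProgramMemory:
    def __init__(self, n_programs, d_in, d_out, seed=0):
        rng = np.random.default_rng(seed)
        # Each memory slot stores one modular program: a (d_out, d_in) matrix.
        self.programs = rng.normal(0.0, 0.1, size=(n_programs, d_out, d_in))
        # Keys used to match incoming data against stored programs.
        self.keys = rng.normal(0.0, 0.1, size=(n_programs, d_in))

    def compose(self, x):
        # Data-responsive retrieval: score each stored program against x,
        # then mix the programs into one working weight matrix.
        scores = self.keys @ x                            # (n_programs,)
        attn = np.exp(scores - scores.max())
        attn /= attn.sum()                                # softmax weights
        return np.tensordot(attn, self.programs, axes=1)  # (d_out, d_in)

    def forward(self, x):
        W = self.compose(x)    # the composed, input-specific program
        return np.tanh(W @ x)  # run the composed program on the data

# Different inputs retrieve different program mixtures.
mem = ProgramMemory(n_programs=4, d_in=8, d_out=3)
x = np.random.default_rng(1).normal(size=8)
print(mem.forward(x))
```

The point of the sketch is the separation the abstract describes: programs live in an external memory as data, while the controller decides per input which mixture of them to run, so different data can be processed by different programs.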
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently achieves nearly zero forgetting in continual learning of spiking neural networks (see the orthogonal-projection sketch after this list).
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- A Sparse Quantized Hopfield Network for Online-Continual Memory [0.0]
Nervous systems learn online, where a stream of noisy data points is presented in a non-independent, identically distributed (non-i.i.d.) way.
Deep networks, on the other hand, typically use non-local learning algorithms and are trained in an offline, noise-free, i.i.d. setting.
We implement this kind of model in a novel neural network called the Sparse Quantized Hopfield Network (SQHN).
arXiv Detail & Related papers (2023-07-27T17:46:17Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than expected: even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- A Neural Lambda Calculus: Neurosymbolic AI meets the foundations of computing and functional programming [0.0]
We analyze the ability of neural networks to learn how to execute programs as a whole.
We introduce the use of integrated neural learning and λ-calculus formalization.
arXiv Detail & Related papers (2023-04-18T20:30:16Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- A Robust Learning Rule for Soft-Bounded Memristive Synapses Competitive with Supervised Learning in Standard Spiking Neural Networks [0.0]
A view in theoretical neuroscience sees the brain as a function-computing device.
Being able to approximate functions is a fundamental axiom to build upon for future brain research.
In this work we apply a novel supervised learning algorithm - based on controlling niobium-doped strontium titanate memristive synapses - to learning non-trivial multidimensional functions.
arXiv Detail & Related papers (2022-04-12T10:21:22Z)
- Memory and attention in deep learning [19.70919701635945]
Memory construction for machines is inevitable.
Recent progress on modeling memory in deep learning has revolved around external memory constructions.
The aim of this thesis is to advance the understanding on memory and attention in deep learning.
arXiv Detail & Related papers (2021-07-03T09:21:13Z)
- Applications of Deep Neural Networks with Keras [0.0]
Deep learning allows a neural network to learn hierarchies of information in a way that mimics the function of the human brain.
This course introduces the student to classic neural network structures, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Generative Adversarial Networks (GAN).
arXiv Detail & Related papers (2020-09-11T22:09:10Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies (see the multi-scale sketch after this list).
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy (illustrated in the first sketch after this list).
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
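Three hedged sketches for entries in the list above follow. First, for the apical dendrite activation entry: the geometric point behind a single neuron learning XOR can be checked directly with any non-monotonic activation. The sketch uses a generic Gaussian bump and hand-picked weights, not the paper's ADA function or a training procedure.

```python
import numpy as np

# Illustration (not the paper's ADA): a single unit with a non-monotonic
# activation can represent XOR, which no monotonic single unit can.

def bump(z):
    return np.exp(-z ** 2)              # non-monotonic: peaks at z = 0

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

w, b = np.array([1.0, 1.0]), -1.0       # pre-activation z = x1 + x2 - 1
z = X @ w + b                           # z = -1, 0, 0, 1 on the four inputs
pred = (bump(z) > 0.5).astype(int)      # bump peaks on the XOR-true inputs
print(pred, bool((pred == y_xor).all()))  # [0 1 1 0] True
```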
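Second, for the Hebbian orthogonal-projection entry: the generic form of the idea is to estimate the principal subspace of activities seen on old tasks and project new gradients onto its orthogonal complement. The sketch below substitutes an SVD for the paper's Hebbian/anti-Hebbian lateral-connection dynamics, so it shows the projection step, not the paper's learning rule.

```python
import numpy as np

# Generic orthogonal gradient projection for continual learning (a sketch,
# not the cited spiking-network method): updates are stripped of components
# along the principal directions of old-task activity, so responses to old
# inputs are (approximately) preserved.

def principal_subspace(acts, k):
    # acts: (n_samples, d) activities recorded on the old task.
    centered = acts - acts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T                       # (d, k) orthonormal basis U

def project_gradient(grad, basis):
    # g <- g (I - U U^T): remove components in the old-task subspace.
    return grad - grad @ basis @ basis.T  # grad: (d_out, d_in)

rng = np.random.default_rng(0)
old_acts = rng.normal(size=(200, 16))     # stand-in for old-task activities
U = principal_subspace(old_acts, k=4)
g = rng.normal(size=(8, 16))              # raw gradient from a new task
g_safe = project_gradient(g, U)
print(np.abs(g_safe @ U).max())           # ~0: old directions untouched
```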
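Third, for the incremental multi-scale RNN entry: a sketch of a hidden state separated into modules that update at different time scales. The leaky-integration rule and the per-module parameterization are our illustrative choices, not necessarily the paper's; in the incremental setting, slower modules would be appended between training stages.

```python
import numpy as np

# Sketch of a multi-scale recurrent state: the hidden state is split into
# modules, each updated with a different leak rate, so small-leak modules
# retain progressively longer dependencies. Illustrative, not the paper's.

class MultiScaleRNN:
    def __init__(self, d_in, d_hid, leaks, seed=0):
        rng = np.random.default_rng(seed)
        self.leaks = leaks                               # one leak per module
        self.W_in = rng.normal(0, 0.1, size=(len(leaks), d_hid, d_in))
        self.W_rec = rng.normal(0, 0.1, size=(len(leaks), d_hid, d_hid))
        self.h = np.zeros((len(leaks), d_hid))

    def step(self, x):
        for m, leak in enumerate(self.leaks):
            target = np.tanh(self.W_in[m] @ x + self.W_rec[m] @ self.h[m])
            # leak = 1.0 is a fast module; leak = 0.1 changes slowly.
            self.h[m] = (1 - leak) * self.h[m] + leak * target
        return self.h.reshape(-1)          # concatenation of all modules

rnn = MultiScaleRNN(d_in=4, d_hid=8, leaks=[1.0, 0.5, 0.1])
rng = np.random.default_rng(1)
for t in range(5):
    out = rnn.step(rng.normal(size=4))
print(out.shape)                           # (24,) = 3 modules x 8 units
```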