Learning to Control Rapidly Changing Synaptic Connections: An
Alternative Type of Memory in Sequence Processing Artificial Neural Networks
- URL: http://arxiv.org/abs/2211.09440v1
- Date: Thu, 17 Nov 2022 10:03:54 GMT
- Title: Learning to Control Rapidly Changing Synaptic Connections: An
Alternative Type of Memory in Sequence Processing Artificial Neural Networks
- Authors: Kazuki Irie, Jürgen Schmidhuber
- Abstract summary: Generalising feedforward NNs to such RNNs is mathematically straightforward and natural, and even historical.
A lesser known alternative approach to storing short-term memory in "synaptic connections" yields another "natural" type of short-term memory in sequence processing NNs.
Fast Weight Programmers (FWPs) have seen a recent revival as generic sequence processors, achieving competitive performance across various tasks.
- Score: 9.605853974038936
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Short-term memory in standard, general-purpose, sequence-processing recurrent
neural networks (RNNs) is stored as activations of nodes or "neurons."
Generalising feedforward NNs to such RNNs is mathematically straightforward and
natural, and even historical: already in 1943, McCulloch and Pitts proposed
this as a surrogate to "synaptic modifications" (in effect, generalising the
Lenz-Ising model, the first non-sequence processing RNN architecture of the
1920s). A lesser known alternative approach to storing short-term memory in
"synaptic connections" -- by parameterising and controlling the dynamics of a
context-sensitive time-varying weight matrix through another NN -- yields
another "natural" type of short-term memory in sequence processing NNs: the
Fast Weight Programmers (FWPs) of the early 1990s. FWPs have seen a recent
revival as generic sequence processors, achieving competitive performance
across various tasks. They are formally closely related to the now popular
Transformers. Here we present them in the context of artificial NNs as an
abstraction of biological NNs -- a perspective that has not been stressed
enough in previous FWP work. We first review aspects of FWPs for pedagogical
purposes, then discuss connections to related works motivated by insights from
neuroscience.
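The core mechanism behind FWPs is compact enough to sketch directly. Below is a minimal NumPy illustration of a purely additive Fast Weight Programmer, the variant formally equivalent to unnormalised linear attention: a slow network with fixed (trained) weights emits key, value and query vectors at every step, and these program a rapidly changing fast weight matrix that serves as the short-term memory. All names, dimensions and the choice of nonlinearity here are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a Fast Weight Programmer (FWP): a "slow" net with fixed
# weights emits key/value/query vectors; these control a rapidly changing
# "fast" weight matrix W that acts as short-term memory. With a purely
# additive outer-product write, one step coincides with unnormalised
# linear attention. Dimensions and names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_key, d_val = 8, 16, 16

# Slow weights (in practice these would be learned by gradient descent).
W_k = rng.normal(scale=0.1, size=(d_key, d_in))
W_v = rng.normal(scale=0.1, size=(d_val, d_in))
W_q = rng.normal(scale=0.1, size=(d_key, d_in))

def fwp_forward(xs):
    """Process a sequence; the fast weight matrix W is the short-term memory."""
    W = np.zeros((d_val, d_key))           # fast weights, reset per sequence
    outputs = []
    for x in xs:                            # x: (d_in,)
        k = np.maximum(W_k @ x, 0.0)        # key (ReLU keeps the sketch simple)
        v = W_v @ x                         # value
        q = np.maximum(W_q @ x, 0.0)        # query
        W = W + np.outer(v, k)              # "write": program the fast weights
        outputs.append(W @ q)               # "read": query the fast weights
    return np.stack(outputs)

ys = fwp_forward(rng.normal(size=(5, d_in)))
print(ys.shape)  # (5, 16)
```

More elaborate FWP variants replace the plain additive write with learned update rules (e.g. delta-rule-style corrections that first remove the value currently associated with a key), but the division of labour is the same: a slow net learns to program fast weights that change from step to step.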
Related papers
- NeuroGen: Neural Network Parameter Generation via Large Language Models [32.16082052558773]
Acquiring the parameters of neural networks (NNs) has been one of the most important problems in machine learning. This paper aims to explore the feasibility of a new direction: acquiring NN parameters via large language model generation.
arXiv Detail & Related papers (2025-05-18T15:48:10Z) - De-novo Chemical Reaction Generation by Means of Temporal Convolutional
Neural Networks [3.357271554042638]
We present here a combination of two networks, Recurrent Neural Networks (RNNs) and Temporal Convolutional Neural Networks (TCNs).
Recurrent Neural Networks are known for their autoregressive properties and are frequently used in language modelling with direct application to SMILES generation.
The relatively novel TCNs possess similar properties, with a wide receptive field, while obeying the causality required for natural language processing (NLP).
It is shown that different fine-tuning protocols have a profound impact on the generative scope of the model when applied to a dataset of interest via transfer learning.
arXiv Detail & Related papers (2023-10-26T12:15:56Z) - Episodic Memory Theory for the Mechanistic Interpretation of Recurrent
Neural Networks [3.683202928838613]
We propose the Episodic Memory Theory (EMT), illustrating that RNNs can be conceptualized as discrete-time analogs of the recently proposed General Sequential Episodic Memory Model.
We introduce a novel set of algorithmic tasks tailored to probe the variable binding behavior in RNNs.
Our empirical investigations reveal that trained RNNs consistently converge to the variable binding circuit, thus indicating universality in the dynamics of RNNs.
arXiv Detail & Related papers (2023-10-03T20:52:37Z) - Artificial Neuronal Ensembles with Learned Context Dependent Gating [0.0]
We introduce Learned Context Dependent Gating (LXDG), a method to flexibly allocate and recall "artificial neuronal ensembles".
Activities in the hidden layers of the network are modulated by gates, which are dynamically produced during training.
We demonstrate the ability of this method to alleviate catastrophic forgetting on continual learning benchmarks.
arXiv Detail & Related papers (2023-01-17T20:52:48Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a
Polynomial Net Study [55.12108376616355]
The study of the NTK has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z) - Neural Differential Equations for Learning to Program Neural Nets
Through Continuous Learning Rules [10.924226420146626]
We introduce a novel combination of learning rules and Neural ODEs to build continuous-time sequence processing nets.
This yields continuous-time counterparts of Fast Weight Programmers and linear Transformers.
arXiv Detail & Related papers (2022-06-03T15:48:53Z) - Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z) - Skip-Connected Self-Recurrent Spiking Neural Networks with Joint
Intrinsic Parameter and Synaptic Weight Training [14.992756670960008]
We propose a new type of RSNN called Skip-Connected Self-Recurrent SNNs (ScSr-SNNs).
ScSr-SNNs can boost performance by up to 2.55% compared with other types of RSNNs trained by state-of-the-art BP methods.
arXiv Detail & Related papers (2020-10-23T22:27:13Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)