Low-Dimensional Manifolds Support Multiplexed Integrations in Recurrent
Neural Networks
- URL: http://arxiv.org/abs/2011.10435v1
- Date: Fri, 20 Nov 2020 14:58:47 GMT
- Title: Low-Dimensional Manifolds Support Multiplexed Integrations in Recurrent
Neural Networks
- Authors: Arnaud Fanthomme (ENS Paris), Rémi Monasson (ENS Paris)
- Abstract summary: We study the learning dynamics and the representations emerging in Recurrent Neural Networks trained to integrate one or multiple temporal signals.
We show, both for linear and ReLU neurons, that the network's internal state lives close to a D-dimensional manifold, whose shape is related to the activation function.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the learning dynamics and the representations emerging in Recurrent
Neural Networks trained to integrate one or multiple temporal signals.
Combining analytical and numerical investigations, we characterize the
conditions under which an RNN with n neurons learns to integrate D (≪ n) scalar
signals of arbitrary duration. We show, both for linear and ReLU neurons, that
its internal state lives close to a D-dimensional manifold, whose shape is
related to the activation function. Each neuron therefore carries, to various
degrees, information about the value of all integrals. We discuss the deep
analogy between our results and the concept of mixed selectivity forged by
computational neuroscientists to interpret cortical recordings.
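Not the authors' code: as a rough illustration of the setup in the abstract, the sketch below trains a small ReLU RNN (all hyperparameters are arbitrary assumptions) to reproduce the running integrals of D = 2 scalar input signals, then checks via the hidden-state covariance spectrum whether most of the variance is concentrated in roughly D directions, as a low-dimensional manifold would suggest.

```python
# Minimal sketch (not the authors' code): train a small ReLU RNN to reproduce
# the running integrals of D = 2 scalar signals, then inspect the hidden-state
# covariance spectrum; hyperparameters below are arbitrary assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_neurons, D, T, batch = 64, 2, 50, 256

class IntegratorRNN(nn.Module):
    def __init__(self, n, d):
        super().__init__()
        self.rnn = nn.RNN(input_size=d, hidden_size=n,
                          nonlinearity="relu", batch_first=True)
        self.readout = nn.Linear(n, d, bias=False)

    def forward(self, x):
        h, _ = self.rnn(x)          # hidden trajectory, shape (batch, T, n)
        return self.readout(h), h   # readout should track the running integrals

model = IntegratorRNN(n_neurons, D)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    x = 0.1 * torch.randn(batch, T, D)    # input signals to integrate
    target = torch.cumsum(x, dim=1)       # ground-truth running integrals
    pred, _ = model(x)
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# If the internal state lives near a D-dimensional manifold, the top D
# eigenvalues of the state covariance should carry most of the variance.
with torch.no_grad():
    x = 0.1 * torch.randn(batch, T, D)
    _, h = model(x)
    states = h.reshape(-1, n_neurons)
    states = states - states.mean(dim=0)
    cov = states.T @ states / states.shape[0]
    evals = torch.linalg.eigvalsh(cov).flip(0)   # descending eigenvalues
    print("fraction of variance in top D components:",
          (evals[:D].sum() / evals.sum()).item())
```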
Related papers
- CrEIMBO: Cross Ensemble Interactions in Multi-view Brain Observations [3.3713037259290255]
CrEIMBO (Cross-Ensemble Interactions in Multi-view Brain Observations) identifies the composition of per-session neural ensembles.
CrEIMBO distinguishes session-specific from global (session-invariant) computations by exploring when distinct sub-circuits are active.
We demonstrate CrEIMBO's ability to recover ground truth components in synthetic data and uncover meaningful brain dynamics.
arXiv Detail & Related papers (2024-05-27T17:48:32Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of
Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP).
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent
Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - Understanding Neural Coding on Latent Manifolds by Sharing Features and
Dividing Ensembles [3.625425081454343]
Systems neuroscience relies on two complementary views of neural data: single-neuron tuning curves and analyses of population activity.
These two perspectives combine elegantly in neural latent variable models that constrain the relationship between latent variables and neural activity.
We propose feature sharing across neural tuning curves, which significantly improves performance and leads to better-behaved optimization.
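The feature-sharing idea can be pictured with a toy construction (this is not the paper's model): every neuron's tuning curve over a one-dimensional latent variable is a non-negative combination of a small set of shared basis functions, so the per-neuron fits draw on common features rather than being estimated independently.

```python
# Illustrative sketch only (not the paper's model): "feature sharing" here means
# each neuron's tuning curve over a 1D latent z is a non-negative combination
# of a few shared basis functions instead of an independently fit curve.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_basis, n_points = 30, 5, 200

z = np.linspace(0.0, 1.0, n_points)                  # latent-variable grid
centers = np.linspace(0.1, 0.9, n_basis)
basis = np.exp(-(z[:, None] - centers[None, :]) ** 2 / (2 * 0.05 ** 2))

weights = rng.gamma(shape=1.0, scale=1.0, size=(n_basis, n_neurons))
tuning_curves = basis @ weights                      # (n_points, n_neurons)

# Each column is one neuron's tuning curve; all columns are built from the
# same five shared features, which regularizes the per-neuron estimates.
print(tuning_curves.shape)
```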
arXiv Detail & Related papers (2022-10-06T18:37:49Z) - Neural Integro-Differential Equations [2.001149416674759]
Integro-Differential Equations (IDEs) are generalizations of differential equations that comprise both an integral and a differential component.
Neural Integro-Differential Equations (NIDE) is a framework that models the ordinary and integral components of IDEs using neural networks.
We show that NIDE can decompose dynamics into its Markovian and non-Markovian constituents.
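As a hedged illustration of that decomposition (not the NIDE implementation), the sketch below integrates dy/dt = f(y, t) + ∫_0^t g(y(s), s) ds with two small networks standing in for the Markovian term f and the non-Markovian term g, using a plain Euler scheme; the architectures and step size are assumptions.

```python
# Rough sketch of the idea (not the NIDE code): one network models the ordinary
# (Markovian) part and another the integral (non-Markovian) part of the dynamics,
# integrated here with a simple Euler scheme; the nets are untrained stand-ins.
import torch
import torch.nn as nn

dim, dt, n_steps = 2, 0.01, 300
f_net = nn.Sequential(nn.Linear(dim + 1, 32), nn.Tanh(), nn.Linear(32, dim))
g_net = nn.Sequential(nn.Linear(dim + 1, 32), nn.Tanh(), nn.Linear(32, dim))

def solve(y0):
    y, integral = y0, torch.zeros_like(y0)
    trajectory = [y]
    for step in range(n_steps):
        t = torch.full((1,), step * dt)
        inp = torch.cat([y, t])
        integral = integral + g_net(inp) * dt     # running non-Markovian term
        y = y + (f_net(inp) + integral) * dt      # Euler step
        trajectory.append(y)
    return torch.stack(trajectory)

traj = solve(torch.tensor([1.0, 0.0]))
print(traj.shape)   # (n_steps + 1, dim); in practice f_net, g_net are fit to data
```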
arXiv Detail & Related papers (2022-06-28T20:39:35Z) - Continuous Learning and Adaptation with Membrane Potential and
Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - The distribution of inhibitory neurons in the C. elegans connectome
facilitates self-optimization of coordinated neural activity [78.15296214629433]
The nervous system of the nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size.
A general challenge is to better understand the relationship between neural organization and neural activity at the system level.
We implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron.
arXiv Detail & Related papers (2020-10-28T23:11:37Z) - A Graph Neural Network Framework for Causal Inference in Brain Networks [0.3392372796177108]
A central question in neuroscience is how self-organizing dynamic interactions in the brain emerge on their relatively static backbone.
We present a graph neural network (GNN) framework to describe functional interactions based on structural anatomical layout.
We show that GNNs are able to capture long-term dependencies in data and also scale up to the analysis of large-scale networks.
arXiv Detail & Related papers (2020-10-14T15:01:21Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
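A generic descent-ascent loop conveys the flavor of such a min-max game (this is not the paper's exact objective or estimator): a structural network f and an adversarial critic u are trained on opposite sides of a moment-matching objective over toy instrumental-variable data.

```python
# Hedged sketch (not the paper's objective): descent-ascent on a generic
# adversarial moment-matching game min_f max_u E[(y - f(x)) u(z)] - 0.5 E[u(z)^2],
# with both players parameterized by small neural networks; data are synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)
f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # structural function
u = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # adversarial critic
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_u = torch.optim.Adam(u.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(256, 1)                   # instrument (toy data)
    x = z + 0.3 * torch.randn(256, 1)         # regressor
    y = 2.0 * x + 0.1 * torch.randn(256, 1)   # outcome

    # critic (max player) ascends on the game value
    game = ((y - f(x)) * u(z)).mean() - 0.5 * (u(z) ** 2).mean()
    opt_u.zero_grad()
    (-game).backward()
    opt_u.step()

    # model (min player) descends on a fresh evaluation of the game
    game = ((y - f(x)) * u(z)).mean() - 0.5 * (u(z) ** 2).mean()
    opt_f.zero_grad()
    game.backward()
    opt_f.step()
```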
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Spatio-Temporal Graph Convolution for Resting-State fMRI Analysis [11.85489505372321]
We train a spatio-temporal graph convolutional network (ST-GCN) on short sub-sequences of the BOLD time series to model the non-stationary nature of functional connectivity.
ST-GCN is significantly more accurate than common approaches in predicting gender and age based on BOLD signals.
arXiv Detail & Related papers (2020-03-24T01:56:50Z)
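A toy snippet (not the ST-GCN code; the window length, stride, and adjacency are assumptions) illustrates the sub-sequence idea: a synthetic BOLD matrix of regions by time is sliced into short overlapping windows, and one normalized graph-convolution step is applied per window.

```python
# Not the ST-GCN implementation: a toy view of the preprocessing described above,
# i.e. short overlapping sub-sequences of a BOLD series (regions x time) with a
# single spatial graph-convolution step applied to each window.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints, window, stride = 90, 400, 64, 32

bold = rng.standard_normal((n_regions, n_timepoints))     # synthetic BOLD signal
A = (rng.random((n_regions, n_regions)) < 0.05).astype(float)
A = np.maximum(A, A.T) + np.eye(n_regions)                # symmetric adjacency + self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt                       # normalized adjacency

W = 0.1 * rng.standard_normal((window, 16))               # per-window feature projection

windows = [bold[:, s:s + window] for s in range(0, n_timepoints - window + 1, stride)]
features = [np.maximum(A_hat @ w @ W, 0.0) for w in windows]   # graph conv + ReLU
print(len(windows), features[0].shape)                    # 11 windows, (90, 16) each
```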