Optimal Learning with Excitatory and Inhibitory synapses
- URL: http://arxiv.org/abs/2005.12330v1
- Date: Mon, 25 May 2020 18:25:54 GMT
- Title: Optimal Learning with Excitatory and Inhibitory synapses
- Authors: Alessandro Ingrosso
- Abstract summary: I study the problem of storing associations between analog signals in the presence of correlations.
I characterize the typical learning performance in terms of the power spectrum of random input and output processes.
- Score: 91.3755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Characterizing the relation between weight structure and input/output
statistics is fundamental for understanding the computational capabilities of
neural circuits. In this work, I study the problem of storing associations
between analog signals in the presence of correlations, using methods from
statistical mechanics. I characterize the typical learning performance in terms
of the power spectrum of random input and output processes. I show that optimal
synaptic weight configurations reach a capacity of 0.5 for any fraction of
excitatory to inhibitory weights and have a peculiar synaptic distribution with
a finite fraction of silent synapses. I further provide a link between typical
learning performance and principal components analysis in single cases. These
results may shed light on the synaptic profile of brain circuits, such as
cerebellar structures, that are thought to engage in processing time-dependent
signals and performing on-line prediction.
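The storage problem described in the abstract can be illustrated with a toy numerical sketch. This is not the paper's analysis (which is analytic, via statistical mechanics, and treats analog associations): it is a minimal sign-constrained perceptron trained with a projected perceptron rule, where every weight is pinned to be excitatory (non-negative) or inhibitory (non-positive), and the fraction of silent (exactly zero) synapses is read off at the end. All parameter values and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # number of input synapses
alpha = 0.3      # load P/N, kept well below capacity for this toy demo
P = int(alpha * N)
f_exc = 0.8      # fraction of excitatory synapses (illustrative choice)

# Fixed sign assignment: +1 for excitatory, -1 for inhibitory synapses
signs = np.where(rng.random(N) < f_exc, 1.0, -1.0)

# Random input patterns and binary target outputs to be stored
X = rng.standard_normal((P, N))
y = np.where(rng.random(P) < 0.5, 1.0, -1.0)

w = signs * 0.1  # start from small weights of the correct sign
eta = 0.05
for _ in range(2000):
    margins = y * (X @ w) / np.sqrt(N)
    errors = margins <= 0
    if not errors.any():
        break
    # Perceptron-style batch update on the misclassified patterns...
    w += eta * (y[errors, None] * X[errors]).sum(axis=0) / np.sqrt(N)
    # ...followed by projection onto the sign constraint: excitatory
    # weights are clipped at zero from below, inhibitory from above.
    # The projection is what creates exactly-zero (silent) synapses.
    w = signs * np.maximum(signs * w, 0.0)

silent = np.mean(w == 0.0)
print(f"unstored patterns: {int(errors.sum())}, silent fraction: {silent:.2f}")
```

The clipping step is the key ingredient: without it, optimal weights are generically nonzero, while the sign constraint pushes a finite fraction of weights onto the boundary at zero, qualitatively matching the silent-synapse fraction discussed in the abstract.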
Related papers
- Heterogeneous quantization regularizes spiking neural network activity [0.0]
We present a data-blind neuromorphic signal conditioning strategy whereby analog data are normalized and quantized into spike phase representations.
We extend this mechanism by adding a data-aware calibration step whereby the range and density of the quantization weights adapt to accumulated input statistics.
arXiv Detail & Related papers (2024-09-27T02:25:44Z) - Emulating Complex Synapses Using Interlinked Proton Conductors [17.304569471460013]
We experimentally realize the Benna-Fusi artificial complex synapse.
The memory consolidation from coupled storage components is revealed by both numerical simulations and experimental observations.
Our experimental realization of the complex synapse suggests a promising approach to enhance memory capacity and to enable continual learning.
arXiv Detail & Related papers (2024-01-26T18:16:06Z) - Machine learning at the mesoscale: a computation-dissipation bottleneck [77.34726150561087]
We study a computation-dissipation bottleneck in mesoscopic systems used as input-output devices.
Our framework sheds light on a crucial compromise between information compression, input-output computation and dynamic irreversibility induced by non-reciprocal interactions.
arXiv Detail & Related papers (2023-07-05T15:46:07Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Neuronal architecture extracts statistical temporal patterns [1.9662978733004601]
We show how higher-order temporal (co-)fluctuations can be employed to represent and process information.
A simple biologically inspired feedforward neuronal model is able to extract information from up to the third order cumulant to perform time series classification.
arXiv Detail & Related papers (2023-01-24T18:21:33Z) - The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Spatiotemporal Spike-Pattern Selectivity in Single Mixed-Signal Neurons with Balanced Synapses [0.27998963147546135]
Mixed-signal neuromorphic processors could be used for inference and learning.
We show how inhomogeneous synaptic circuits could be utilized for resource-efficient implementation of network layers.
arXiv Detail & Related papers (2021-06-10T12:04:03Z) - Weakly-correlated synapses promote dimension reduction in deep neural networks [1.7532045941271799]
How synaptic correlations affect neural correlations to produce disentangled hidden representations remains elusive.
We propose a model of dimension reduction, taking into account pairwise correlations among synapses.
Our theory determines the synaptic-correlation scaling form requiring only mathematical self-consistency.
arXiv Detail & Related papers (2020-06-20T13:11:37Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the role its stochasticity plays in that success is still unclear.
We show that multiplicative noise commonly arises in the parameter updates due to gradient variance and leads to heavy-tailed behaviour.
A detailed analysis describes how key factors, including step size and data, shape this behaviour, with similar results observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
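The mechanism summarized in the last entry, multiplicative noise producing heavy tails, can be sketched with a classic toy model rather than the paper's own setting: a Kesten-type recursion that is contractive on average but occasionally expanding. The distributions, thresholds, and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Kesten-type recursion x_{t+1} = a_t * x_t + b_t: contractive on average
# (E[log a] < 0) yet occasionally expanding (P(a > 1) > 0), which is the
# textbook mechanism by which multiplicative noise yields heavy tails.
T = 50_000
a = rng.uniform(0.2, 1.6, size=T)   # multiplicative (step-size-like) noise
b = rng.standard_normal(T)          # additive (gradient-like) noise
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = a[t] * x[t - 1] + b[t]

samples = x[1000:]                  # discard burn-in

# Sample kurtosis: a Gaussian has kurtosis 3; a heavy-tailed stationary
# distribution driven by the multiplicative noise gives a much larger value.
kurtosis = np.mean((samples - samples.mean()) ** 4) / samples.var() ** 2
print(f"sample kurtosis: {kurtosis:.1f} (Gaussian reference: 3.0)")
```

Replacing the uniform multiplicative noise with a constant below 1 removes the heavy tails and the kurtosis drops back toward the Gaussian value, which isolates the multiplicative term as the source of the tail behaviour.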
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.