Using noise to probe recurrent neural network structure and prune synapses
- URL: http://arxiv.org/abs/2011.07334v2
- Date: Fri, 16 Jul 2021 18:07:01 GMT
- Title: Using noise to probe recurrent neural network structure and prune synapses
- Authors: Eli Moore and Rishidev Chaudhuri
- Abstract summary: Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning.
Noise is ubiquitous in neural systems, and often considered an irritant to be overcome.
Here we suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant.
- Score: 8.37609145576126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many networks in the brain are sparsely connected, and the brain eliminates
synapses during development and learning. How could the brain decide which
synapses to prune? In a recurrent network, determining the importance of a
synapse between two neurons is a difficult computational problem, depending on
the role that both neurons play and on all possible pathways of information
flow between them. Noise is ubiquitous in neural systems, and often considered
an irritant to be overcome. Here we suggest that noise could play a functional
role in synaptic pruning, allowing the brain to probe network structure and
determine which synapses are redundant. We construct a simple, local,
unsupervised plasticity rule that either strengthens or prunes synapses using
only synaptic weight and the noise-driven covariance of the neighboring
neurons. For a subset of linear and rectified-linear networks, we prove that
this rule preserves the spectrum of the original matrix and hence preserves
network dynamics even when the fraction of pruned synapses asymptotically
approaches 1. The plasticity rule is biologically plausible and may suggest a
new role for noise in neural computation.
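The abstract does not spell out the rule, so the Python sketch below is illustrative only: the importance score, constants, and sampling scheme are assumptions, not the authors' rule. It shows one standard way to prune while approximately preserving a matrix -- importance sampling with inverse-probability reweighting -- with scores built from a noise-estimated covariance.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100

    # A stable linear recurrent network: dx/dt = -x + W x + noise.
    W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))

    # Estimate the noise-driven stationary covariance by simulation.
    # Neuron i only ever needs the entries C[i, j] for its neighbors j,
    # which is what would make a rule of this kind local.
    dt, steps = 0.01, 10000
    x, samples = np.zeros(n), []
    for t in range(steps):
        x += dt * (-x + W @ x) + np.sqrt(dt) * rng.normal(size=n)
        if t >= steps // 2:
            samples.append(x.copy())
    C = np.cov(np.array(samples).T)

    # Assumed importance score: |w_ij| weighted by the covariance
    # magnitude of the pre- and postsynaptic neurons.
    score = np.abs(W) * np.abs(C)
    p = np.minimum(1.0, score / np.quantile(score, 0.80))  # keep ~20%

    # Prune with probability 1 - p; strengthen survivors by 1/p so that
    # E[W_pruned] = W, keeping the dynamics close on average.
    mask = rng.random(W.shape) < p
    W_pruned = np.where(mask, W / np.maximum(p, 1e-12), 0.0)

    print("fraction pruned:", 1.0 - mask.mean())
    print("spectral radius before/after:",
          np.abs(np.linalg.eigvals(W)).max().round(3),
          np.abs(np.linalg.eigvals(W_pruned)).max().round(3))

Because each neuron needs only the covariance entries it shares with its neighbors, a rule of this flavor is local in the sense the abstract describes.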
Related papers
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
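The summary gives no internals, so this toy is a loose illustration only; the leaky integrate-and-fire model, constants, and rate readout below are all invented, not the paper's framework. A scalar input drives a layer of LIF neurons, and a linear readout of their firing rates produces the regression estimate.

    import numpy as np

    rng = np.random.default_rng(2)
    n, dt, tau, v_th = 50, 1e-3, 20e-3, 1.0

    w_in = rng.normal(scale=2.0, size=n)   # input weights (assumed fixed)
    w_out = rng.normal(scale=0.1, size=n)  # readout; training would fit this

    def lif_rates(u, n_steps=1000):
        """Firing rate of each LIF neuron for a constant input u."""
        v = np.zeros(n)
        spikes = np.zeros(n)
        for _ in range(n_steps):
            v += dt / tau * (-v + w_in * u)  # leaky integration
            fired = v >= v_th
            spikes += fired
            v[fired] = 0.0                   # reset after a spike
        return spikes / (n_steps * dt)       # rates in Hz

    y_hat = w_out @ lif_rates(0.7)  # untrained readout, illustration only
    print("prediction:", round(float(y_hat), 3))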
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and a range of low-power, low-latency inference applications.
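The summary names the neuron model but not its update rule. As a sketch of what integer quadratic integrate-and-fire (QIF) dynamics can look like in fixed-point arithmetic (all numbers below are invented, not the POPPINS design):

    # Hypothetical integer QIF update; constants, shift, and reset values
    # are invented for illustration.
    V_REST, V_CRIT, V_PEAK, V_RESET = 0, 20, 127, -10
    SHIFT = 6  # scale the quadratic term by 2**-SHIFT instead of using floats

    def qif_step(v: int, i_syn: int) -> tuple[int, bool]:
        dv = ((v - V_REST) * (v - V_CRIT)) >> SHIFT  # quadratic membrane term
        v = v + dv + i_syn
        if v >= V_PEAK:
            return V_RESET, True   # spike, then reset
        return v, False

    v, spikes = V_RESET, 0
    for _ in range(200):
        v, fired = qif_step(v, i_syn=8)
        spikes += fired
    print("spikes in 200 steps:", spikes)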
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Deep neural networks as nested dynamical systems [0.0]
An analogy is often made between deep neural networks and actual brains, suggested by the nomenclature itself.
This article makes the case that the analogy should be different.
Since the "neurons" in deep neural networks are managing the changing weights, they are more akin to the synapses in the brain.
arXiv Detail & Related papers (2021-11-01T23:37:54Z)
- Rich dynamics caused by known biological brain network features resulting in stateful networks [0.0]
The internal state of a neuron or network becomes a defining factor in how information is represented within the network.
In this study, we assessed how varying specific intrinsic neuronal parameters enriched network state dynamics.
We found such effects were more pronounced in sparsely connected networks than in densely connected networks.
arXiv Detail & Related papers (2021-06-03T08:32:43Z)
- Condition Integration Memory Network: An Interpretation of the Meaning of the Neuronal Design [10.421465303670638]
This document introduces a hypothetical framework for the functional nature of primitive neural networks.
It analyzes the idea that the activity of neurons and synapses can symbolically reenact dynamic changes in the world, without participating in an algorithmic structure.
arXiv Detail & Related papers (2021-05-21T05:59:27Z)
- Understanding and mitigating noise in trained deep neural networks [0.0]
We study the propagation of noise in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers.
We find that noise accumulation is generally bounded: adding further network layers does not worsen the signal-to-noise ratio beyond a limit.
We identify criteria that allow engineers to design novel, noise-resilient neural network hardware.
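A toy experiment of my own construction (not the paper's setup) shows the flavor of the result: inject independent noise at every layer of a random tanh network; the saturating nonlinearity keeps the accumulated deviation bounded, where a linear network of the same gain would let it grow exponentially.

    import numpy as np

    rng = np.random.default_rng(1)
    width, depth, sigma, gain = 256, 30, 0.1, 2.0

    Ws = [rng.normal(scale=gain / np.sqrt(width), size=(width, width))
          for _ in range(depth)]

    x_clean = rng.normal(size=width)
    x_noisy = x_clean.copy()
    for layer, W in enumerate(Ws, start=1):
        x_clean = np.tanh(W @ x_clean)
        # Independent noise injected at every neuron of every layer.
        x_noisy = np.tanh(W @ x_noisy + sigma * rng.normal(size=width))
        rms = np.linalg.norm(x_noisy - x_clean) / np.sqrt(width)
        print(f"layer {layer:2d}: rms deviation {rms:.3f}")  # saturates with depth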
arXiv Detail & Related papers (2021-03-12T17:16:26Z)
- Continual Learning with Deep Artificial Neurons [0.0]
We introduce Deep Artificial Neurons (DANs), which are themselves realized as deep neural networks.
We demonstrate that it is possible to meta-learn a single parameter vector, which we dub a neuronal phenotype, shared by all DANs in the network.
We show that a suitable neuronal phenotype can endow a single network with an innate ability to update its synapses with minimal forgetting.
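Read literally, the summary suggests each neuron is itself a small network, with one phenotype vector shared by all neurons while synaptic weights stay per-connection. The toy below is only a guess at the shape of that idea; the sizes and the modulation scheme are invented.

    import numpy as np

    rng = np.random.default_rng(3)
    phen_dim, n_neurons, in_dim = 8, 16, 4

    phenotype = rng.normal(size=phen_dim)            # one vector shared by all DANs
    synapses = rng.normal(size=(n_neurons, in_dim))  # per-connection weights

    def dan(pre, w, phi):
        """A 'deep artificial neuron': a tiny two-layer map whose hidden
        layer is modulated by the shared phenotype vector phi."""
        h = np.tanh(phi * (w @ pre))  # (phen_dim,) hidden activity
        return np.tanh(h.mean())

    x = rng.normal(size=in_dim)
    layer_out = np.array([dan(x, w, phenotype) for w in synapses])
    print(layer_out.round(3))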
arXiv Detail & Related papers (2020-11-13T17:50:10Z)
- The distribution of inhibitory neurons in the C. elegans connectome facilitates self-optimization of coordinated neural activity [78.15296214629433]
The nervous system of the nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size.
A general challenge is to better understand the relationship between neural organization and neural activity at the system level.
We implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron.
arXiv Detail & Related papers (2020-10-28T23:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.