Predictive coding in balanced neural networks with noise, chaos and
delays
- URL: http://arxiv.org/abs/2006.14178v1
- Date: Thu, 25 Jun 2020 05:03:27 GMT
- Title: Predictive coding in balanced neural networks with noise, chaos and
delays
- Authors: Jonathan Kadmon, Jonathan Timcheck, and Surya Ganguli
- Abstract summary: We introduce an analytically tractable model of balanced predictive coding, in which the degree of balance and the degree of weight disorder can be dissociated.
Our work provides and solves a general theoretical framework for dissecting the differential contributions of neural noise, synaptic disorder, chaos, synaptic delays, and balance to the fidelity of predictive neural codes.
- Score: 24.76770648963407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biological neural networks face a formidable task: performing reliable
computations in the face of intrinsic stochasticity in individual neurons,
imprecisely specified synaptic connectivity, and nonnegligible delays in
synaptic transmission. A common approach to combatting such biological
heterogeneity involves averaging over large redundant networks of $N$ neurons
resulting in coding errors that decrease classically as $1/\sqrt{N}$. Recent
work demonstrated a novel mechanism whereby recurrent spiking networks could
efficiently encode dynamic stimuli, achieving a superclassical scaling in which
coding errors decrease as $1/N$. This specific mechanism involved two key
ideas: predictive coding, and a tight balance, or cancellation between strong
feedforward inputs and strong recurrent feedback. However, the theoretical
principles governing the efficacy of balanced predictive coding and its
robustness to noise, synaptic weight heterogeneity and communication delays
remain poorly understood. To discover such principles, we introduce an
analytically tractable model of balanced predictive coding, in which the degree
of balance and the degree of weight disorder can be dissociated unlike in
previous balanced network models, and we develop a mean field theory of coding
accuracy. Overall, our work provides and solves a general theoretical framework
for dissecting the differential contributions of neural noise, synaptic disorder,
chaos, synaptic delays, and balance to the fidelity of predictive neural codes,
reveals the fundamental role that balance plays in achieving superclassical
scaling, and unifies previously disparate models in theoretical neuroscience.
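The superclassical $1/N$ scaling referenced in the abstract arises from tight balance between feedforward drive and recurrent feedback: each neuron's voltage tracks the instantaneous coding error, and a spike is fired only when it reduces that error. The snippet below is a minimal numerical sketch of that spike-coding mechanism (in the spirit of the earlier recurrent spiking networks the abstract cites), not the analytically tractable model introduced in this paper; the leak rate, drive signal, and decoding weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_coding_error(N, steps=20000, dt=1e-4, lam=10.0):
    """RMS readout error of a tightly balanced spike-coding network tracking a 1-D signal.

    Each neuron's voltage is its projection of the coding error x - xhat, and a spike
    is fired only when it reduces the squared error, so the readout stays within
    roughly one decoding weight (~1/N) of the target."""
    G = rng.uniform(0.5, 1.5, N) / N        # decoding weights, O(1/N)
    thresh = G ** 2 / 2.0                   # spiking thresholds
    x, xhat = 0.0, 0.0
    r = np.zeros(N)                         # leaky-filtered spike trains
    sq_err = 0.0
    for t in range(steps):
        c = lam * (0.6 + 0.2 * np.sin(2 * np.pi * t * dt))   # slowly varying positive drive
        x += dt * (-lam * x + c)            # target dynamics: dx/dt = -lam*x + c(t)
        V = G * (x - xhat)                  # voltages encode the prediction error
        i = int(np.argmax(V - thresh))      # greedy rule: at most one spike per step
        if V[i] > thresh[i]:
            r[i] += 1.0                     # spike: readout jumps by G[i]
        r -= dt * lam * r                   # leak of the filtered spike trains
        xhat = G @ r                        # linear decoder
        sq_err += (x - xhat) ** 2
    return np.sqrt(sq_err / steps)

for N in (16, 64, 256):
    print(f"N={N:4d}  RMS coding error ~ {spike_coding_error(N):.4f}")
```

Doubling $N$ should roughly halve the reported RMS error, in contrast to the classical $1/\sqrt{N}$ improvement obtained by averaging independent noisy rate neurons.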
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
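AKOrN's neurons are phase oscillators rather than threshold units; that paper defines its own generalized, learnable coupling, so the following is only a sketch of the classical Kuramoto phase dynamics it builds on, with hypothetical parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the classical Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    N = theta.size
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + (K / N) * coupling)

# Hypothetical usage: N coupled phase "neurons" synchronizing over time.
N, K = 64, 2.0
theta = rng.uniform(0, 2 * np.pi, N)
omega = rng.normal(0.0, 0.5, N)          # natural frequencies
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K)
# order parameter r in [0, 1] measures phase coherence across the population
r = np.abs(np.exp(1j * theta).mean())
print(f"phase coherence r = {r:.2f}")
```

The order parameter r rises toward 1 as the oscillators synchronize, a collective dynamical state that static threshold units cannot express.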
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Feedback Favors the Generalization of Neural ODEs [24.342023073252395]
We present feedback neural networks, showing that a feedback loop can flexibly correct the learned latent dynamics of neural ordinary differential equations (neural ODEs).
The feedback neural network is a novel two-DOF neural network that performs robustly in unseen scenarios with no loss of accuracy on previously learned tasks.
arXiv Detail & Related papers (2024-10-14T08:09:45Z) - Spiking Neural Networks with Consistent Mapping Relations Allow High-Accuracy Inference [9.667807887916132]
Spike-based neuromorphic hardware has demonstrated substantial potential for low energy consumption and efficient inference.
Direct training of deep spiking neural networks is challenging, and conversion-based methods still require substantial time delays owing to unresolved conversion errors.
arXiv Detail & Related papers (2024-06-08T06:40:00Z) - Contribute to balance, wire in accordance: Emergence of backpropagation from a simple, bio-plausible neuroplasticity rule [0.0]
We introduce a novel neuroplasticity rule that offers a potential mechanism for implementing BP in the brain.
We demonstrate mathematically that our learning rule precisely replicates BP in layered neural networks without any approximations.
arXiv Detail & Related papers (2024-05-23T03:28:52Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - Correlative Information Maximization: A Biologically Plausible Approach
to Supervised Deep Neural Networks without Weight Symmetry [43.584567991256925]
We propose a new normative approach to describe the signal propagation in biological neural networks in both forward and backward directions.
This framework addresses many concerns about the biological plausibility of conventional artificial neural networks and the backpropagation algorithm.
Our approach provides a natural resolution to the weight symmetry problem between forward and backward signal propagation paths.
arXiv Detail & Related papers (2023-06-07T22:14:33Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Formalizing Generalization and Robustness of Neural Networks to Weight
Perturbations [58.731070632586594]
We provide the first formal analysis for feed-forward neural networks with non-negative monotone activation functions against weight perturbations.
We also design a new theory-driven loss function for training generalizable and robust neural networks against weight perturbations.
arXiv Detail & Related papers (2021-03-03T06:17:03Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and one-to-one error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
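The three implausible features named in the preceding entry are easiest to see in a standard predictive coding update (Rao-Ballard / Whittington-Bogacz style). The sketch below is a generic textbook version with assumed layer sizes and learning rates, not the relaxed algorithm proposed in that paper: note the reuse of the transposed forward weights, the multiplication by the activation derivative, and the one error unit per value unit.

```python
import numpy as np

rng = np.random.default_rng(2)

def pc_inference_and_learning(x0, target, Ws, f, fprime,
                              n_inf=50, lr_x=0.1, lr_w=0.01):
    """One training step of a standard predictive coding network.

    The backward pass reuses the transposed forward weights W.T, multiplies by
    the activation derivative f'(x), and pairs every value unit with its own
    error unit -- the features the paper above shows can be relaxed."""
    L = len(Ws)
    xs = [x0]
    for W in Ws:                       # initialize value nodes with a forward sweep
        xs.append(W @ f(xs[-1]))
    xs[-1] = target                    # clamp the output layer to the target

    for _ in range(n_inf):             # iterative inference on hidden value nodes
        eps = [xs[l + 1] - Ws[l] @ f(xs[l]) for l in range(L)]   # 1 error unit per value unit
        for l in range(1, L):          # hidden layers only (input and output are clamped)
            xs[l] += lr_x * (-eps[l - 1] + fprime(xs[l]) * (Ws[l].T @ eps[l]))

    eps = [xs[l + 1] - Ws[l] @ f(xs[l]) for l in range(L)]
    for l in range(L):                 # local, Hebbian-like weight updates
        Ws[l] += lr_w * np.outer(eps[l], f(xs[l]))
    return Ws

# Hypothetical usage on a tiny 3-layer network
f, fprime = np.tanh, lambda x: 1.0 - np.tanh(x) ** 2
Ws = [rng.normal(0, 0.3, (8, 4)), rng.normal(0, 0.3, (2, 8))]
Ws = pc_inference_and_learning(rng.normal(size=4), np.array([1.0, -1.0]), Ws, f, fprime)
```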