Fault-Tolerant Neural Networks from Biological Error Correction Codes
- URL: http://arxiv.org/abs/2202.12887v3
- Date: Fri, 9 Feb 2024 15:48:36 GMT
- Title: Fault-Tolerant Neural Networks from Biological Error Correction Codes
- Authors: Alexander Zlokapa, Andrew K. Tan, John M. Martyn, Ila R. Fiete, Max
Tegmark, Isaac L. Chuang
- Abstract summary: In the grid cells of the mammalian cortex, analog error correction codes have been observed to protect states against neural spiking noise.
Here, we use these biological error correction codes to develop a universal fault-tolerant neural network that achieves reliable computation if the faultiness of each neuron lies below a sharp threshold.
The discovery of a phase transition from faulty to fault-tolerant neural computation suggests a mechanism for reliable computation in the cortex.
- Score: 45.82537918529782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has been an open question in deep learning if fault-tolerant computation
is possible: can arbitrarily reliable computation be achieved using only
unreliable neurons? In the grid cells of the mammalian cortex, analog error
correction codes have been observed to protect states against neural spiking
noise, but their role in information processing is unclear. Here, we use these
biological error correction codes to develop a universal fault-tolerant neural
network that achieves reliable computation if the faultiness of each neuron
lies below a sharp threshold; remarkably, we find that noisy biological neurons
fall below this threshold. The discovery of a phase transition from faulty to
fault-tolerant neural computation suggests a mechanism for reliable computation
in the cortex and opens a path towards understanding noisy analog systems
relevant to artificial intelligence and neuromorphic computing.
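As a toy illustration of the threshold phenomenon (a simple repetition code with majority voting, not the paper's grid-cell construction), the sketch below shows that when each neuron's fault rate lies below 1/2, the decoded error can be suppressed by adding redundant neurons, while above the threshold redundancy makes things worse:

```python
import numpy as np

rng = np.random.default_rng(0)

def majority_error(eps, k, trials=200_000):
    """Empirical decoding error of k noisy neurons with majority voting."""
    flips = rng.random((trials, k)) < eps   # each neuron fails w.p. eps
    return (flips.sum(axis=1) > k / 2).mean()

# Below the eps = 1/2 threshold, redundancy suppresses errors;
# above it, adding neurons increases the decoded error.
for eps in (0.05, 0.45, 0.55):
    errs = [majority_error(eps, k) for k in (1, 9, 49)]
    print(f"eps={eps:.2f}: error at k=1,9,49 ->", [f"{e:.3f}" for e in errs])
```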
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
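The verified networks themselves are not reproduced here; as a point of reference for the recovery task, the sketch below runs ISTA (iterative soft-thresholding), a classical compressed sensing baseline, on the same kind of problem. The dimensions, step count, and regularisation strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 50, 25, 3                      # signal dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                           # linear measurements

def ista(A, y, lam=0.01, steps=1000):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L  # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```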
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Neuromorphic Auditory Perception by Neural Spiketrum [27.871072042280712]
We introduce a neural spike coding model called spiketrum to transform time-varying analog signals into efficient spatiotemporal spike patterns.
The model provides a sparse and efficient coding scheme with a precisely controllable spike rate that facilitates training of spiking neural networks in various auditory perception tasks.
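The spiketrum model itself is not specified in the summary; as a stand-in for the analog-to-spike transform it describes, here is a minimal Poisson rate encoder whose max_rate parameter (an assumption) plays the role of the controllable spike rate mentioned above.

```python
import numpy as np

rng = np.random.default_rng(2)

def poisson_encode(signal, max_rate=100.0, dt=1e-3):
    """Encode an analog signal in [0, 1] as a Poisson spike train.

    Spike probability per time step is rate * dt, so max_rate directly
    controls the overall spike rate of the code.
    """
    rates = np.clip(signal, 0.0, 1.0) * max_rate
    return rng.random(signal.shape) < rates * dt

t = np.linspace(0, 1, 1000)
analog = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))    # toy auditory envelope
spikes = poisson_encode(analog)
print("spike count:", spikes.sum(), "of", spikes.size, "bins")
```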
arXiv Detail & Related papers (2023-09-11T13:06:19Z)
- Neuroevolutionary algorithms driven by neuron coverage metrics for semi-supervised classification [60.60571130467197]
In some machine learning applications the availability of labeled instances for supervised classification is limited while unlabeled instances are abundant.
We introduce neuroevolutionary approaches that exploit unlabeled instances by using neuron coverage metrics computed on the neural network architecture encoded by each candidate solution.
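Under the common definition of neuron coverage (the fraction of units driven above an activation threshold by at least one input), the metric is straightforward to compute on a candidate architecture; the two-layer ReLU network and the 0.25 threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def neuron_coverage(weights, inputs, threshold=0.25):
    """Fraction of hidden units activated above `threshold` on any input."""
    covered = []
    h = inputs
    for W in weights:
        h = np.maximum(h @ W, 0.0)            # ReLU layer
        covered.append((h > threshold).any(axis=0))
    return np.concatenate(covered).mean()

# Toy candidate architecture: 10 -> 16 -> 8, scored on unlabeled instances.
weights = [rng.standard_normal((10, 16)) * 0.3,
           rng.standard_normal((16, 8)) * 0.3]
unlabeled = rng.standard_normal((200, 10))
print("coverage:", neuron_coverage(weights, unlabeled))
```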
arXiv Detail & Related papers (2023-03-05T23:38:44Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Quadratic Neuron-empowered Heterogeneous Autoencoder for Unsupervised Anomaly Detection [8.271989261355785]
A novel type of neuron is proposed that replaces the inner product in the conventional neuron with a simplified quadratic function.
To the best of our knowledge, this is the first heterogeneous autoencoder built from different types of neurons.
Experiments show that heterogeneous autoencoders perform competitively with other state-of-the-art models.
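The summary does not give the exact neuron used; the sketch below adopts one published quadratic-neuron form (an assumption here), where the inner product is replaced by a product of two linear terms plus a power term.

```python
import numpy as np

rng = np.random.default_rng(4)

def quadratic_neuron(x, w1, w2, w3, b1, b2, b3):
    """A quadratic neuron: the inner product w.x + b of a conventional
    neuron is replaced by (w1.x + b1)(w2.x + b2) + w3.(x*x) + b3,
    followed by a ReLU. (This specific form is an assumption.)"""
    pre = (x @ w1 + b1) * (x @ w2 + b2) + (x * x) @ w3 + b3
    return np.maximum(pre, 0.0)

x = rng.standard_normal(8)
w1, w2, w3 = (rng.standard_normal(8) * 0.1 for _ in range(3))
print("activation:", quadratic_neuron(x, w1, w2, w3, 0.0, 1.0, 0.0))
```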
arXiv Detail & Related papers (2022-04-02T04:19:24Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Information contraction in noisy binary neural networks and its implications [11.742803725197506]
We consider noisy binary neural networks, where each neuron has a non-zero probability of producing an incorrect output.
Our key finding is a lower bound on the required number of neurons in noisy neural networks, the first bound of its kind.
This paper offers new understanding of noisy information processing systems through the lens of information theory.
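A rough intuition for why noise forces extra neurons: each noisy binary neuron acts like a binary symmetric channel, and mutual information contracts through a cascade of such channels. The sketch below computes this contraction analytically for a uniform input bit; it illustrates the information-theoretic viewpoint rather than reproducing the paper's bound.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mi_through_chain(eps, depth):
    """Mutual information (bits) between a uniform input bit and the output
    of `depth` cascaded noisy binary neurons, each modeled as a BSC(eps)."""
    p_eff = 0.5 * (1 - (1 - 2 * eps) ** depth)   # composed crossover prob.
    return 1.0 - h2(p_eff)

for depth in (1, 5, 20):
    print(f"depth={depth:2d}: I = {mi_through_chain(0.1, depth):.4f} bits")
```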
arXiv Detail & Related papers (2021-01-28T00:01:45Z)
- A simple normative network approximates local non-Hebbian learning in the cortex [12.940770779756482]
Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals.
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
These online algorithms can be implemented by neural networks whose synaptic learning rules resemble the calcium plateau potential-dependent plasticity observed in the cortex.
arXiv Detail & Related papers (2020-10-23T20:49:44Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and one-to-one error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
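For concreteness, a minimal single-layer predictive coding inference loop under the standard formulation: error units compute e = x - W r, and the latent r descends the prediction error. The symmetric W.T feedback below is exactly the kind of constraint the paper shows can be relaxed; sizes and the learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def pc_infer(x, W, steps=300, lr=0.1):
    """Infer a latent r minimizing the prediction error ||x - W r||^2."""
    r = np.zeros(W.shape[1])
    for _ in range(steps):
        e = x - W @ r        # error units: data minus top-down prediction
        r += lr * (W.T @ e)  # standard PC feeds errors back through W.T
    return r, x - W @ r

W = rng.standard_normal((20, 5)) * 0.3
x = W @ rng.standard_normal(5)            # data generated by the model
r, e = pc_infer(x, W)
print("residual prediction error:", np.linalg.norm(e))
```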
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
- Bayesian Neural Networks [0.0]
We show how errors in predictions from neural networks can be obtained in principle, and present the two favoured methods for characterising these errors.
We also describe how both of these methods have substantial pitfalls when put into practice.
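The summary does not name the two methods; as one widely used way to obtain such error bars, the sketch below builds a small ensemble of independently fitted random-feature regressors and reads predictive uncertainty from their disagreement. The model family and hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_member(x, y, n_feat=50, ridge=1e-3, seed=0):
    """One ensemble member: random cosine features fit by ridge regression."""
    r = np.random.default_rng(seed)
    W = r.standard_normal(n_feat)               # random frequencies
    b = r.uniform(0, 2 * np.pi, n_feat)         # random phases
    phi = np.cos(np.outer(x, W) + b)
    w = np.linalg.solve(phi.T @ phi + ridge * np.eye(n_feat), phi.T @ y)
    return lambda xq: np.cos(np.outer(xq, W) + b) @ w

x = np.sort(rng.uniform(-3, 3, 40))
y = np.sin(x) + 0.1 * rng.standard_normal(40)
ensemble = [fit_member(x, y, seed=s) for s in range(10)]

xq = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])      # inside and outside the data
preds = np.stack([f(xq) for f in ensemble])
print("predictive mean:", preds.mean(axis=0).round(2))
print("predictive std :", preds.std(axis=0).round(2))  # grows off-distribution
```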
arXiv Detail & Related papers (2020-06-02T09:43:00Z)