Do we know the operating principles of our computers better than those of our brain?
- URL: http://arxiv.org/abs/2005.05061v1
- Date: Wed, 6 May 2020 20:41:23 GMT
- Title: Do we know the operating principles of our computers better than those of our brain?
- Authors: János Végh and Ádám J. Berki
- Abstract summary: The paper discusses how conventional principles, components, and ways of thinking about computing limit how closely biological systems can be mimicked.
We describe what changes will be necessary in the computing paradigms to get closer to the marvelously efficient operation of biological neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing interest in understanding the behavior of biological
neural networks, and the increasing use of artificial neural networks in
different fields and at different scales, both require a thorough understanding
of how neuromorphic computing works. On one side, the need to program those
artificial neuron-like elements and, on the other, the necessity for a large
number of such elements to cooperate, communicate, and compute during tasks
must be scrutinized to determine how efficiently conventional computing can
assist in implementing such systems. Some electronic components bear a
surprising resemblance to some biological structures. However, combining them
with components that work on different principles can result in systems with
very poor efficacy. The paper discusses how conventional principles,
components, and ways of thinking about computing limit how closely biological
systems can be mimicked. We describe what changes will be necessary in the
computing paradigms to get closer to the marvelously efficient operation of
biological neural networks.
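The abstract contrasts the operating principle of biological neural networks with that of conventional processors but gives no formalism. As a rough illustration only, the sketch below shows a minimal leaky integrate-and-fire neuron in Python: state is integrated over time and communication happens only through sparse spike events, unlike the clocked, instruction-driven operation of a conventional core. All names and parameter values (`tau`, `v_th`, the constant drive) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
               v_th=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron (illustrative only).

    The neuron integrates its input over time and communicates only by
    emitting sparse spike events, unlike a conventional processor that
    evaluates operands on every clock cycle.  All parameter values here
    are illustrative assumptions, not taken from the paper.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration of the membrane potential toward rest.
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_th:               # Threshold crossing -> spike event.
            spikes.append(t * dt)
            v = v_reset             # Reset after the spike.
    return spikes

# Example: a constant drive produces a regular spike train.
drive = np.full(1000, 0.06)         # 1 s of input at 1 ms resolution
print(lif_neuron(drive)[:5])
```

The point of the sketch is only the event-driven contrast; real neuromorphic systems add synapse models, communication fabrics, and learning rules on top of this.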
Related papers
- Multilevel Interpretability Of Artificial Neural Networks: Leveraging Framework And Methods From Neuroscience [7.180126523609834]
We argue that interpreting both biological and artificial neural systems requires analyzing those systems at multiple levels of analysis.
We present a series of analytical tools that can be used to analyze biological and artificial neural systems.
Overall, the multilevel interpretability framework provides a principled way to tackle neural system complexity.
arXiv Detail & Related papers (2024-08-22T18:17:20Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning for spiking neural networks with nearly zero forgetting.
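As a rough sketch of the kind of Hebbian/anti-Hebbian subspace extraction the summary above refers to, the snippet below implements a classic textbook construction (Rubner-Tavan style): Hebbian (Oja) feedforward updates plus anti-Hebbian lateral weights that decorrelate the output units. It is not the cited paper's method, it uses no spiking neurons, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_anti_hebbian_pca(X, k=2, lr=0.005, epochs=40):
    """Toy Hebbian / anti-Hebbian principal-subspace network (illustrative).

    Feedforward weights W follow a Hebbian (Oja-style) rule; lateral weights
    L between the output units follow an anti-Hebbian rule that decorrelates
    their activities, so together the units come to span roughly the
    principal subspace of the data.  This is a generic textbook construction,
    not the cited paper's exact mechanism.
    """
    n = X.shape[1]
    W = rng.normal(scale=0.1, size=(k, n))   # feedforward (Hebbian) weights
    L = np.zeros((k, k))                     # lateral (anti-Hebbian) weights
    for _ in range(epochs):
        for x in X:
            y = np.empty(k)
            for i in range(k):
                # Each unit sees the input plus lateral input from earlier units.
                y[i] = W[i] @ x + L[i, :i] @ y[:i]
            for i in range(k):
                # Hebbian update with Oja's decay term (keeps weights bounded).
                W[i] += lr * y[i] * (x - y[i] * W[i])
                # Anti-Hebbian update: decorrelate unit i from earlier units.
                L[i, :i] -= lr * y[i] * y[:i]
    return W

# Example: 5-D data whose variance is concentrated in the first two axes.
X = rng.normal(size=(400, 5)) * np.array([2.0, 1.5, 0.2, 0.2, 0.2])
W = hebbian_anti_hebbian_pca(X)
print(np.round(W, 2))   # rows should end up concentrated on axes 0 and 1
```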
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
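The summary above does not spell out the integer quadratic integrate-and-fire formulation, so the following is only a generic fixed-point QIF update written in Python for intuition: integer state, a power-of-two shift in place of a divide, threshold-and-reset spiking. All constants and the interface are assumptions, not values from the POPPINS paper.

```python
def iqif_step(v, i_in, v_rest=0, v_crit=64, v_peak=256, v_reset=-16, shift=7):
    """One update of a toy integer quadratic integrate-and-fire neuron.

    All state and parameters are plain integers, and the quadratic term is
    scaled by a power-of-two shift, mimicking the kind of multiplier-light,
    fixed-point arithmetic a digital neuromorphic core can use.  The
    constants here are illustrative assumptions, not from the paper.
    """
    # Quadratic membrane dynamics: dv ~ (v - v_rest)(v - v_crit) + input.
    dv = ((v - v_rest) * (v - v_crit)) >> shift
    v = v + dv + i_in
    if v >= v_peak:                  # threshold crossing -> emit a spike
        return v_reset, 1
    return v, 0

# Example: drive the neuron with a constant integer current.
v, spike_times = 0, []
for t in range(200):
    v, spiked = iqif_step(v, i_in=10)
    if spiked:
        spike_times.append(t)
print(spike_times)
```

Below the critical value the quadratic term acts as a leak toward rest; above it, the term grows without bound until the threshold-and-reset fires, which is the qualitative behavior a QIF neuron is chosen for.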
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- The brain as a probabilistic transducer: an evolutionarily plausible network architecture for knowledge representation, computation, and behavior [14.505867475659274]
We offer a general theoretical framework for brain and behavior that is evolutionarily and computationally plausible.
The brain in our abstract model is a network of nodes and edges. Both nodes and edges in our network have weights and activation levels.
By specifying the innate (genetic) components of the network, we show how evolution could endow the network with initial adaptive rules and goals that are then enriched through learning.
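The framework is described only abstractly (a network whose nodes and edges both carry weights and activation levels), so a minimal data-structure sketch may help fix the idea; the propagation rule used here (weighted flow plus decay) is purely our own assumption, not the paper's specification.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    weight: float = 1.0        # innate (genetic) strength of the node
    activation: float = 0.0    # current activation level

@dataclass
class Edge:
    src: int
    dst: int
    weight: float = 1.0        # strength of the connection
    activation: float = 0.0    # recent traffic along this edge

@dataclass
class TransducerNet:
    """Minimal sketch of a network in which both nodes and edges carry
    weights and activation levels, as the cited framework describes.
    The update rule in step() is an illustrative assumption."""
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def step(self, external_input, decay=0.9):
        # Decay node activations and add any external drive.
        new_act = [decay * n.activation + external_input.get(i, 0.0)
                   for i, n in enumerate(self.nodes)]
        for e in self.edges:
            # Flow along an edge depends on its weight and the source activation.
            flow = e.weight * self.nodes[e.src].activation
            e.activation = decay * e.activation + flow
            new_act[e.dst] += self.nodes[e.dst].weight * flow
        for n, a in zip(self.nodes, new_act):
            n.activation = a

# Example: a 3-node chain driven at node 0.
net = TransducerNet(nodes=[Node() for _ in range(3)],
                    edges=[Edge(0, 1, 0.5), Edge(1, 2, 0.5)])
for _ in range(3):
    net.step({0: 1.0})
print([round(n.activation, 3) for n in net.nodes])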
arXiv Detail & Related papers (2021-12-26T14:37:47Z)
- A multi-agent model for growing spiking neural networks [0.0]
This project has explored rules for growing the connections between the neurons in Spiking Neural Networks as a learning mechanism.
Results in a simulation environment showed that for a given set of parameters it is possible to reach topologies that reproduce the tested functions.
This project also opens the door to the usage of techniques like genetic algorithms for obtaining the best suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
- On the spatiotemporal behavior in biology-mimicking computing systems [0.0]
The payload performance of conventional computing systems, from single processors to supercomputers, has reached the limits that nature enables.
Both the growing demand to cope with "big data" (based on, or assisted by, artificial intelligence) and the interest in understanding the operation of our brain more completely, stimulated the efforts to build biology-mimicking computing systems.
These systems require an unusually large number of processors, which introduces performance limitations and nonlinear scaling.
arXiv Detail & Related papers (2020-09-18T13:53:58Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
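The hardware survey's central point, that hardware can exploit the event-driven nature of spiking networks, can be illustrated with a toy address-event loop: work is done only when a spike event (time, neuron id) arrives, never for idle neurons. The network structure, weights, and delay below are illustrative assumptions, not taken from the survey.

```python
import heapq

def run_event_driven(events, fan_out, weights, threshold=1.0, delay=1):
    """Tiny event-driven (address-event style) simulation loop (illustrative).

    Computation happens only when a spike event arrives; there is no
    per-timestep update of idle neurons, which is the property
    spiking-neural-network hardware exploits.  Constants are assumptions.
    """
    potentials = {}                      # membrane potential per neuron id
    queue = list(events)                 # (time, neuron_id) spike events
    heapq.heapify(queue)
    emitted = []
    while queue:
        t, nid = heapq.heappop(queue)
        emitted.append((t, nid))
        for target in fan_out.get(nid, []):
            # Integrate only the neurons actually touched by this event.
            potentials[target] = potentials.get(target, 0.0) + weights[(nid, target)]
            if potentials[target] >= threshold:
                potentials[target] = 0.0                    # reset after firing
                heapq.heappush(queue, (t + delay, target))  # schedule its spike
    return emitted

# Example: two input spikes converge on neuron 2, which then drives neuron 3.
fan_out = {0: [2], 1: [2], 2: [3]}
weights = {(0, 2): 0.6, (1, 2): 0.6, (2, 3): 1.0}
print(run_event_driven([(0, 0), (1, 1)], fan_out, weights))
```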
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.