How can neuromorphic hardware attain brain-like functional capabilities?
- URL: http://arxiv.org/abs/2310.16444v1
- Date: Wed, 25 Oct 2023 08:09:52 GMT
- Title: How can neuromorphic hardware attain brain-like functional capabilities?
- Authors: Wolfgang Maass
- Abstract summary: Current neuromorphic hardware employs brain-like spiking neurons instead of standard artificial neurons.
Current architectures and training methods for networks of spiking neurons in NMHW are largely copied from artificial neural networks.
We need to focus on principles that are both easy to implement in NMHW and are likely to support brain-like functionality.
- Score: 0.6345523830122166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research on neuromorphic computing is driven by the vision that we can
emulate brain-like computing capability, learning capability, and
energy-efficiency in novel hardware. Unfortunately, this vision has so far been
pursued in a half-hearted manner. Most current neuromorphic hardware (NMHW)
employs brain-like spiking neurons instead of standard artificial neurons. This
is a good first step, which does improve the energy-efficiency of some
computations; see Rao et al. (2022) for one of many examples. But current
architectures and training methods for networks of spiking neurons in NMHW are
largely copied from artificial neural networks. Hence it is not surprising that
they inherit many deficiencies of artificial neural networks, rather than
attaining brain-like functional capabilities.
Of course, the brain is very complex, and we cannot implement all its details
in NMHW. Instead, we need to focus on principles that are both easy to
implement in NMHW and are likely to support brain-like functionality. The goal
of this article is to highlight some of them.
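To make the abstract's contrast concrete, here is a minimal sketch (not taken from the paper) of the difference between a standard artificial neuron and a discrete-time leaky integrate-and-fire (LIF) spiking neuron; all parameter values are illustrative assumptions.

```python
import numpy as np

def artificial_neuron(x, w, b):
    """Standard artificial neuron: a stateless weighted sum plus nonlinearity."""
    return np.maximum(0.0, np.dot(w, x) + b)   # ReLU output: one real number

def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire neuron: stateful, emits 0/1 spikes."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v += (dt / tau) * (-v) + i_t     # leaky integration of the input current
        if v >= v_thresh:                # threshold crossing -> spike and reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
x, w = rng.random(5), rng.random(5)
print(artificial_neuron(x, w, b=0.1))    # instantaneous real-valued output
print(lif_neuron(0.3 * np.ones(50)))     # sparse spike train over 50 time steps
```

The artificial neuron maps a vector to a number in one shot; the LIF neuron turns a current trace into a spike train over time, which is what makes event-driven, energy-efficient hardware implementations possible.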
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
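The entry above argues for neuronal heterogeneity. One common way to realize it, assumed here purely for illustration rather than taken from the paper, is to give each LIF neuron its own membrane time constant, so the population responds on multiple timescales:

```python
import numpy as np

def heterogeneous_lif_population(currents, taus, v_thresh=1.0, dt=1.0):
    """Simulate N LIF neurons in parallel, each with its own membrane
    time constant; `currents` has shape (T, N), `taus` has shape (N,)."""
    T, N = currents.shape
    v = np.zeros(N)
    spikes = np.zeros((T, N), dtype=int)
    for t in range(T):
        v += (dt / taus) * (-v) + currents[t]   # per-neuron leak rate
        fired = v >= v_thresh
        spikes[t, fired] = 1
        v[fired] = 0.0                           # reset only the fired neurons
    return spikes

rng = np.random.default_rng(1)
taus = rng.uniform(5.0, 50.0, size=8)            # heterogeneous timescales
spikes = heterogeneous_lif_population(0.3 * np.ones((100, 8)), taus)
print("spikes per neuron:", spikes.sum(axis=0))  # same input, different rates
```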
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
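A classical instance of the principle in the entry above is Oja's subspace rule, in which a Hebbian term plus a second term that acts like anti-Hebbian competition between output units extracts the principal subspace of the inputs. The sketch below uses this textbook rule as an illustrative stand-in, not as the paper's actual method:

```python
import numpy as np

def ojas_subspace_rule(X, k, lr=0.01, epochs=20, seed=0):
    """Extract the k-dimensional principal subspace of data X (T x d)
    with a purely local update (Oja's subspace rule):
        y = W x
        W += lr * (outer(y, x) - outer(y, y) @ W)
    The -y y^T W term suppresses redundancy between output units,
    playing the role of anti-Hebbian lateral interactions."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(k, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x
            W += lr * (np.outer(y, x) - np.outer(y, y) @ W)
    return W

# Data concentrated in a 2-D subspace of 5-D space, plus small noise.
rng = np.random.default_rng(42)
basis = np.linalg.qr(rng.normal(size=(5, 2)))[0]           # true subspace
X = rng.normal(size=(500, 2)) @ basis.T + 0.05 * rng.normal(size=(500, 5))
W = ojas_subspace_rule(X, k=2)
proj = basis @ basis.T
print(np.allclose(W @ proj, W, atol=0.1))  # True: rows of W lie in the subspace
```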
- Hyperdimensional Computing with Spiking-Phasor Neurons [0.9594432031144714]
Vector Symbolic Architectures (VSAs) are a powerful framework for representing compositional reasoning.
We run VSA algorithms on a substrate of spiking neurons that could be run efficiently on neuromorphic hardware.
arXiv Detail & Related papers (2023-02-28T20:09:12Z)
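Phasor-based spiking networks typically realize the Fourier Holographic Reduced Representation flavor of VSAs, in which symbols are vectors of unit-magnitude complex numbers and binding adds phases elementwise. The sketch below shows that algebra in plain numpy; the representation choice is an assumption for illustration, not code from the paper:

```python
import numpy as np

D = 1024                                      # vector dimensionality
rng = np.random.default_rng(0)

def random_phasor(d=D):
    """A symbol is a vector of unit-magnitude complex numbers (phasors);
    in a spiking implementation the phase is carried by spike timing."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, d))

def bind(a, b):       return a * b             # binding = phase addition
def unbind(c, a):     return c * np.conj(a)    # inverse binding via conjugate
def similarity(a, b): return np.real(np.vdot(a, b)) / len(a)

# Bind a role to a filler, then recover the filler from the bound pair.
role, filler, distractor = random_phasor(), random_phasor(), random_phasor()
recovered = unbind(bind(role, filler), role)
print(similarity(recovered, filler))      # ~1.0 : exact recovery
print(similarity(recovered, distractor))  # ~0.0 : unrelated symbol
```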
- Sequence learning in a spiking neuronal network with memristive synapses [0.0]
A core concept that lies at the heart of brain computation is sequence learning and prediction.
Neuromorphic hardware emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate.
We study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model.
arXiv Detail & Related papers (2022-11-29T21:07:23Z)
- Encoding Integers and Rationals on Neuromorphic Computers using Virtual Neuron [0.0]
We present the virtual neuron as an encoding mechanism for integers and rational numbers.
We show that it can perform an addition operation with 23 nJ of energy on average on a mixed-signal memristor-based neuromorphic processor.
arXiv Detail & Related papers (2022-08-15T23:18:26Z)
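The entry above encodes numbers with groups of spiking neurons. One plausible reading, assumed here purely for illustration (the paper's actual virtual-neuron construction may differ in detail), is a binary place-value scheme: neuron i carries bit i, and synaptic weights 2^i decode the value, so addition amounts to accumulating both spike patterns through the same weights:

```python
import numpy as np

def encode(n, n_bits=8):
    """Hypothetical place-value encoding: neuron i spikes iff bit i of n is set."""
    return np.array([(n >> i) & 1 for i in range(n_bits)])

def decode(spikes, n_bits=8):
    """Decode by summing spikes through place-value synaptic weights 2^i."""
    return int(spikes @ (2 ** np.arange(n_bits)))

def add(a_spikes, b_spikes, n_bits=8):
    """Both spike patterns converge on one accumulator through the same
    place-value weights, so the total weighted input equals a + b."""
    weights = 2 ** np.arange(n_bits)
    return int(a_spikes @ weights + b_spikes @ weights)

a, b = encode(23), encode(42)
print(decode(a), decode(b), add(a, b))   # 23 42 65
```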
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- A Robust Learning Rule for Soft-Bounded Memristive Synapses Competitive with Supervised Learning in Standard Spiking Neural Networks [0.0]
A view in theoretical neuroscience sees the brain as a function-computing device.
Being able to approximate functions is a fundamental axiom to build upon for future brain research.
In this work we apply a novel supervised learning algorithm - based on controlling niobium-doped strontium titanate memristive synapses - to learning non-trivial multidimensional functions.
arXiv Detail & Related papers (2022-04-12T10:21:22Z)
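"Soft-bounded" in the entry above means that weight updates shrink as a synapse approaches its conductance limits, which is how memristive devices saturate. The sketch below implements a generic soft-bound delta rule for illustration; the paper's device-specific rule for niobium-doped strontium titanate synapses is not reproduced here:

```python
import numpy as np

def soft_bounded_update(w, pre, error, lr=0.1, w_min=0.0, w_max=1.0):
    """Delta-rule-style update with soft bounds: the step size shrinks as the
    weight nears its limits, mimicking saturating memristive conductances.
    (A generic soft-bound rule assumed for illustration only.)"""
    dw = lr * error * pre
    scale = np.where(dw > 0, w_max - w, w - w_min)   # distance to the bound
    return np.clip(w + dw * scale, w_min, w_max)

# Repeated potentiation shows the characteristic saturating trajectory.
w = np.array([0.1])
trajectory = []
for _ in range(30):
    w = soft_bounded_update(w, pre=1.0, error=1.0)
    trajectory.append(float(w[0]))
print(np.round(trajectory[::5], 3))   # approaches w_max = 1.0 ever more slowly
```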
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Continual Learning with Deep Artificial Neurons [0.0]
We introduce Deep Artificial Neurons (DANs), which are themselves realized as deep neural networks.
We demonstrate that it is possible to meta-learn a single parameter vector, which we dub a neuronal phenotype, shared by all DANs in the network.
We show that a suitable neuronal phenotype can endow a single network with an innate ability to update its synapses with minimal forgetting.
arXiv Detail & Related papers (2020-11-13T17:50:10Z)
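The entry above replaces scalar neurons with small networks that all share one parameter vector. A minimal sketch of that idea follows; the layer sizes are arbitrary assumptions, and the phenotype here is random rather than meta-learned as in the paper:

```python
import numpy as np

def dan_forward(x, phenotype):
    """One Deep Artificial Neuron: a tiny two-layer MLP whose parameters are
    unpacked from a shared `phenotype` vector (sizes are illustrative)."""
    W1 = phenotype[:8].reshape(4, 2)      # 2 inputs -> 4 hidden units
    b1 = phenotype[8:12]
    W2 = phenotype[12:16].reshape(1, 4)   # 4 hidden units -> 1 output
    b2 = phenotype[16]
    h = np.tanh(W1 @ x + b1)
    return (W2 @ h + b2).item()

# Every DAN in a layer shares the same phenotype; only its 2-D input differs.
rng = np.random.default_rng(0)
phenotype = rng.normal(scale=0.5, size=17)
layer_inputs = rng.normal(size=(5, 2))    # 5 DANs, each with its own inputs
print(np.round([dan_forward(x, phenotype) for x in layer_inputs], 3))
```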
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
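The XOR claim in the last entry hinges on the activation being non-monotonic. The paper's exact ADA formula is not reproduced here; instead, the sketch below uses a generic bump-shaped activation as a stand-in to show why a single neuron with such an activation can compute XOR, which no single monotonic-activation neuron can:

```python
import numpy as np

def bump(z, sigma=0.5):
    """A non-monotonic, bump-shaped activation (an illustrative stand-in
    for the paper's apical dendrite activation)."""
    return np.exp(-z**2 / (2 * sigma**2))

# z = x1 + x2 - 1 places (0,1) and (1,0) at the bump's peak and puts
# (0,0) and (1,1) on its flanks, so one unit separates XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w, b = np.array([1.0, 1.0]), -1.0
y = bump(X @ w + b)
print(np.round(y, 3))          # high for XOR-true inputs, low otherwise
print((y > 0.5).astype(int))   # [0 1 1 0] -> XOR from a single neuron
```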