Bottom-up and top-down approaches for the design of neuromorphic
processing systems: Tradeoffs and synergies between natural and artificial
intelligence
- URL: http://arxiv.org/abs/2106.01288v2
- Date: Fri, 12 May 2023 22:20:46 GMT
- Title: Bottom-up and top-down approaches for the design of neuromorphic
processing systems: Tradeoffs and synergies between natural and artificial
intelligence
- Authors: Charlotte Frenkel, David Bol, Giacomo Indiveri
- Abstract summary: While Moore's law has driven exponential computing power expectations, its nearing end calls for new avenues for improving overall system performance.
One of these avenues is the exploration of alternative brain-inspired computing architectures that aim at achieving the flexibility and computational efficiency of biological neural processing systems.
We provide a comprehensive overview of the field, highlighting the different levels of granularity at which this paradigm shift is realized.
- Score: 3.874729481138221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Moore's law has driven exponential computing power expectations, its
nearing end calls for new avenues for improving the overall system performance.
One of these avenues is the exploration of alternative brain-inspired computing
architectures that aim at achieving the flexibility and computational
efficiency of biological neural processing systems. Within this context,
neuromorphic engineering represents a paradigm shift in computing based on the
implementation of spiking neural network architectures in which processing and
memory are tightly co-located. In this paper, we provide a comprehensive
overview of the field, highlighting the different levels of granularity at
which this paradigm shift is realized and comparing design approaches that
focus on replicating natural intelligence (bottom-up) versus those that aim at
solving practical artificial intelligence applications (top-down). First, we
present the analog, mixed-signal and digital circuit design styles, identifying
the boundary between processing and memory through time multiplexing, in-memory
computation, and novel devices. Then, we highlight the key tradeoffs for each
of the bottom-up and top-down design approaches, survey their silicon
implementations, and carry out detailed comparative analyses to extract design
guidelines. Finally, we identify necessary synergies and missing elements
required to achieve a competitive advantage for neuromorphic systems over
conventional machine-learning accelerators in edge computing applications, and
outline the key ingredients for a framework toward neuromorphic intelligence.
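The spiking-neuron abstraction underlying these architectures can be sketched in a few lines of code. The leaky integrate-and-fire (LIF) model below is an illustrative assumption, not one of the silicon circuits surveyed in the paper: the membrane potential serves as local state kept next to its update rule, loosely mirroring the tight co-location of memory and processing described in the abstract.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch. Illustrative only:
# parameter values (tau, v_thresh, v_reset) are arbitrary assumptions, not
# taken from any of the hardware implementations discussed above.

def lif_simulate(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron; return the membrane trace and spike times."""
    v = v_reset
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (-v + I) / tau, forward-Euler step
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:   # threshold crossing emits a spike event
            spikes.append(t)
            v = v_reset     # membrane resets after the spike
        trace.append(v)
    return trace, spikes

# A constant drive above threshold produces regular spiking
trace, spikes = lif_simulate([1.5] * 200)
```

Because state and computation live together in each neuron, communication between neurons reduces to sparse, asynchronous spike events, which is the property neuromorphic hardware exploits for energy efficiency.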
Related papers
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Spike-based Neuromorphic Computing for Next-Generation Computer Vision [1.2367795537503197]
Neuromorphic computing promises orders of magnitude improvement in energy efficiency compared to the traditional von Neumann computing paradigm.
The goal is to develop an adaptive, fault-tolerant, low-footprint, fast, low-energy intelligent system by learning and emulating brain functionality.
arXiv Detail & Related papers (2023-10-15T01:05:35Z)
- NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems [50.101188703826686]
We present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems.
NeuroBench is a collaboratively-designed effort from an open community of researchers across industry and academia.
arXiv Detail & Related papers (2023-04-10T15:12:09Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Integration of Neuromorphic AI in Event-Driven Distributed Digitized Systems: Concepts and Research Directions [0.2746383075956081]
We describe the current landscape of neuromorphic computing, focusing on characteristics that pose integration challenges.
We propose a microservice-based framework for neuromorphic systems integration, consisting of a neuromorphic-system proxy.
We also present concepts that could serve as a basis for the realization of this framework.
arXiv Detail & Related papers (2022-10-20T12:09:29Z)
- A deep learning theory for neural networks grounded in physics [2.132096006921048]
We argue that building large, fast and efficient neural networks on neuromorphic architectures requires rethinking the algorithms to implement and train them.
Our framework applies to a very broad class of models, namely systems whose state or dynamics are described by variational equations.
arXiv Detail & Related papers (2021-03-18T02:12:48Z)
- Ultra-Low-Power FDSOI Neural Circuits for Extreme-Edge Neuromorphic Intelligence [2.6199663901387997]
In-memory computing mixed-signal neuromorphic architectures provide promising ultra-low-power solutions for edge-computing sensory-processing applications.
We present a set of mixed-signal analog/digital circuits that exploit the features of advanced Fully-Depleted Silicon on Insulator (FDSOI) integration processes.
arXiv Detail & Related papers (2020-06-25T09:31:29Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.