Bottom-up and top-down approaches for the design of neuromorphic
processing systems: Tradeoffs and synergies between natural and artificial
intelligence
- URL: http://arxiv.org/abs/2106.01288v2
- Date: Fri, 12 May 2023 22:20:46 GMT
- Title: Bottom-up and top-down approaches for the design of neuromorphic
processing systems: Tradeoffs and synergies between natural and artificial
intelligence
- Authors: Charlotte Frenkel, David Bol, Giacomo Indiveri
- Abstract summary: While Moore's law has driven exponential computing-power expectations, its nearing end calls for new avenues for improving overall system performance.
One of these avenues is the exploration of alternative brain-inspired computing architectures that aim at achieving the flexibility and computational efficiency of biological neural processing systems.
We provide a comprehensive overview of the field, highlighting the different levels of granularity at which this paradigm shift is realized.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Moore's law has driven exponential computing power expectations, its
nearing end calls for new avenues for improving the overall system performance.
One of these avenues is the exploration of alternative brain-inspired computing
architectures that aim at achieving the flexibility and computational
efficiency of biological neural processing systems. Within this context,
neuromorphic engineering represents a paradigm shift in computing based on the
implementation of spiking neural network architectures in which processing and
memory are tightly co-located. In this paper, we provide a comprehensive
overview of the field, highlighting the different levels of granularity at
which this paradigm shift is realized and comparing design approaches that
focus on replicating natural intelligence (bottom-up) versus those that aim at
solving practical artificial intelligence applications (top-down). First, we
present the analog, mixed-signal and digital circuit design styles, identifying
the boundary between processing and memory through time multiplexing, in-memory
computation, and novel devices. Then, we highlight the key tradeoffs for each
of the bottom-up and top-down design approaches, survey their silicon
implementations, and carry out detailed comparative analyses to extract design
guidelines. Finally, we identify necessary synergies and missing elements
required to achieve a competitive advantage for neuromorphic systems over
conventional machine-learning accelerators in edge computing applications, and
outline the key ingredients for a framework toward neuromorphic intelligence.
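The spiking neural networks discussed in the abstract communicate through discrete events rather than continuous activations. As a rough illustration of the event-driven principle (not code from the paper), the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, a common building block in neuromorphic systems; the parameter values are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameters (threshold, reset, leak) are illustrative, not from the paper.
def simulate_lif(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Return a spike train (0/1 per timestep) for a sequence of input currents."""
    v = 0.0                       # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= v_thresh:         # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset           # reset membrane potential after spiking
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input produces sparse, event-driven output:
print(simulate_lif([0.5] * 6))
```

Because the neuron only emits events when its integrated state crosses threshold, downstream computation can stay idle between spikes, which is the basis of the energy-efficiency claims made for neuromorphic hardware.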
Related papers
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances using generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- Spike-based Neuromorphic Computing for Next-Generation Computer Vision [1.2367795537503197]
Neuromorphic computing promises orders-of-magnitude improvements in energy efficiency compared to the traditional von Neumann computing paradigm.
The goal is to develop an adaptive, fault-tolerant, low-footprint, fast, low-energy intelligent system by learning and emulating brain functionality.
arXiv Detail & Related papers (2023-10-15T01:05:35Z)
- NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems [51.8066436083197]
NeuroBench is a benchmark framework for neuromorphic computing algorithms and systems.
NeuroBench is a collaboratively-designed effort from an open community of nearly 100 co-authors across over 50 institutions in industry and academia.
arXiv Detail & Related papers (2023-04-10T15:12:09Z)
- Integration of Neuromorphic AI in Event-Driven Distributed Digitized Systems: Concepts and Research Directions [0.2746383075956081]
We describe the current landscape of neuromorphic computing, focusing on characteristics that pose integration challenges.
We propose a microservice-based framework for neuromorphic systems integration, consisting of a neuromorphic-system proxy.
We also present concepts that could serve as a basis for the realization of this framework.
arXiv Detail & Related papers (2022-10-20T12:09:29Z)
- A deep learning theory for neural networks grounded in physics [2.132096006921048]
We argue that building large, fast and efficient neural networks on neuromorphic architectures requires rethinking the algorithms to implement and train them.
Our framework applies to a very broad class of models, namely systems whose state or dynamics are described by variational equations.
arXiv Detail & Related papers (2021-03-18T02:12:48Z)
- Photonics for artificial intelligence and neuromorphic computing [52.77024349608834]
Photonic integrated circuits have enabled ultrafast artificial neural networks.
Photonic neuromorphic systems offer sub-nanosecond latencies.
These systems could address the growing demand for machine learning and artificial intelligence.
arXiv Detail & Related papers (2020-10-30T21:41:44Z)
- Ultra-Low-Power FDSOI Neural Circuits for Extreme-Edge Neuromorphic Intelligence [2.6199663901387997]
In-memory computing mixed-signal neuromorphic architectures provide promising ultra-low-power solutions for edge-computing sensory-processing applications.
We present a set of mixed-signal analog/digital circuits that exploit the features of advanced Fully-Depleted Silicon on Insulator (FDSOI) integration processes.
arXiv Detail & Related papers (2020-06-25T09:31:29Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.