Neuromorphic Computing: A Theoretical Framework for Time, Space, and Energy Scaling
- URL: http://arxiv.org/abs/2507.17886v1
- Date: Wed, 23 Jul 2025 19:28:23 GMT
- Title: Neuromorphic Computing: A Theoretical Framework for Time, Space, and Energy Scaling
- Authors: James B Aimone
- Abstract summary: We show how NMC should be seen as general-purpose and programmable. We show that the time and space scaling of NMC is equivalent to that of a theoretically infinite-processor conventional system. The unique characteristics of NMC architectures make it well suited for different classes of algorithms.
- Score: 0.174048653626208
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neuromorphic computing (NMC) is increasingly viewed as a low-power alternative to conventional von Neumann architectures such as central processing units (CPUs) and graphics processing units (GPUs); however, its computational value proposition has been difficult to define precisely. Here, we explain how NMC should be seen as general-purpose and programmable even though it differs considerably from a conventional stored-program architecture. We show that the time and space scaling of NMC is equivalent to that of a theoretically infinite-processor conventional system; the energy scaling, however, is significantly different. Specifically, the energy of conventional systems scales with absolute algorithm work, whereas the energy of neuromorphic systems scales with the derivative of algorithm state. The unique characteristics of NMC architectures make it well suited for different classes of algorithms than conventional multi-core systems like GPUs, which have been optimized for dense numerical applications such as linear algebra. In contrast, NMC is ideally suited for scalable and sparse algorithms whose activity is proportional to an objective function, such as iterative optimization and large-scale sampling (e.g., Monte Carlo).
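The energy-scaling contrast in the abstract can be illustrated with a toy model: a conventional processor spends energy on every operation each step (absolute work), while a neuromorphic system spends energy only on spike events, i.e., on state *changes*. The sketch below is a minimal illustration under assumed unit costs (`E_OP`, `E_SPIKE` are hypothetical, not measured hardware figures), using sparse random walkers as the workload.

```python
import random

E_OP = 1.0      # assumed energy per operation on a conventional processor
E_SPIKE = 1.0   # assumed energy per spike event on a neuromorphic chip


def simulate_walkers(n_nodes, n_walkers, n_steps, seed=0):
    """Random walk on a ring graph; returns energy under both toy models."""
    rng = random.Random(seed)
    positions = [rng.randrange(n_nodes) for _ in range(n_walkers)]
    conventional_energy = 0.0
    neuromorphic_energy = 0.0
    for _ in range(n_steps):
        # Conventional model: a dense update touches every node's state
        # each step, so energy tracks absolute algorithm work.
        conventional_energy += n_nodes * E_OP
        # Neuromorphic model: energy is spent only where state changes,
        # i.e., one spike per walker that moves this step.
        for i in range(n_walkers):
            positions[i] = (positions[i] + rng.choice((-1, 1))) % n_nodes
            neuromorphic_energy += E_SPIKE
    return conventional_energy, neuromorphic_energy
```

For a sparse workload (few walkers on a large graph), the neuromorphic energy depends only on the number of state changes, while the conventional energy grows with graph size regardless of activity, which is the scaling difference the paper formalizes.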
Related papers
- Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems [49.819436680336786]
We propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive implicit process prior that captures complex, non-stationary transition dynamics. Our ETGPSSM outperforms existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy.
arXiv Detail & Related papers (2025-03-24T03:19:45Z) - Optimised Hybrid Classical-Quantum Algorithm for Accelerated Solution of Sparse Linear Systems [0.0]
This paper introduces a hybrid classical-quantum algorithm that combines preconditioning techniques with the HHL algorithm to solve sparse linear systems more efficiently.
We show that the proposed approach not only surpasses traditional methods in speed and scalability but also mitigates some of the inherent limitations of quantum algorithms.
arXiv Detail & Related papers (2024-10-03T11:36:14Z) - Randomized Polar Codes for Anytime Distributed Machine Learning [66.46612460837147]
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations.
We propose a sequential decoding algorithm designed to handle real-valued data while maintaining low computational complexity for recovery.
We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization.
arXiv Detail & Related papers (2023-09-01T18:02:04Z) - Open the box of digital neuromorphic processor: Towards effective algorithm-hardware co-design [0.08431877864777441]
We present a practical approach to enable algorithm designers to accurately benchmark SNN algorithms.
We show the energy efficiency of SNN algorithms for video processing and online learning.
arXiv Detail & Related papers (2023-03-27T14:03:11Z) - A Heterogeneous Parallel Non-von Neumann Architecture System for Accurate and Efficient Machine Learning Molecular Dynamics [9.329011150399726]
This paper proposes a special-purpose system to achieve high-accuracy and high-efficiency machine learning (ML) calculations.
The system consists of field programmable gate array (FPGA) and application specific integrated circuit (ASIC) working in heterogeneous parallelization.
arXiv Detail & Related papers (2023-03-26T05:43:49Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Stochastic Neuromorphic Circuits for Solving MAXCUT [0.6067748036747219]
Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development.
Neuromorphic computing uses the organizing principles of the nervous system to inspire new parallel computing architectures.
arXiv Detail & Related papers (2022-10-05T22:37:36Z) - Automatic and effective discovery of quantum kernels [41.61572387137452]
Quantum computing can empower machine learning models by enabling kernel machines to leverage quantum kernels for representing similarity measures between data. We present an approach to this problem, which employs optimization techniques, similar to those used in neural architecture search and AutoML. The results obtained by testing our approach on a high-energy physics problem demonstrate that, in the best-case scenario, we can either match or improve testing accuracy with respect to the manual design approach.
arXiv Detail & Related papers (2022-09-22T16:42:14Z) - Decomposition of Matrix Product States into Shallow Quantum Circuits [62.5210028594015]
Tensor network (TN) algorithms can be mapped to parametrized quantum circuits (PQCs).
We propose a new protocol for approximating TN states using realistic quantum circuits.
Our results reveal one particular protocol, involving sequential growth and optimization of the quantum circuit, to outperform all other methods.
arXiv Detail & Related papers (2022-09-01T17:08:41Z) - Neuromorphic scaling advantages for energy-efficient random walk computation [0.28144129864580447]
Neuromorphic computing aims to replicate the brain's computational structure and architecture in man-made hardware.
We show that the high-degree parallelism and configurability of spiking neuromorphic architectures make them well suited to implement random walks via discrete-time Markov chains.
We find that NMC platforms, at a sufficient scale, can drastically reduce the energy demands of high-performance computing platforms.
arXiv Detail & Related papers (2021-07-27T19:44:33Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Fixed Depth Hamiltonian Simulation via Cartan Decomposition [59.20417091220753]
We present a constructive algorithm for generating quantum circuits with time-independent depth.
We highlight our algorithm for special classes of models, including Anderson localization in the one-dimensional transverse-field XY model.
In addition to providing exact circuits for a broad set of spin and fermionic models, our algorithm provides broad analytic and numerical insight into optimal Hamiltonian simulations.
arXiv Detail & Related papers (2021-04-01T19:06:00Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.