Stochastic Neuromorphic Circuits for Solving MAXCUT
- URL: http://arxiv.org/abs/2210.02588v1
- Date: Wed, 5 Oct 2022 22:37:36 GMT
- Title: Stochastic Neuromorphic Circuits for Solving MAXCUT
- Authors: Bradley H. Theilman, Yipu Wang, Ojas D. Parekh, William Severa, J.
Darby Smith, James B. Aimone
- Abstract summary: Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development.
Neuromorphic computing uses the organizing principles of the nervous system to inspire new parallel computing architectures.
- Score: 0.6067748036747219
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem
that has motivated parallel algorithm development. While approximation algorithms
for MAXCUT offer attractive theoretical guarantees and demonstrate compelling
empirical performance, such approximation approaches can shift the dominant
computational cost to the stochastic sampling operations. Neuromorphic
computing, which uses the organizing principles of the nervous system to
inspire new parallel computing architectures, offers a possible solution. One
ubiquitous feature of natural brains is stochasticity: the individual elements
of biological neural networks possess an intrinsic randomness that serves as a
resource enabling their unique computational capacities. By designing circuits
and algorithms that make use of randomness similarly to natural brains, we
hypothesize that the intrinsic randomness in microelectronics devices could be
turned into a valuable component of a neuromorphic architecture enabling more
efficient computations. Here, we present neuromorphic circuits that transform
the stochastic behavior of a pool of random devices into useful correlations
that drive stochastic solutions to MAXCUT. We show that these circuits perform
favorably in comparison to software solvers and argue that this neuromorphic
hardware implementation provides a path for scaling advantages. This work
demonstrates the utility of combining neuromorphic principles with intrinsic
randomness as a computational resource for new computational architectures.
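To make the role of stochastic sampling concrete, the sketch below is a plain-software illustration (in Python) of a sampling-based MAXCUT heuristic in which each vertex's side of the cut is drawn from an independent random bit; it is not the paper's circuit design, and the function names, the best-of-N sampling strategy, and the toy graph are assumptions made for the example.

```python
import random

def cut_value(edges, assignment):
    """Count the edges that cross the cut defined by assignment (vertex -> 0/1)."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def sample_maxcut(edges, num_vertices, num_samples=1000, seed=0):
    """Draw many random cuts and keep the best one.

    Each vertex is assigned a side by an independent random bit, standing in
    for a pool of stochastic devices; in expectation a single sample already
    cuts half of the edges (the classic randomized 0.5-approximation).
    """
    rng = random.Random(seed)
    best_assignment, best_value = None, -1
    for _ in range(num_samples):
        assignment = [rng.randint(0, 1) for _ in range(num_vertices)]
        value = cut_value(edges, assignment)
        if value > best_value:
            best_assignment, best_value = assignment, value
    return best_assignment, best_value

# Toy usage: a 4-cycle, whose maximum cut has value 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(sample_maxcut(edges, num_vertices=4, num_samples=200))
```

The point of the sketch is that the dominant cost lies in producing random bits for every vertex in every sample; the paper's argument is that neuromorphic circuits built on intrinsically random devices can supply such samples, and usefully correlated ones, far more cheaply than a software random number generator.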
Related papers
- Topology Optimization of Random Memristors for Input-Aware Dynamic SNN [44.38472635536787]
We introduce a pruning optimization for an input-aware dynamic memristive spiking neural network (PRIME).
For signal representation, PRIME employs leaky integrate-and-fire neurons to emulate the brain's inherent spiking mechanism.
For reconfigurability, inspired by the brain's dynamic adjustment of computational depth, PRIME employs an input-aware dynamic early stop policy.
arXiv Detail & Related papers (2024-07-26T09:35:02Z)
- Voltage-Controlled Magnetoelectric Devices for Neuromorphic Diffusion Process [16.157882920146324]
We develop spintronic voltage-controlled magnetoelectric memory hardware for the neuromorphic diffusion process.
Together with the non-volatility of magnetic memory, we can achieve high-speed and low-cost computing.
arXiv Detail & Related papers (2024-07-17T02:14:22Z)
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution: software-hardware co-design using structural-plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z)
- Randomized Polar Codes for Anytime Distributed Machine Learning [66.46612460837147]
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations.
We propose a sequential decoding algorithm designed to handle real-valued data while maintaining low computational complexity for recovery.
We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization.
arXiv Detail & Related papers (2023-09-01T18:02:04Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Neuromorphic scaling advantages for energy-efficient random walk computation [0.28144129864580447]
Neuromorphic computing aims to replicate the brain's computational structure and architecture in man-made hardware.
We show that the high-degree parallelism and configurability of spiking neuromorphic architectures make them well suited to implementing random walks via discrete-time Markov chains.
We find that NMC platforms, at a sufficient scale, can drastically reduce the energy demands of high-performance computing platforms.
arXiv Detail & Related papers (2021-07-27T19:44:33Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF using a fast semidefinite solver.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Ultra-Low-Power FDSOI Neural Circuits for Extreme-Edge Neuromorphic Intelligence [2.6199663901387997]
In-memory computing mixed-signal neuromorphic architectures provide promising ultra-low-power solutions for edge-computing sensory-processing applications.
We present a set of mixed-signal analog/digital circuits that exploit the features of advanced Fully-Depleted Silicon on Insulator (FDSOI) integration processes.
arXiv Detail & Related papers (2020-06-25T09:31:29Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
- Solving a steady-state PDE using spiking networks and neuromorphic hardware [0.2698200916728782]
We leverage the parallel and event-driven structure of neuromorphic hardware to solve a steady-state heat equation using a random-walk method.
We position this algorithm as a potential scalable benchmark for neuromorphic systems.
arXiv Detail & Related papers (2020-05-21T21:06:19Z)
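The last entry above solves a steady-state heat equation by random walks; purely to illustrate that idea in ordinary software (this is not the paper's neuromorphic implementation, and the grid size, boundary values, and function name are assumptions for the example), the sketch below estimates the temperature at one grid point by averaging the boundary temperatures reached by independent random walkers.

```python
import random

def heat_at_point(x, y, boundary, num_walkers=2000, seed=0):
    """Estimate the steady-state temperature at interior grid point (x, y).

    For the discrete Laplace equation, the value at a node equals the expected
    boundary temperature reached by a simple random walk started there, so
    averaging over many independent walkers gives a Monte Carlo estimate.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_walkers):
        i, j = x, y
        while (i, j) not in boundary:  # walk until a boundary node is hit
            di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary[(i, j)]
    return total / num_walkers

# Toy usage: a 10x10 grid with the left edge held at temperature 1.0 and the
# remaining edges at 0.0; estimate the temperature near the hot edge.
size = 10
boundary = {(i, j): (1.0 if j == 0 else 0.0)
            for i in range(size) for j in range(size)
            if i in (0, size - 1) or j in (0, size - 1)}
print(heat_at_point(2, 2, boundary))
```

Because every walker is independent, the workload is embarrassingly parallel, which is why random-walk PDE solvers, like the sampling-based MAXCUT approach discussed in the abstract, are natural candidates for event-driven, massively parallel neuromorphic hardware.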
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.