Neuromorphic Nearest-Neighbor Search Using Intel's Pohoiki Springs
- URL: http://arxiv.org/abs/2004.12691v1
- Date: Mon, 27 Apr 2020 10:23:47 GMT
- Title: Neuromorphic Nearest-Neighbor Search Using Intel's Pohoiki Springs
- Authors: E. Paxon Frady, Garrick Orchard, David Florey, Nabil Imam, Ruokun Liu,
Joyesh Mishra, Jonathan Tse, Andreas Wild, Friedrich T. Sommer, Mike Davies
- Abstract summary: In the brain, billions of interconnected neurons perform rapid computations at extremely low energy levels.
Here, we showcase the Pohoiki Springs neuromorphic system, a mesh of 768 interconnected Loihi chips that collectively implement 100 million spiking neurons in silicon.
We demonstrate a scalable approximate k-nearest neighbor (k-NN) algorithm for searching large databases that exploits neuromorphic principles.
- Score: 3.571324984762197
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuromorphic computing applies insights from neuroscience to uncover
innovations in computing technology. In the brain, billions of interconnected
neurons perform rapid computations at extremely low energy levels by leveraging
properties that are foreign to conventional computing systems, such as temporal
spiking codes and finely parallelized processing units integrating both memory
and computation. Here, we showcase the Pohoiki Springs neuromorphic system, a
mesh of 768 interconnected Loihi chips that collectively implement 100 million
spiking neurons in silicon. We demonstrate a scalable approximate k-nearest
neighbor (k-NN) algorithm for searching large databases that exploits
neuromorphic principles. Compared to state-of-the-art conventional CPU-based
implementations, we achieve superior latency, index build time, and energy
efficiency when evaluated on several standard datasets containing over 1
million high-dimensional patterns. Further, the system supports adding new data
points to the indexed database online in O(1) time, unlike all but brute-force
conventional k-NN implementations.
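
The abstract does not spell out the search algorithm, but the neuromorphic principles it invokes (temporal spike codes, massively parallel units that combine memory and compute) map naturally onto a latency-coded k-NN. The sketch below is a minimal NumPy illustration of that idea only, not the authors' Loihi implementation: the query is encoded so that larger components spike earlier, each stored pattern acts as an integrate-and-fire unit, and the first units to cross threshold are reported as approximate neighbors. The function names, threshold, and time discretization are assumptions for exposition. It also shows the O(1)-style online insertion the abstract mentions: adding a point is just appending a row, with no index rebuild.

```python
# Minimal sketch of a latency-coded approximate k-NN search (cosine sense).
# Illustrative only: the encoding, threshold, and time discretization are
# assumptions for exposition, not the Pohoiki Springs / Loihi implementation.
import numpy as np

def build_index(patterns):
    # The "index" is just the unit-normalized pattern matrix, one row per point.
    patterns = np.asarray(patterns, dtype=float)
    return patterns / np.linalg.norm(patterns, axis=1, keepdims=True)

def add_point(index, x):
    # Online insertion: append one normalized row, no index rebuild.
    x = np.asarray(x, dtype=float)
    return np.vstack([index, x / np.linalg.norm(x)])

def knn_query(index, query, k, steps=64, threshold=0.8):
    # Latency coding: larger query components spike earlier. Each stored pattern
    # acts as an integrate-and-fire unit; the first units to cross threshold are
    # returned as approximate nearest neighbors.
    q = np.asarray(query, dtype=float)
    q = q / np.linalg.norm(q)
    # Map component magnitude to a spike time (assumes at least one positive component).
    spike_t = np.clip(np.floor((1.0 - q / q.max()) * (steps - 1)), 0, steps - 1).astype(int)
    v = np.zeros(index.shape[0])               # membrane potentials
    fire_t = np.full(index.shape[0], np.inf)   # first-spike times (inf = silent)
    for t in range(steps):
        arrived = spike_t == t                 # query dimensions spiking at this step
        if arrived.any():
            v += index[:, arrived] @ q[arrived]
        newly = (v >= threshold) & np.isinf(fire_t)
        fire_t[newly] = t
    # Rank by first-spike time, breaking ties (and silent units) by potential.
    order = np.lexsort((-v, fire_t))
    return order[:k]

rng = np.random.default_rng(0)
db = build_index(rng.normal(size=(10_000, 64)))   # toy database; the paper targets >1M points
db = add_point(db, rng.normal(size=64))           # O(1)-style online add
print(knn_query(db, db[123], k=5))                # the returned list should start with 123
```

In the toy run at the bottom, only the query's own stored copy crosses threshold, so the remaining ranks fall back to the accumulated potential, which equals the exact cosine similarity.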
Related papers
- Stochastic Spiking Neural Networks with First-to-Spike Coding [7.955633422160267]
Spiking Neural Networks (SNNs) are known for their bio-plausibility and energy efficiency.
In this work, we explore the merger of novel computing and information encoding schemes in SNN architectures.
We investigate the tradeoffs of our proposal in terms of accuracy, inference latency, spiking sparsity, and energy consumption across multiple datasets (a generic time-to-first-spike sketch follows the related-papers list below).
arXiv Detail & Related papers (2024-04-26T22:52:23Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Ultra-low-power Image Classification on Neuromorphic Hardware [3.784976081087904]
Spiking neural networks (SNNs) promise ultra-low-power applications by exploiting temporal and spatial sparsity.
Training such SNNs using backpropagation through time for vision tasks that rely mainly on spatial features is computationally costly.
We propose a temporal ANN-to-SNN conversion method, which we call Quartz, that is based on the time to first spike.
arXiv Detail & Related papers (2023-09-28T18:53:43Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems [51.8066436083197]
NeuroBench is a benchmark framework for neuromorphic computing algorithms and systems.
NeuroBench is a collaboratively designed effort from an open community of nearly 100 co-authors across over 50 institutions in industry and academia.
arXiv Detail & Related papers (2023-04-10T15:12:09Z)
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
SNNs require very long spike trains (up to 1000) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy in such tasks, but their implementation on conventional embedded solutions is still computationally and energy expensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through Braille letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- The BrainScaleS-2 accelerated neuromorphic system with hybrid plasticity [0.0]
We describe the second generation of the BrainScaleS neuromorphic architecture, emphasizing applications enabled by this architecture.
It combines a custom accelerator core supporting the accelerated physical emulation of bio-inspired spiking neural network primitives with a tightly coupled digital processor and a digital event-routing network.
arXiv Detail & Related papers (2022-01-26T17:13:46Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Compiling Spiking Neural Networks to Mitigate Neuromorphic Hardware Constraints [0.30458514384586394]
Spiking Neural Networks (SNNs) are efficient computation models for pattern recognition on resource- and power-constrained platforms.
SNNs executed on neuromorphic hardware can further reduce energy consumption of these platforms.
arXiv Detail & Related papers (2020-11-27T19:10:23Z)
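
Two entries above, the stochastic first-to-spike coding paper and Quartz, hinge on time-to-first-spike (TTFS) coding. As a generic illustration only, and not either paper's specific scheme, the sketch below (assumed names `T`, `encode_ttfs`, `decode_weighted_sum`) encodes non-negative activations as spike latencies and shows why a unit that integrates each input from its arrival time until a fixed readout time recovers a scaled weighted sum of the original activations.

```python
# Generic time-to-first-spike (TTFS) illustration; assumptions for exposition,
# not the exact scheme of the papers listed above.
import numpy as np

T = 100  # readout horizon (time steps)

def encode_ttfs(a):
    # Larger activation -> earlier spike: t_i = T * (1 - a_i / a_max).
    a = np.asarray(a, dtype=float)
    return np.round(T * (1.0 - a / a.max())).astype(int)

def decode_weighted_sum(spike_times, w):
    # A unit that integrates w_i from each input's spike time until T
    # accumulates w_i * (T - t_i), i.e. a scaled weighted sum of activations.
    return float(np.sum(w * (T - spike_times)))

a = np.array([0.1, 0.8, 0.4, 0.0])   # toy activations
w = np.array([0.5, -0.2, 1.0, 0.3])  # toy weights
t = encode_ttfs(a)
print(t)                              # the 0.8 activation spikes first (t = 0)
print(decode_weighted_sum(t, w))      # ~ (T / a.max()) * (w @ a), up to rounding
print(T / a.max() * (w @ a))          # reference value
```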
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.