Application based Evaluation of an Efficient Spike-Encoder, "Spiketrum"
- URL: http://arxiv.org/abs/2405.15927v3
- Date: Fri, 31 May 2024 16:23:09 GMT
- Title: Application based Evaluation of an Efficient Spike-Encoder, "Spiketrum"
- Authors: MHD Anas Alsakkal, Runze Wang, Jayawan Wijekoon, Huajin Tang,
- Abstract summary: Spike-based encoders represent information as sequences of spikes or pulses, which are transmitted between neurons.
The Spiketrum encoder efficiently compresses input data using spike trains or code sets.
The paper extensively benchmarks both Spiketrum hardware and its software counterpart against state-of-the-art, biologically-plausible encoders.
- Score: 11.614817134975247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spike-based encoders represent information as sequences of spikes or pulses, which are transmitted between neurons. A prevailing consensus suggests that spike-based approaches demonstrate exceptional capabilities in capturing the temporal dynamics of neural activity and have the potential to provide energy-efficient solutions for low-power applications. The Spiketrum encoder efficiently compresses input data using spike trains or code sets (for non-spiking applications) and is adaptable to both hardware and software implementations, with lossless signal reconstruction capability. The paper proposes and assesses Spiketrum's hardware, evaluating its output under varying spike rates and its classification performance with popular spiking and non-spiking classifiers, and also assessing the quality of information compression and hardware resource utilization. The paper extensively benchmarks both Spiketrum hardware and its software counterpart against state-of-the-art, biologically-plausible encoders. The evaluations encompass benchmarking criteria, including classification accuracy, training speed, and sparsity when using encoder outputs in pattern recognition and classification with both spiking and non-spiking classifiers. Additionally, they consider encoded output entropy and hardware resource utilization and power consumption of the hardware version of the encoders. Results demonstrate Spiketrum's superiority in most benchmarking criteria, making it a promising choice for various applications. It efficiently utilizes hardware resources with low power consumption, achieving high classification accuracy. This work also emphasizes the potential of encoders in spike-based processing to improve the efficiency and performance of neural computing systems.
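Spiketrum's actual compression scheme is not spelled out in the abstract. As a rough illustration of the encode-and-reconstruct idea behind spike-based encoders, here is a minimal send-on-delta encoder in plain Python; this is a generic technique, not Spiketrum itself, and the threshold and test signal are arbitrary choices:

```python
import math

def delta_encode(signal, threshold):
    """Send-on-delta encoding: emit a +1/-1 spike each time the input
    drifts a full `threshold` away from the last reconstructed level."""
    events = []
    level = signal[0]
    for i in range(1, len(signal)):
        while signal[i] - level >= threshold:
            level += threshold
            events.append((i, +1))
        while level - signal[i] >= threshold:
            level -= threshold
            events.append((i, -1))
    return events

def delta_decode(events, first_sample, length, threshold):
    """Rebuild a staircase approximation of the signal from the spikes."""
    out = [first_sample] * length
    level = first_sample
    j = 0
    for i in range(1, length):
        # apply every spike stamped with this sample index
        while j < len(events) and events[j][0] == i:
            level += events[j][1] * threshold
            j += 1
        out[i] = level
    return out

sig = [math.sin(2 * math.pi * 3 * n / 100) for n in range(100)]
spikes = delta_encode(sig, threshold=0.1)
recon = delta_decode(spikes, sig[0], len(sig), threshold=0.1)
err = max(abs(a - b) for a, b in zip(sig, recon))
print(len(spikes), err)  # reconstruction error stays below one threshold step
```

Slowly varying inputs produce few spikes, which is the sparsity property the benchmarks in the abstract measure.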
Related papers
- A PyTorch-Compatible Spike Encoding Framework for Energy-Efficient Neuromorphic Applications [36.52207669429316]
Spiking Neural Networks (SNNs) offer promising energy efficiency advantages, particularly when processing sparse spike trains.
This paper introduces a novel, open-source PyTorch-compatible Python framework for spike encoding.
arXiv Detail & Related papers (2025-04-15T09:50:03Z) - Efficient Evaluation of Quantization-Effects in Neural Codecs [4.897318643396687]
Training neural codecs requires techniques to allow a non-zero gradient across the quantizer.
This paper proposes an efficient evaluation framework for neural codecs using simulated data.
We validate our findings against an internal neural audio codec and against the state-of-the-art descript-audio-codec.
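The framework's own methodology is not detailed in this summary. A toy sketch of the general idea it describes, evaluating quantization effects on simulated data rather than on a trained codec, might look as follows (the probe signal and step sizes are made-up stand-ins):

```python
import math

def uniform_quantize(x, step):
    """Mid-tread uniform quantizer: round to the nearest grid point."""
    return step * round(x / step)

def snr_db(signal, quantized):
    """Signal-to-quantization-noise ratio in dB."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_err = sum((s - q) ** 2 for s, q in zip(signal, quantized)) / len(signal)
    return 10.0 * math.log10(p_sig / p_err)

# Simulated probe signal standing in for a trained codec's latents.
sig = [math.sin(2 * math.pi * 5 * n / 1000) for n in range(1000)]
for step in (0.5, 0.1, 0.02):
    q = [uniform_quantize(s, step) for s in sig]
    print(f"step={step}: SNR = {snr_db(sig, q):.1f} dB")
```

Finer steps yield higher SNR, which is the kind of quantization effect such an evaluation would sweep over.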
arXiv Detail & Related papers (2025-02-07T09:11:19Z) - Robust online reconstruction of continuous-time signals from a lean spike train ensemble code [0.0]
This paper presents a signal processing framework that deterministically encodes continuous-time signals into spike trains.
It addresses the questions about representable signal classes and reconstruction bounds.
Experiments on a large audio dataset demonstrate excellent reconstruction accuracy at spike rates as low as one-fifth of the Nyquist rate.
arXiv Detail & Related papers (2024-08-12T06:55:51Z) - AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer [54.713778961605115]
Vision Transformer (ViT) has become one of the most prevailing fundamental backbone networks in the computer vision community.
We propose a novel non-uniform quantizer, dubbed the Adaptive Logarithm (AdaLog) quantizer.
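AdaLog's adaptive scheme is not given in this summary. The sketch below shows only the plain base-2 logarithmic quantization it builds on, assuming inputs normalized to [-1, 1] (an assumption of this sketch, not a statement about AdaLog):

```python
import math

def log2_quantize(x, n_bits=4):
    """Base-2 logarithmic quantizer: keep the sign, round the base-2
    exponent of the magnitude to the nearest integer power of two.
    Assumes inputs normalized to [-1, 1]."""
    if x == 0.0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    exp = round(math.log2(abs(x)))
    # clamp to the exponent range representable with n_bits levels
    exp = max(min(exp, 0), -(2 ** n_bits - 1))
    return sign * 2.0 ** exp

# Nearby magnitudes snap to the same power-of-two level.
print(log2_quantize(0.3), log2_quantize(0.26), log2_quantize(-0.3))
```

The non-uniform spacing puts more levels near zero, where post-softmax values in ViTs concentrate.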
arXiv Detail & Related papers (2024-07-17T18:38:48Z) - Spiketrum: An FPGA-based Implementation of a Neuromorphic Cochlea [0.0]
This paper presents a novel FPGA-based neuromorphic cochlea, leveraging the general-purpose spike-coding algorithm, Spiketrum.
The cochlea model excels in transforming audio vibrations into biologically realistic auditory spike trains.
The cochlea's ability to encode diverse sensory information, extending beyond sound waveforms, positions it as a promising sensory input for current and future spike-based intelligent computing systems.
arXiv Detail & Related papers (2024-05-24T20:31:47Z) - Kernel Alignment for Quantum Support Vector Machines Using Genetic Algorithms [0.0]
We leverage the GASP (Genetic Algorithm for State Preparation) framework for gate sequence selection in QSVM kernel circuits.
Benchmarking against classical and quantum kernels reveals GA-generated circuits matching or surpassing standard techniques.
Our automated framework reduces trial and error, and enables improved QSVM based machine learning performance for finance, healthcare, and materials science applications.
arXiv Detail & Related papers (2023-12-04T01:36:26Z) - Noise-Robust Dense Retrieval via Contrastive Alignment Post Training [89.29256833403167]
Contrastive Alignment POst Training (CAPOT) is a highly efficient finetuning method that improves model robustness without requiring index regeneration.
CAPOT enables robust retrieval by freezing the document encoder while the query encoder learns to align noisy queries with their unaltered root.
We evaluate CAPOT on noisy variants of MSMARCO, Natural Questions, and TriviaQA passage retrieval, finding that CAPOT has a similar impact as data augmentation with none of its overhead.
arXiv Detail & Related papers (2023-04-06T22:16:53Z) - Efficient spike encoding algorithms for neuromorphic speech recognition [5.182266520875928]
Spiking Neural Networks (SNNs) are very effective for neuromorphic processor implementations.
Real-valued signals, however, are not well-suited to SNNs and must first be encoded as spike trains.
In this paper, we study four spike encoding methods in the context of a speaker independent digit classification system.
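The four encoding methods studied are not named in this summary. One common spike-encoding scheme for such speech front-ends is time-to-first-spike (latency) coding, sketched here with hypothetical filterbank energies:

```python
def latency_encode(values, t_max=100, v_min=0.0, v_max=1.0):
    """Time-to-first-spike coding: stronger inputs fire earlier.

    Returns one spike time per channel, or None for silent channels.
    """
    times = []
    for v in values:
        if v <= v_min:
            times.append(None)  # below threshold: no spike at all
            continue
        frac = (min(v, v_max) - v_min) / (v_max - v_min)
        times.append(round((1.0 - frac) * t_max))
    return times

# Hypothetical filterbank energies for one speech frame.
frame = [0.9, 0.2, 0.0, 0.55]
print(latency_encode(frame))  # the strongest channel spikes first
```

At most one spike per channel per frame, which is what makes latency codes attractive for sparse, low-power inference.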
arXiv Detail & Related papers (2022-07-14T17:22:07Z) - Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy on such tasks, but their implementation on conventional embedded solutions is still very computationally and energy expensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through letters reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%; however, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z) - ZippyPoint: Fast Interest Point Detection, Description, and Matching through Mixed Precision Discretization [71.91942002659795]
We investigate and adapt network quantization techniques to accelerate inference and enable its use on compute limited platforms.
ZippyPoint, our efficient quantized network with binary descriptors, improves the network runtime speed, the descriptor matching speed, and the 3D model size.
These improvements come at a minor performance degradation as evaluated on the tasks of homography estimation, visual localization, and map-free visual relocalization.
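Binary descriptors like ZippyPoint's are typically matched with Hamming distance, which is what makes descriptor matching fast. A minimal brute-force matcher, using toy 8-bit descriptors rather than ZippyPoint's real ones, could look like:

```python
def hamming(a, b):
    """Bit-level distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match(descs_a, descs_b):
    """Brute-force nearest-neighbour matching under Hamming distance."""
    return [min(range(len(descs_b)), key=lambda j: hamming(d, descs_b[j]))
            for d in descs_a]

# Toy 8-bit descriptors; real ones are hundreds of bits wide.
a = [0b10110010, 0b00001111]
b = [0b00001110, 0b10110011, 0b11110000]
print(match(a, b))  # index of the closest descriptor in b for each in a
```

XOR-plus-popcount is a handful of instructions per comparison, versus a floating-point dot product for real-valued descriptors.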
arXiv Detail & Related papers (2022-03-07T18:59:03Z) - Bioinspired Cortex-based Fast Codebook Generation [0.09449650062296822]
We introduce a feature extraction method inspired by sensory cortical networks in the brain.
Dubbed the bioinspired cortex, the algorithm converges to features from streaming signals with superior computational efficiency.
We show herein the superior performance of the cortex model in clustering and vector quantization.
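The cortex model itself is not specified in this summary. As background for the task it addresses, plain vector quantization, assigning each input vector to its nearest codeword, can be sketched as follows (codebook and data are made up):

```python
def vq_assign(vectors, codebook):
    """Vector quantization: index of the nearest codeword (squared
    Euclidean distance) for every input vector."""
    def sqdist(u, w):
        return sum((x - y) ** 2 for x, y in zip(u, w))
    return [min(range(len(codebook)), key=lambda k: sqdist(v, codebook[k]))
            for v in vectors]

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
data = [(0.1, -0.2), (0.9, 1.1), (0.2, 0.8)]
print(vq_assign(data, codebook))  # each point maps to its closest codeword
```

Codebook generation is then the problem of choosing those codewords well, which is where the cortex model claims its efficiency gains.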
arXiv Detail & Related papers (2022-01-28T18:37:43Z) - Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
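The paper applies quantization-aware pruning during training; the post-hoc sketch below only composes the two basic operations, magnitude pruning followed by uniform quantization, to show what is being combined (the weights, keep ratio, and bit width are illustrative):

```python
def prune_and_quantize(weights, keep_ratio=0.5, n_bits=4):
    """Magnitude-prune to the largest `keep_ratio` fraction of weights,
    then uniformly quantize the survivors to symmetric n_bits levels.
    Assumes at least one weight is non-zero."""
    n_keep = max(1, int(len(weights) * keep_ratio))
    cutoff = sorted((abs(w) for w in weights), reverse=True)[n_keep - 1]
    pruned = [w if abs(w) >= cutoff else 0.0 for w in weights]
    step = max(abs(w) for w in pruned) / (2 ** (n_bits - 1) - 1)
    return [step * round(w / step) for w in pruned]

w = [0.8, -0.05, 0.3, 0.01, -0.6, 0.12]
print(prune_and_quantize(w))  # small weights zeroed, the rest snapped to a grid
```

The paper's finding is that interleaving these with training beats applying either transformation alone after the fact.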
arXiv Detail & Related papers (2021-02-22T19:00:05Z) - ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks [63.91384986073851]
We propose the autoencoder-based low-rank filter-sharing technique (ALF).
ALF shows a reduction of 70% in network parameters, 61% in operations and 41% in execution time, with minimal loss in accuracy.
arXiv Detail & Related papers (2020-07-27T09:01:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.