Integrate-and-fire circuit for converting analog signals to spikes using
phase encoding
- URL: http://arxiv.org/abs/2310.02055v1
- Date: Tue, 3 Oct 2023 13:55:46 GMT
- Title: Integrate-and-fire circuit for converting analog signals to spikes using
phase encoding
- Authors: Javier Lopez-Randulfe, Nico Reeb and Alois Knoll
- Abstract summary: Two strategies are promising for achieving low energy consumption and fast processing speeds in end-to-end neuromorphic applications.
We propose an adaptive control of the refractory period of the leaky integrate-and-fire neuron model for encoding continuous analog signals into a train of time-coded spikes.
A digital neuromorphic chip processed the generated spike trains and computed the signal's frequency spectrum using a spiking version of the Fourier transform.
- Score: 4.485617023466674
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Processing sensor data with spiking neural networks on digital neuromorphic
chips requires converting continuous analog signals into spike pulses. Two
strategies are promising for achieving low energy consumption and fast
processing speeds in end-to-end neuromorphic applications. First, to directly
encode analog signals to spikes to bypass the need for an analog-to-digital
converter (ADC). Second, to use temporal encoding techniques to maximize the
spike sparsity, which is a crucial parameter for fast and efficient
neuromorphic processing. In this work, we propose an adaptive control of the
refractory period of the leaky integrate-and-fire (LIF) neuron model for
encoding continuous analog signals into a train of time-coded spikes. The
LIF-based encoder generates phase-encoded spikes that are compatible with
digital hardware. We implemented the neuron model on a physical circuit and
tested it with different electric signals. A digital neuromorphic chip
processed the generated spike trains and computed the signal's frequency
spectrum using a spiking version of the Fourier transform. We tested the
prototype circuit on electric signals up to 1 kHz. Thus, we provide an
end-to-end neuromorphic application that generates the frequency spectrum of an
electric signal without the need for an ADC or a digital signal processing
algorithm.
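To make the encoding scheme concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron whose refractory period is adapted so that each spike's latency inside a fixed encoding window phase-encodes the input amplitude. The control law and parameter values are illustrative assumptions, not the circuit described in the abstract.

```python
import numpy as np

def lif_phase_encode(signal, dt=1e-5, tau=1e-3, v_th=1.0, t_enc=1e-3):
    """Sketch of an LIF encoder whose refractory period is adapted so that each
    spike's latency inside a fixed encoding window (t_enc) phase-encodes the
    input amplitude. Parameters and control law are illustrative assumptions."""
    v = 0.0                     # membrane potential
    silent_until = 0.0          # end of the current (adaptive) refractory period
    spike_times = []
    for i, x in enumerate(signal):
        t = i * dt
        if t < silent_until:
            continue            # neuron held in its refractory state
        v += (dt / tau) * (x - v)          # leaky integration of the input
        if v >= v_th:
            spike_times.append(t)
            v = 0.0
            # adaptive refractory period: stay silent until the start of the
            # next encoding window, so spike latency = phase within the window
            silent_until = (np.floor(t / t_enc) + 1.0) * t_enc
    return np.array(spike_times)

# Example: a 100 Hz tone riding on a DC offset, sampled at 100 kHz
t = np.arange(0, 0.02, 1e-5)
spikes = lif_phase_encode(3.0 + np.sin(2 * np.pi * 100 * t))
phases = (spikes % 1e-3) / 1e-3   # spike phase within each 1 ms encoding window
```

Earlier spikes within a window correspond to larger input values, which is one simple way to realize the phase code on digital hardware.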
Related papers
- OFDM-Standard Compatible SC-NOFS Waveforms for Low-Latency and Jitter-Tolerance Industrial IoT Communications [53.398544571833135]
This work proposes a spectrally efficient irregular Sinc (irSinc) shaping technique, revisiting the traditional Sinc pulse dating back to 1924.
irSinc yields a signal with increased spectral efficiency without sacrificing error performance.
Our signal achieves faster data transmission within the same spectral bandwidth through 5G standard signal configuration.
arXiv Detail & Related papers (2024-06-07T09:20:30Z)
- DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor [2.9175555050594975]
We present a brain-inspired platform for prototyping real-time event-based Spiking Neural Networks (SNNs).
The system proposed supports the direct emulation of dynamic and realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike frequency adaptation, conductance-based dendritic compartments and spike transmission delays.
The flexibility to emulate different biologically plausible neural networks, and the chip's ability to monitor both population and single-neuron signals in real time, make it possible to develop and validate complex models of neural processing for both basic research and edge-computing applications.
arXiv Detail & Related papers (2023-10-01T03:48:16Z)
- SPAIC: A sub-$\mu$W/Channel, 16-Channel General-Purpose Event-Based Analog Front-End with Dual-Mode Encoders [6.6017549029623535]
Low-power event-based analog front-ends are crucial to build efficient neuromorphic processing systems.
We present a novel event-based analog front-end chip, denoted SPAIC (signal-to-spike converter for analog AI computation).
It offers a general-purpose dual-mode analog signal-to-spike encoding with delta modulation and pulse frequency modulation, with tunable frequency bands.
arXiv Detail & Related papers (2023-08-31T19:53:04Z)
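The SPAIC entry above names delta modulation as one of its two encoding modes. Below is a minimal Python sketch of send-on-delta spike encoding; the threshold and the UP/DOWN event representation are generic assumptions rather than details of the SPAIC chip.

```python
import numpy as np

def delta_modulation_encode(signal, delta=0.05):
    """Minimal sketch of delta-modulation (send-on-delta) spike encoding.
    The step size and event format are illustrative assumptions."""
    reference = signal[0]
    up_events, down_events = [], []
    for i, x in enumerate(signal[1:], start=1):
        while x - reference >= delta:     # signal rose by at least one delta step
            reference += delta
            up_events.append(i)
        while reference - x >= delta:     # signal fell by at least one delta step
            reference -= delta
            down_events.append(i)
    return np.array(up_events), np.array(down_events)

# Example: encode one period of a sine wave into UP/DOWN event streams
sig = np.sin(np.linspace(0, 2 * np.pi, 1000))
up, down = delta_modulation_encode(sig, delta=0.1)
```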
- Diffsound: Discrete Diffusion Model for Text-to-sound Generation [78.4128796899781]
We propose a novel text-to-sound generation framework that consists of a text encoder, a Vector Quantized Variational Autoencoder (VQ-VAE), a decoder, and a vocoder.
The framework first uses the decoder to transfer the text features extracted from the text encoder to a mel-spectrogram with the help of VQ-VAE, and then the vocoder is used to transform the generated mel-spectrogram into a waveform.
arXiv Detail & Related papers (2022-07-20T15:41:47Z)
- RF-Photonic Deep Learning Processor with Shannon-Limited Data Movement [0.0]
Optical neural networks (ONNs) are promising accelerators with ultra-low latency and energy consumption.
We introduce our multiplicative analog frequency transform ONN (MAFT-ONN) that encodes the data in the frequency domain.
We experimentally demonstrate the first hardware accelerator that performs fully-analog deep learning on raw RF signals.
arXiv Detail & Related papers (2022-07-08T16:37:13Z)
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy on such tasks, but their implementation on conventional embedded solutions is still very computationally and energy expensive.
We propose a new benchmark for tactile pattern recognition at the edge based on Braille letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
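The Braille benchmark above trains SNNs with back-propagation through time and surrogate gradients. The following PyTorch sketch shows one common form of a surrogate spike nonlinearity (a fast-sigmoid surrogate); the surrogate shape and slope are assumptions, not necessarily the choices made in that paper.

```python
import torch

class SpikeFastSigmoid(torch.autograd.Function):
    """Sketch of a surrogate-gradient spike nonlinearity for BPTT in SNNs;
    the fast-sigmoid surrogate and its slope are common, assumed choices."""

    slope = 10.0  # steepness of the surrogate derivative

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Forward pass: a hard threshold produces binary spikes
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Backward pass: replace the step function's zero-almost-everywhere
        # derivative with the derivative of a fast sigmoid
        surrogate = 1.0 / (SpikeFastSigmoid.slope * membrane_potential.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SpikeFastSigmoid.apply  # use inside an LIF cell unrolled over time
```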
- Time-coded Spiking Fourier Transform in Neuromorphic Hardware [4.432142139656578]
In this work, we propose a time-based spiking neural network that is mathematically equivalent to the Fourier transform.
We implemented the network in the neuromorphic chip Loihi and conducted experiments on five different real scenarios with an automotive frequency-modulated continuous-wave radar.
arXiv Detail & Related papers (2022-02-25T12:15:46Z)
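The time-coded spiking Fourier transform above, like the main paper's pipeline, computes a spectrum from phase-coded spikes. A conceptual NumPy sketch of the idea follows: each frequency bin accumulates the decoded samples against fixed DFT weights. The linear phase-to-value decoding is an assumption, and this is not the Loihi implementation.

```python
import numpy as np

def decode_phase(spike_phases, x_min=0.0, x_max=1.0):
    """Assume each spike's phase in [0, 1) encodes one sample linearly
    (early spike = large value). This mapping is an illustrative assumption."""
    return x_max - spike_phases * (x_max - x_min)

def spiking_dft_sketch(spike_phases):
    """Conceptual sketch of a time-coded spiking Fourier transform: decode the
    phase-coded samples, then let one output 'neuron' per frequency bin
    integrate them against fixed DFT weights. Plain NumPy, not neuromorphic
    hardware."""
    x = decode_phase(np.asarray(spike_phases))
    n = len(x)
    k = np.arange(n).reshape(-1, 1)            # frequency-bin index
    m = np.arange(n).reshape(1, -1)            # sample index
    weights = np.exp(-2j * np.pi * k * m / n)  # per-bin synaptic weight matrix
    return weights @ x                         # each bin sums its weighted inputs

# Example: 8 phase-coded spikes -> 8-point spectrum
spectrum = spiking_dft_sketch(np.random.rand(8))
```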
- Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z)
- An Adaptive Sampling and Edge Detection Approach for Encoding Static Images for Spiking Neural Networks [0.2519906683279152]
Spiking neural networks (SNNs) are considered to be the third generation of artificial neural networks.
We propose a method for encoding static images into temporal spike trains using edge detection and an adaptive signal sampling method.
arXiv Detail & Related papers (2021-10-19T19:31:52Z)
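The entry above encodes static images into spike trains via edge detection. A minimal latency-coding sketch follows, assuming a simple gradient-magnitude edge detector and a rule where stronger edges spike earlier; the operator, threshold, and time scale are illustrative, not the paper's exact method.

```python
import numpy as np

def edge_latency_encode(image, t_max=100.0, threshold=0.1):
    """Sketch of encoding a static image into spike times: detect edges with a
    gradient-magnitude operator, then assign earlier spike times to stronger
    edges (latency coding). All parameters are illustrative assumptions."""
    img = image.astype(float)
    gy, gx = np.gradient(img)                  # simple edge detector
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude = magnitude / magnitude.max()
    # Strong edges fire early, weak edges fire late, sub-threshold pixels never
    spike_times = np.where(magnitude >= threshold,
                           t_max * (1.0 - magnitude),
                           np.inf)
    return spike_times

# Example: a toy 32x32 image with a bright square on a dark background
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
times = edge_latency_encode(img)               # finite entries mark edge pixels
```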
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
Experiments on several benchmark datasets and neural architectures show that the binary network learned using our method achieves state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
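The entry above estimates the gradient of the sign function from a combination of sine functions, i.e., the derivative of a truncated Fourier series of a square wave. A short sketch of that surrogate gradient follows; the number of harmonics and the half-period are assumed hyper-parameters, not the paper's settings.

```python
import numpy as np

def sign_fourier_grad(x, n_terms=10, period_half=10.0):
    """Sketch: sign(x) ~ (4/pi) * sum over odd k of sin(k*pi*x/L)/k for |x| < L,
    so differentiating the truncated series gives a smooth surrogate gradient
    (4/L) * sum over odd k of cos(k*pi*x/L). Hyper-parameters are assumptions."""
    k = np.arange(1, 2 * n_terms, 2).reshape(-1, 1)   # odd harmonics 1, 3, 5, ...
    grads = (4.0 / period_half) * np.cos(k * np.pi * np.asarray(x) / period_half)
    return grads.sum(axis=0)

# Example: surrogate gradient used in place of d(sign)/dx during backprop
pre_activations = np.linspace(-3, 3, 7)
surrogate = sign_fourier_grad(pre_activations)
```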
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
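The last entry trains analog, energy-based networks with Equilibrium Propagation. Below is a rough single-hidden-layer sketch of the algorithm's two relaxation phases and its contrastive local weight updates, loosely following the standard Scellier & Bengio (2017) formulation; layer sizes, the nudging strength beta, and the learning rate are illustrative assumptions, not the resistive-network implementation of the paper.

```python
import numpy as np

def rho(s):                      # hard-sigmoid nonlinearity common in EqProp work
    return np.clip(s, 0.0, 1.0)

def rho_prime(s):
    return ((s > 0.0) & (s < 1.0)).astype(float)

def relax(x, h, y, W1, W2, target=None, beta=0.0, steps=50, dt=0.1):
    """Relax the free units (h, y) toward a minimum of the network energy,
    optionally nudged toward the target with strength beta."""
    for _ in range(steps):
        dh = -h + rho_prime(h) * (rho(x) @ W1 + rho(y) @ W2.T)
        dy = -y + rho_prime(y) * (rho(h) @ W2)
        if target is not None:
            dy += beta * (target - y)     # nudging term from the cost function
        h, y = h + dt * dh, y + dt * dy
    return h, y

def eqprop_step(x, target, W1, W2, beta=0.5, lr=0.05):
    """One Equilibrium Propagation update for a single hidden layer: a free
    phase, a nudged phase, and contrastive local weight updates. All
    hyper-parameters here are illustrative assumptions."""
    h0, y0 = relax(x, np.zeros(W1.shape[1]), np.zeros(W2.shape[1]), W1, W2)
    hb, yb = relax(x, h0, y0, W1, W2, target=target, beta=beta)
    W1 += lr / beta * (np.outer(rho(x), rho(hb)) - np.outer(rho(x), rho(h0)))
    W2 += lr / beta * (np.outer(rho(hb), rho(yb)) - np.outer(rho(h0), rho(y0)))
    return W1, W2

# Example: one update on random data (4 inputs, 8 hidden units, 2 outputs)
rng = np.random.default_rng(0)
W1, W2 = 0.1 * rng.standard_normal((4, 8)), 0.1 * rng.standard_normal((8, 2))
W1, W2 = eqprop_step(rng.random(4), np.array([1.0, 0.0]), W1, W2)
```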