RF-Photonic Deep Learning Processor with Shannon-Limited Data Movement
- URL: http://arxiv.org/abs/2207.06883v2
- Date: Thu, 6 Jun 2024 21:32:35 GMT
- Title: RF-Photonic Deep Learning Processor with Shannon-Limited Data Movement
- Authors: Ronald Davis III, Zaijun Chen, Ryan Hamerly, Dirk Englund
- Abstract summary: Optical neural networks (ONNs) are promising accelerators with ultra-low latency and energy consumption.
We introduce our multiplicative analog frequency transform ONN (MAFT-ONN) that encodes the data in the frequency domain.
We experimentally demonstrate the first hardware accelerator that computes fully-analog deep learning on raw RF signals.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Edholm's Law predicts exponential growth in data rate and spectrum bandwidth for communications and is forecasted to remain true for the upcoming deployment of 6G. Compounding this issue is the exponentially increasing demand for deep neural network (DNN) compute, including DNNs for signal processing. However, the slowing of Moore's Law due to the limitations of transistor-based electronics means that completely new computing paradigms will be required to meet these increasing demands for advanced communications. Optical neural networks (ONNs) are promising DNN accelerators with ultra-low latency and energy consumption. Yet state-of-the-art ONNs struggle with scalability and with implementing linear operations alongside in-line nonlinear operations. Here we introduce our multiplicative analog frequency transform ONN (MAFT-ONN), which encodes the data in the frequency domain, achieves matrix-vector products in a single shot using photoelectric multiplication, and uses a single electro-optic modulator for the nonlinear activation of all neurons in each layer. We experimentally demonstrate the first hardware accelerator that computes fully-analog deep learning on raw RF signals, performing single-shot modulation classification with 85% accuracy, where a 'majority vote' multi-measurement scheme boosts the accuracy to 95% within 5 consecutive measurements. In addition, we demonstrate frequency-domain finite impulse response (FIR) linear-time-invariant (LTI) operations, enabling a powerful combination of traditional and AI signal processing. We also demonstrate the scalability of our architecture by computing nearly 4 million fully-analog multiply-and-accumulate operations for MNIST digit classification. Our latency estimation model shows that, due to the Shannon capacity-limited analog data movement, MAFT-ONN is hundreds of times faster than traditional RF receivers operating at their theoretical peak performance.
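As a toy illustration of the single-shot frequency-domain matrix-vector product described above (a NumPy sketch, not the authors' hardware or code): inputs amplitude-modulate tones at distinct frequencies, each weight rides on an offset tone, and the difference-frequency components of the analog product recover W·x in one multiplication. The frequency plan, sample rate, and the independence assumption in the majority-vote estimate are illustrative choices, not taken from the paper.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
x = rng.uniform(-1, 1, n_in)          # input vector
W = rng.uniform(-1, 1, (n_out, n_in)) # weight matrix

fs = 100_000                          # sample rate (Hz), illustrative
t = np.arange(fs) / fs                # 1 s window -> 1 Hz FFT bins
f_in = 1000 + 100 * np.arange(n_in)   # one tone per input element
f_out = 10 + np.arange(n_out)         # difference frequencies, one per output

# Input signal: x_j amplitude-modulates the tone at f_in[j].
s_x = sum(x[j] * np.cos(2 * np.pi * f_in[j] * t) for j in range(n_in))
# Weight signal: W[i, j] rides on the offset tone f_in[j] + f_out[i].
s_w = sum(W[i, j] * np.cos(2 * np.pi * (f_in[j] + f_out[i]) * t)
          for i in range(n_out) for j in range(n_in))

# Analog multiplication (stand-in for photoelectric mixing): the
# difference-frequency term at f_out[i] has amplitude 0.5 * (W @ x)[i].
product = s_x * s_w
spec = np.fft.rfft(product) / len(t)  # bins are exactly 1 Hz apart
y_est = np.array([4 * spec[int(f)].real for f in f_out])
# y_est now approximates W @ x, computed "in a single shot"

# Majority vote over 5 measurements, assuming independent errors at
# 85% single-shot accuracy (the paper reports 95% in practice):
p1 = 0.85
p5 = sum(comb(5, k) * p1**k * (1 - p1)**(5 - k) for k in range(3, 6))
```

Because all tones sit on integer-Hz bins over a 1 s window, the FFT has no leakage and the cross terms land on separate bins, so the readout is exact up to floating-point error; real hardware instead contends with noise and finite modulator bandwidth.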
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Multiscale fusion enhanced spiking neural network for invasive BCI neural signal decoding [13.108613110379961]
This paper presents a novel approach utilizing a Multiscale Fusion Spiking Neural Network (MFSNN).
MFSNN emulates the parallel processing and multiscale feature fusion seen in human visual perception to enable real-time, efficient, and energy-conserving neural signal decoding.
MFSNN surpasses traditional artificial neural network methods, such as enhanced GRU, in both accuracy and computational efficiency.
arXiv Detail & Related papers (2024-09-14T09:53:30Z)
- Deep Learning for Low-Latency, Quantum-Ready RF Sensing [2.5393702482222813]
Recent work has shown the promise of applying deep learning to enhance software processing of radio frequency (RF) signals.
In this paper, we describe our implementations of quantum-ready machine learning approaches for RF signal classification.
arXiv Detail & Related papers (2024-04-27T17:22:12Z)
- 1-bit Quantized On-chip Hybrid Diffraction Neural Network Enabled by Authentic All-optical Fully-connected Architecture [4.594367761345624]
This study introduces the Hybrid Diffraction Neural Network (HDNN), a novel architecture that incorporates matrix multiplication into DNNs.
Utilizing a single phase modulation layer and an amplitude modulation layer, the trained neural network demonstrated remarkable accuracies of 96.39% and 89% in digit recognition tasks.
arXiv Detail & Related papers (2024-04-11T02:54:17Z)
- ADC/DAC-Free Analog Acceleration of Deep Neural Networks with Frequency Transformation [2.7488316163114823]
This paper proposes a novel approach to an energy-efficient acceleration of frequency-domain neural networks by utilizing analog-domain frequency-based tensor transformations.
Our approach achieves more compact cells by eliminating the need for trainable parameters in the transformation matrix.
On a 16×16 crossbar with 8-bit input processing, the proposed approach achieves an energy efficiency of 1602 tera-operations per second per watt.
arXiv Detail & Related papers (2023-09-04T19:19:39Z)
- Speed Limits for Deep Learning [67.69149326107103]
Recent advancement in thermodynamics allows bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Single-Shot Optical Neural Network [55.41644538483948]
'Weight-stationary' analog optical and electronic hardware has been proposed to reduce the compute resources required by deep neural networks.
We present a scalable, single-shot-per-layer weight-stationary optical processor.
arXiv Detail & Related papers (2022-05-18T17:49:49Z)
- Time-coded Spiking Fourier Transform in Neuromorphic Hardware [4.432142139656578]
In this work, we propose a time-based spiking neural network that is mathematically equivalent to the Fourier transform.
We implemented the network on the neuromorphic chip Loihi and conducted experiments on five different real scenarios with an automotive frequency-modulated continuous-wave radar.
arXiv Detail & Related papers (2022-02-25T12:15:46Z)
- Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston housing-price prediction and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.