A complete, parallel and autonomous photonic neural network in a
semiconductor multimode laser
- URL: http://arxiv.org/abs/2012.11153v1
- Date: Mon, 21 Dec 2020 07:03:43 GMT
- Title: A complete, parallel and autonomous photonic neural network in a
semiconductor multimode laser
- Authors: Xavier Porte, Anas Skalli, Nasibeh Haghighi, Stephan Reitzenstein,
James A. Lott, Daniel Brunner
- Abstract summary: We show how a fully parallel and fully implemented photonic neural network can be realized using spatially distributed modes of an efficient and fast semiconductor laser.
Importantly, all neural network connections are realized in hardware, and our processor produces results without pre- or post-processing.
We train the readout weights to perform 2-bit header recognition, a 2-bit XOR and 2-bit digital-to-analog conversion, and obtain < 0.9 × 10^-3 and 2.9 × 10^-2 error rates for header recognition and XOR, respectively.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks are one of the disruptive computing concepts of our time.
However, they differ from classical, algorithmic computing in a number of
fundamental aspects, and these differences result in equally fundamental,
severe and relevant challenges for neural network computing on current
computing substrates. Neural networks demand parallelism across the entire
processor and co-location of memory and arithmetic, i.e. architectures beyond
the von Neumann model. This demand for parallelism in particular has made
photonics a highly promising platform, yet scalable and integrable concepts have so far remained
scarce. Here, we demonstrate for the first time how a fully parallel and fully
implemented photonic neural network can be realized using spatially distributed
modes of an efficient and fast semiconductor laser. Importantly, all neural
network connections are realized in hardware, and our processor produces
results without pre- or post-processing. 130+ nodes are implemented in a
large-area vertical cavity surface emitting laser, input and output weights are
realized via the complex transmission matrix of a multimode fiber and a digital
micro-mirror array, respectively. We train the readout weights to perform 2-bit
header recognition, a 2-bit XOR and 2-bit digital-to-analog conversion, and obtain
< 0.9 × 10^-3 and 2.9 × 10^-2 error rates for header recognition and XOR,
respectively. Finally, the digital-to-analog conversion can be realized with a
standard deviation of only 5.4 × 10^-2. Our system is scalable to much larger
sizes and to bandwidths in excess of 20 GHz.
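Conceptually, the processor described in the abstract follows an extreme-learning-machine-like pipeline: a fixed random input mapping (the complex transmission matrix of the multimode fiber), a fixed nonlinear node layer (the spatially distributed VCSEL modes), and a trainable linear readout (the micro-mirror weights). The following NumPy sketch illustrates that pipeline on the 2-bit XOR task; the random complex matrix, the saturable nonlinearity and the ridge-regression readout are illustrative stand-ins, not the measured transmission matrices or the hardware learning rule of the experiment (a micro-mirror array restricts the physical readout to binary mirror states).

```python
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 130          # number of laser modes used as neurons (as in the paper)
N_IN = 2               # 2-bit input for the XOR task

# Fixed random input weights: stand-in for the multimode fiber's complex
# transmission matrix (assumption: complex Gaussian entries).
W_in = rng.normal(size=(N_NODES, N_IN)) + 1j * rng.normal(size=(N_NODES, N_IN))

def node_states(x):
    """Nonlinear node response to the injected field (illustrative saturable
    nonlinearity; the real response is set by the laser physics)."""
    intensity = np.abs(W_in @ x) ** 2
    return intensity / (1.0 + intensity)

# 2-bit XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Collect node states for all inputs, shape (4, N_NODES)
S = np.stack([node_states(x) for x in X])

# Train the readout weights by ridge regression (a stand-in for the hardware
# training of the binary micro-mirror readout).
lam = 1e-3
W_out = np.linalg.solve(S.T @ S + lam * np.eye(N_NODES), S.T @ y)

print("XOR targets:    ", y)
print("XOR predictions:", np.round(S @ W_out, 3))
```

In this sketch only `W_out` is trained; the input mapping and the node nonlinearity stay fixed, mirroring the hardware division between passive optics, the laser, and the adjustable readout.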
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - RF-Photonic Deep Learning Processor with Shannon-Limited Data Movement [0.0]
Optical neural networks (ONNs) are promising accelerators with ultra-low latency and energy consumption.
We introduce our multiplicative analog frequency transform ONN (MAFT-ONN) that encodes the data in the frequency domain.
We experimentally demonstrate the first hardware accelerator that computes fully-analog deep learning on raw RF signals.
arXiv Detail & Related papers (2022-07-08T16:37:13Z) - Single-Shot Optical Neural Network [55.41644538483948]
'Weight-stationary' analog optical and electronic hardware has been proposed to reduce the compute resources required by deep neural networks.
We present a scalable, single-shot-per-layer weight-stationary optical processor.
arXiv Detail & Related papers (2022-05-18T17:49:49Z) - Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through gradient descent.
We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds.
arXiv Detail & Related papers (2022-01-16T07:22:47Z) - Monolithic Silicon Photonic Architecture for Training Deep Neural
Networks with Direct Feedback Alignment [0.6501025489527172]
We propose on-chip training of neural networks enabled by a CMOS-compatible silicon photonic architecture.
Our scheme employs the direct feedback alignment training algorithm, which trains neural networks using error feedback through fixed random matrices rather than error backpropagation (see the sketch after this list).
We experimentally demonstrate training a deep neural network with the MNIST dataset using on-chip MAC operation results.
arXiv Detail & Related papers (2021-11-12T18:31:51Z) - Parallel Simulation of Quantum Networks with Distributed Quantum State
Management [56.24769206561207]
We identify requirements for parallel simulation of quantum networks and develop the first parallel discrete event quantum network simulator.
Our contributions include the design and development of a quantum state manager that maintains shared quantum information distributed across multiple processes.
We release the parallel SeQUeNCe simulator as an open-source tool alongside the existing sequential version.
arXiv Detail & Related papers (2021-11-06T16:51:17Z) - BEANNA: A Binary-Enabled Architecture for Neural Network Acceleration [0.0]
This paper proposes and evaluates a neural network hardware accelerator capable of processing both floating point and binary network layers.
Running at a clock speed of 100 MHz, BEANNA achieves a peak throughput of 52.8 GigaOps/second.
arXiv Detail & Related papers (2021-08-04T23:17:34Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Inference with Artificial Neural Networks on Analog Neuromorphic
Hardware [0.0]
The BrainScaleS-2 ASIC comprises mixed-signal neuron and synapse circuits.
The system can also operate in a vector-matrix multiplication and accumulation mode for artificial neural networks.
arXiv Detail & Related papers (2020-06-23T17:25:06Z) - One-step regression and classification with crosspoint resistive memory
arrays [62.997667081978825]
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of predicting Boston house prices and of training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
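The direct feedback alignment algorithm referenced in the silicon-photonics entry above replaces backpropagation's transposed-weight error path with fixed random feedback matrices that project the output error directly to each hidden layer, which is what makes it attractive for in-hardware training. Below is a minimal NumPy sketch of the update rule on a toy XOR task; the network sizes, learning rate and squared loss are illustrative assumptions and not taken from that paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task and layer sizes (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
n_in, n_h1, n_h2, n_out = 2, 16, 16, 1

# Forward weights (trained)...
W1 = rng.normal(size=(n_h1, n_in));  b1 = np.zeros(n_h1)
W2 = rng.normal(size=(n_h2, n_h1));  b2 = np.zeros(n_h2)
W3 = rng.normal(size=(n_out, n_h2)); b3 = np.zeros(n_out)

# ...and fixed random feedback matrices that replace the transposed forward
# weights used by backpropagation; they are never updated.
B1 = rng.normal(size=(n_out, n_h1))
B2 = rng.normal(size=(n_out, n_h2))

def forward(x):
    h1 = sigmoid(x @ W1.T + b1)
    h2 = sigmoid(h1 @ W2.T + b2)
    return h1, h2, sigmoid(h2 @ W3.T + b3)

lr = 0.5
for step in range(10000):
    h1, h2, out = forward(X)
    d3 = (out - y) * out * (1 - out)      # output delta (squared loss)

    # Direct feedback alignment: the output delta is sent straight to every
    # hidden layer through the fixed random matrices B1 and B2.
    d2 = (d3 @ B2) * h2 * (1 - h2)
    d1 = (d3 @ B1) * h1 * (1 - h1)

    W3 -= lr * d3.T @ h2 / len(X); b3 -= lr * d3.mean(axis=0)
    W2 -= lr * d2.T @ h1 / len(X); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * d1.T @ X  / len(X); b1 -= lr * d1.mean(axis=0)

print("targets:    ", y.ravel())
print("predictions:", np.round(forward(X)[2].ravel(), 2))
```

Because no error signal has to travel backwards through the forward weights, each layer's update depends only on its local activity and the broadcast output error, which is the property the cited work exploits for on-chip photonic training.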