The role of all-optical neural networks
- URL: http://arxiv.org/abs/2306.06632v2
- Date: Tue, 13 Jun 2023 08:12:20 GMT
- Title: The role of all-optical neural networks
- Authors: Michał Matuszewski, Adam Prystupiuk, Andrzej Opala
- Abstract summary: All-optical devices will be at an advantage in the case of inference in large neural network models.
We consider the limitations of all-optical neural networks including footprint, strength of nonlinearity, optical signal degradation, limited precision of computations, and quantum noise.
- Score: 2.3204178451683264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In light of recent achievements in optical computing and machine learning, we
consider the conditions under which all-optical computing may surpass
electronic and optoelectronic computing in terms of energy efficiency and
scalability. When considering the performance of a system as a whole, the cost
of memory access and data acquisition is likely to be one of the main
efficiency bottlenecks not only for electronic, but also for optoelectronic and
all-optical devices. However, we predict that all-optical devices will be at an
advantage in the case of inference in large neural network models, and the
advantage will be particularly large in the case of generative models. We also
consider the limitations of all-optical neural networks including footprint,
strength of nonlinearity, optical signal degradation, limited precision of
computations, and quantum noise.
Related papers
- Optical training of large-scale Transformers and deep neural networks with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform.
An optical processing unit performs large-scale random matrix multiplications, which is the central operation of this algorithm, at speeds up to 1500 TeraOps.
We study the compute scaling of our hybrid optical approach, and demonstrate a potential advantage for ultra-deep and wide neural networks.
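The core trick of direct feedback alignment (DFA) is to replace backpropagation's transposed weight matrices with fixed random feedback matrices, so the error projection becomes exactly the kind of large random matrix multiplication an optical processor excels at. A minimal NumPy sketch of that idea, with toy layer sizes and data that are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of direct feedback alignment (DFA) on a 2-layer network.
# Sizes, data, and learning rate are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W1 = rng.normal(0.0, 0.5, (16, 8))   # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (4, 16))   # hidden -> output weights
# DFA: a FIXED random feedback matrix stands in for W2.T; optically this
# projection is a large random matrix multiplication.
B1 = rng.normal(0.0, 0.5, (16, 4))

x = rng.normal(size=8)
y = np.eye(4)[1]                     # one-hot target
lr = 0.1

loss_start = None
for _ in range(200):
    h = sigmoid(W1 @ x)              # hidden activations
    out = sigmoid(W2 @ h)            # network output
    d_out = (out - y) * out * (1 - out)      # output delta (MSE loss)
    # Hidden delta via the random feedback matrix, not W2.T -- no
    # symmetric "weight transport" is needed.
    d_h = (B1 @ d_out) * h * (1 - h)
    W2 -= lr * np.outer(d_out, h)
    W1 -= lr * np.outer(d_h, x)
    loss = float(np.sum((out - y) ** 2))
    if loss_start is None:
        loss_start = loss

print(f"loss: {loss_start:.3f} -> {loss:.3f}")
```

Despite the feedback weights being random and untrained, the forward weights align with them over training, which is what makes the algorithm usable on hardware where exact transposes are impractical.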
arXiv Detail & Related papers (2024-09-01T12:48:47Z)
- Genetically programmable optical random neural networks [0.0]
We demonstrate a genetically programmable yet simple optical neural network that achieves high performance with optical random projection.
By genetically programming the orientation of the scattering medium, which acts as a random projection kernel, our technique finds an optimum kernel and improves the initial test accuracies by 7-22%.
Our optical computing method presents a promising approach to achieve high performance in optical neural networks with a simple and scalable design.
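The underlying scheme is that of a fixed random projection (here, the scattering medium) followed by a trained linear readout, with a search over candidate kernels standing in, very loosely, for the genetic programming of the medium's orientation. A toy sketch with assumed sizes and synthetic data:

```python
# Illustrative sketch of optical random projection: a fixed random kernel
# (standing in for the scattering medium) maps inputs to a feature space,
# and only a linear readout is fitted. Picking the best of several random
# kernels loosely mimics the paper's genetic search. All sizes and data
# below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def fit_readout(K):
    """Project through random kernel K, then fit a least-squares readout."""
    H = np.tanh(X @ K)                           # nonlinear random features
    w, *_ = np.linalg.lstsq(H, 2 * y - 1, rcond=None)
    return np.mean((H @ w > 0) == (y == 1))      # training accuracy

# "Population" of candidate random kernels; keep the best-scoring one.
kernels = [rng.normal(0, 1, (2, 64)) for _ in range(8)]
best = max(fit_readout(K) for K in kernels)
print(f"best training accuracy: {best:.2f}")
```

Because only the readout is trained, the random kernel itself never needs to be characterized precisely, which is what makes a physical scattering medium a viable compute element.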
arXiv Detail & Related papers (2024-03-19T06:55:59Z)
- All-optical modulation with single-photons using electron avalanche [69.65384453064829]
We demonstrate all-optical modulation using a beam with single-photon intensity.
Our approach opens up the possibility of terahertz-speed optical switching at the single-photon level.
arXiv Detail & Related papers (2023-12-18T20:14:15Z)
- Free-Space Optical Spiking Neural Network [0.0]
We introduce the Free-space Optical deep Spiking Convolutional Neural Network (OSCNN)
This novel approach draws inspiration from computational models of the human eye.
Our results demonstrate promising performance with minimal latency and power consumption compared to electronic counterparts.
arXiv Detail & Related papers (2023-11-08T09:41:14Z)
- Training neural networks with end-to-end optical backpropagation [1.1602089225841632]
We show how to implement backpropagation, an algorithm for training a neural network, using optical processes.
Our approach is adaptable to various analog platforms, materials, and network structures.
It demonstrates the possibility of constructing neural networks entirely reliant on analog optical processes for both training and inference tasks.
arXiv Detail & Related papers (2023-08-09T21:11:26Z)
- Physics-aware Differentiable Discrete Codesign for Diffractive Optical Neural Networks [12.952987240366781]
This work proposes a novel device-to-system hardware-software codesign framework that enables efficient training of diffractive optical neural networks (DONNs).
Gumbel-Softmax is employed to enable differentiable discrete mapping from real-world device parameters into the forward function of DONNs.
The results demonstrate that the proposed framework offers significant advantages over conventional quantization-based methods.
arXiv Detail & Related papers (2022-09-28T17:13:28Z)
- Single-Shot Optical Neural Network [55.41644538483948]
'Weight-stationary' analog optical and electronic hardware has been proposed to reduce the compute resources required by deep neural networks.
We present a scalable, single-shot-per-layer weight-stationary optical processor.
arXiv Detail & Related papers (2022-05-18T17:49:49Z)
- Photonics for artificial intelligence and neuromorphic computing [52.77024349608834]
Photonic integrated circuits have enabled ultrafast artificial neural networks.
Photonic neuromorphic systems offer sub-nanosecond latencies.
These systems could address the growing demand for machine learning and artificial intelligence.
arXiv Detail & Related papers (2020-10-30T21:41:44Z)
- Optoelectronic Intelligence [0.0]
For large neural systems capable of general intelligence, the attributes of photonics for communication and electronics for computation are complementary and interdependent.
I sketch a concept for optoelectronic hardware, beginning with synaptic circuits, continuing through wafer-scale integration, and extending to systems interconnected with fiber-optic white matter.
arXiv Detail & Related papers (2020-10-17T01:26:29Z)
- Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers great phase-stability and can rely on the large scale manufacturability provided by the semiconductor industry.
New devices, based on such optical circuits, hold the promise of faster and energy-efficient computations in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
arXiv Detail & Related papers (2020-10-01T16:04:22Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.