Scalable Optical Learning Operator
- URL: http://arxiv.org/abs/2012.12404v1
- Date: Tue, 22 Dec 2020 23:06:59 GMT
- Title: Scalable Optical Learning Operator
- Authors: Uğur Teğin, Mustafa Yıldırım, İlker Oğuz, Christophe Moser, Demetri Psaltis
- Abstract summary: The presented framework overcomes the energy scaling problem of existing systems without compromising speed.
We numerically and experimentally showed the ability of the method to execute several different tasks with accuracy comparable to a digital implementation.
Our results indicate that a powerful supercomputer would be required to duplicate the performance of the multimode fiber-based computer.
- Score: 0.2399911126932526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Today's heavy machine learning tasks are fueled by large datasets. Computing
is performed with power-hungry processors whose performance is ultimately
limited by data transfer to and from memory. Optics is a powerful means of
communicating and processing information, and there is intense current
interest in optical information processing for realizing high-speed
computations. Here we present and experimentally demonstrate an optical
computing framework based on spatiotemporal effects in multimode fibers for a
range of learning tasks from classifying COVID-19 X-ray lung images and speech
recognition to predicting age from face images. The presented framework
overcomes the energy scaling problem of existing systems without compromising
speed. We leveraged simultaneous, linear, and nonlinear interaction of spatial
modes as a computation engine. We numerically and experimentally showed the
ability of the method to execute several different tasks with accuracy
comparable to a digital implementation. Our results indicate that a powerful
supercomputer would be required to duplicate the performance of the multimode
fiber-based computer.
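The abstract describes using the linear and nonlinear mixing of spatial modes in a multimode fiber as a fixed computation engine, with only a digital readout trained afterward. A minimal toy sketch of that picture: a fixed complex matrix stands in for linear mode coupling, and detecting intensity on a camera supplies the nonlinearity. All sizes, names, and the transmission-matrix model here are illustrative assumptions, not the authors' actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_modes = 64, 256  # toy sizes; real fibers support many more modes

# Fixed complex "transmission matrix" standing in for linear coupling among
# fiber modes; its random entries are illustrative, not measured values.
T = (rng.standard_normal((n_modes, n_in))
     + 1j * rng.standard_normal((n_modes, n_in))) / np.sqrt(n_in)

def fiber_features(x):
    """Map an input vector to speckle-like intensity features.

    The squared magnitude models a camera detecting optical intensity,
    which provides the nonlinearity in this simplified picture.
    """
    field = T @ x.astype(complex)
    return np.abs(field) ** 2

x = rng.standard_normal(n_in)
y = fiber_features(x)
print(y.shape)  # (256,)
```

The resulting nonnegative feature vector would then feed a small trained digital classifier; the optical front end itself stays fixed, which is where the energy advantage is claimed.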
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- An optically accelerated extreme learning machine using hot atomic vapors [0.0]
We present a new design combining the strong and tunable nonlinear properties of a light beam propagating through a hot atomic vapor with an Extreme Learning Machine model.
We numerically and experimentally demonstrate the enhancement of the training using such free-space nonlinear propagation on a MNIST image classification task.
arXiv Detail & Related papers (2024-09-06T14:36:56Z)
- Optical training of large-scale Transformers and deep neural networks with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform.
An optical processing unit performs large-scale random matrix multiplications, which is the central operation of this algorithm, at speeds up to 1500 TeraOps.
We study the compute scaling of our hybrid optical approach, and demonstrate a potential advantage for ultra-deep and wide neural networks.
arXiv Detail & Related papers (2024-09-01T12:48:47Z)
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, a random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design system achieves huge energy efficiency improvements and training cost reduction when compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- Deep Photonic Reservoir Computer for Speech Recognition [49.1574468325115]
Speech recognition is a critical task in the field of artificial intelligence and has witnessed remarkable advancements.
Deep reservoir computing is energy efficient but exhibits limitations in performance when compared to more resource-intensive machine learning algorithms.
We propose a photonic-based deep reservoir computer and evaluate its effectiveness on different speech recognition tasks.
arXiv Detail & Related papers (2023-12-11T17:43:58Z)
- Hyperspectral In-Memory Computing with Optical Frequency Combs and Programmable Optical Memories [0.0]
Machine learning has amplified the demand for extensive matrix-vector multiplication operations.
We propose a hyperspectral in-memory computing architecture that integrates space multiplexing with frequency multiplexing of optical frequency combs.
We have experimentally demonstrated multiply-accumulate operations with higher than 4-bit precision in both matrix-vector and matrix-matrix multiplications.
arXiv Detail & Related papers (2023-10-17T06:03:45Z)
- Deep Learning with Passive Optical Nonlinear Mapping [9.177212626554505]
We introduce a design that leverages multiple scattering in a reverberating cavity to passively induce optical nonlinear random mapping.
We show we can perform optical data compression, facilitated by multiple scattering in the cavity, to efficiently compress and retain vital information.
Our findings pave the way for novel algorithms and architectural designs for optical computing.
arXiv Detail & Related papers (2023-07-17T15:15:47Z)
- Machine learning at the mesoscale: a computation-dissipation bottleneck [77.34726150561087]
We study a computation-dissipation bottleneck in mesoscopic systems used as input-output devices.
Our framework sheds light on a crucial compromise between information compression, input-output computation and dynamic irreversibility induced by non-reciprocal interactions.
arXiv Detail & Related papers (2023-07-05T15:46:07Z)
- Slideflow: Deep Learning for Digital Histopathology with Real-Time Whole-Slide Visualization [49.62449457005743]
We develop a flexible deep learning library for histopathology called Slideflow.
It supports a broad array of deep learning methods for digital pathology.
It includes a fast whole-slide interface for deploying trained models.
arXiv Detail & Related papers (2023-04-09T02:49:36Z)
- Multi-mode fiber reservoir computing overcomes shallow neural networks classifiers [8.891157811906407]
We recast multi-mode optical fibers into random hardware projectors, transforming an input dataset into a speckled image set.
We find that the hardware operates in a flatter region of the loss landscape when trained on fiber data, which aligns with the current theory of deep neural networks.
arXiv Detail & Related papers (2022-10-10T14:55:02Z)
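Several of the related papers (the hot-vapor extreme learning machine, the photonic reservoir computer, and the multi-mode fiber projector above) share one pipeline: a fixed physical front end acts as a random nonlinear projector, and only a shallow linear readout is trained. A minimal sketch of that readout stage, with a `tanh` random projection standing in for the optical hardware; the dataset, sizes, and regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: binary labels that depend linearly on two input coordinates.
n, d, k = 200, 10, 400
X = rng.standard_normal((n, d))
labels = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fixed random projection + elementwise nonlinearity stands in for the
# optical front end (fiber speckle, atomic vapor, scattering cavity).
W = rng.standard_normal((d, k)) / np.sqrt(d)
H = np.tanh(X @ W)

# Ridge-regression readout: the only trained stage, solved in closed form
# as in an extreme learning machine.
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ labels)

pred = (H @ beta > 0.5).astype(float)
acc = (pred == labels).mean()
print(acc)
```

Because the projection is never updated, training reduces to a single linear solve, which is what makes these hybrid optical-digital schemes cheap to train.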
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.