Scalable Optical Learning Operator
- URL: http://arxiv.org/abs/2012.12404v1
- Date: Tue, 22 Dec 2020 23:06:59 GMT
- Title: Scalable Optical Learning Operator
- Authors: Uğur Teğin, Mustafa Yıldırım, İlker Oğuz, Christophe Moser, Demetri Psaltis
- Abstract summary: The presented framework overcomes the energy scaling problem of existing systems without compromising speed.
We numerically and experimentally showed the ability of the method to execute several different tasks with accuracy comparable to a digital implementation.
Our results indicate that a powerful supercomputer would be required to duplicate the performance of the multimode fiber-based computer.
- Score: 0.2399911126932526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Today's heavy machine learning tasks are fueled by large datasets. Computing
is performed with power-hungry processors whose performance is ultimately
limited by data transfer to and from memory. Optics is a powerful means of
communicating and processing information, and there is intense current
interest in optical information processing for realizing high-speed
computations. Here we present and experimentally demonstrate an optical
computing framework based on spatiotemporal effects in multimode fibers for a
range of learning tasks from classifying COVID-19 X-ray lung images and speech
recognition to predicting age from face images. The presented framework
overcomes the energy scaling problem of existing systems without compromising
speed. We leveraged simultaneous, linear, and nonlinear interaction of spatial
modes as a computation engine. We numerically and experimentally showed the
ability of the method to execute several different tasks with accuracy
comparable to a digital implementation. Our results indicate that a powerful
supercomputer would be required to duplicate the performance of the multimode
fiber-based computer.
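The scheme the abstract describes, a fixed nonlinear optical transform with only a small digital readout trained, can be sketched numerically. The random mixing matrix, reference offset, intensity nonlinearity, and ridge-regression readout below are illustrative assumptions, not the paper's actual fiber model or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the fixed optical transform: random mode mixing
# (W_fiber) plus a reference offset (b), with intensity detection |.|^2 as
# the nonlinearity. The real system uses nonlinear spatiotemporal
# propagation in a multimode fiber; nothing here is the paper's model.
n_in, n_modes = 64, 512
W_fiber = rng.normal(size=(n_in, n_modes)) / np.sqrt(n_in)
b = rng.normal(size=n_modes)

def optical_transform(X):
    # A camera records intensity, so the features are at least quadratic in X.
    return np.abs(X @ W_fiber + b) ** 2

# Synthetic binary task (a stand-in for e.g. X-ray image classification).
X = rng.normal(size=(800, n_in))
y = (X[:, :8].sum(axis=1) > 0).astype(float)

# Only the digital readout is trained: ridge regression on the fixed features.
H = optical_transform(X)
w = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_modes), H.T @ y)
acc = ((H @ w > 0.5) == (y > 0.5)).mean()
```

Because the optical transform is fixed, training reduces to a single linear solve, which is where the energy advantage over end-to-end digital training comes from.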
Related papers
- Artificial intelligence optical hardware empowers high-resolution
hyperspectral video understanding at 1.2 Tb/s [53.91923493664551]
This work introduces a hardware-accelerated integrated optoelectronic platform for multidimensional video understanding in real-time.
The technology platform combines artificial intelligence hardware, processing information optically, with state-of-the-art machine vision networks.
Such performance surpasses the speed of the closest technologies with similar spectral resolution by three to four orders of magnitude.
arXiv Detail & Related papers (2023-12-17T07:51:38Z)
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design: a random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-designed system achieves significant energy-efficiency improvements and training-cost reductions compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- Deep Photonic Reservoir Computer for Speech Recognition [49.1574468325115]
Speech recognition is a critical task in the field of artificial intelligence and has witnessed remarkable advancements.
Deep reservoir computing is energy efficient but exhibits limitations in performance when compared to more resource-intensive machine learning algorithms.
We propose a photonic-based deep reservoir computer and evaluate its effectiveness on different speech recognition tasks.
arXiv Detail & Related papers (2023-12-11T17:43:58Z)
- Hyperspectral In-Memory Computing with Optical Frequency Combs and Programmable Optical Memories [0.0]
Machine learning has amplified the demand for extensive matrix-vector multiplication operations.
We propose a hyperspectral in-memory computing architecture that integrates space multiplexing with frequency multiplexing of optical frequency combs.
We have experimentally demonstrated multiply-accumulate operations with higher than 4-bit precision in both matrix-vector and matrix-matrix multiplications.
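The precision claim can be illustrated with a toy digital model: quantizing both operands to 4 bits (an assumed stand-in for the finite analog precision of the optical multiply-accumulate) still yields a matrix-vector product close to the exact one:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, bits=4):
    # Uniform symmetric quantization, a toy stand-in for the finite analog
    # precision of the optical hardware (assumes a nonzero input array).
    levels = 2 ** bits - 1
    scale = np.max(np.abs(x))
    q = np.round(x / scale * (levels / 2))
    return q * (2 / levels) * scale

A = rng.uniform(-1, 1, size=(8, 16))
x = rng.uniform(-1, 1, size=16)

exact = A @ x
approx = quantize(A) @ quantize(x)

# With 4-bit operands the accumulated product stays close to the exact one.
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
```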
arXiv Detail & Related papers (2023-10-17T06:03:45Z)
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computation-efficient deep learning, which strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- Deep Learning with Passive Optical Nonlinear Mapping [9.177212626554505]
We introduce a design that leverages multiple scattering in a reverberating cavity to passively induce optical nonlinear random mapping.
We show that multiple scattering in the cavity enables optical data compression that efficiently retains vital information.
Our findings pave the way for novel algorithms and architectural designs for optical computing.
arXiv Detail & Related papers (2023-07-17T15:15:47Z)
- Machine learning at the mesoscale: a computation-dissipation bottleneck [77.34726150561087]
We study a computation-dissipation bottleneck in mesoscopic systems used as input-output devices.
Our framework sheds light on a crucial compromise between information compression, input-output computation and dynamic irreversibility induced by non-reciprocal interactions.
arXiv Detail & Related papers (2023-07-05T15:46:07Z)
- Slideflow: Deep Learning for Digital Histopathology with Real-Time Whole-Slide Visualization [49.62449457005743]
We develop a flexible deep learning library for histopathology called Slideflow.
It supports a broad array of deep learning methods for digital pathology.
It includes a fast whole-slide interface for deploying trained models.
arXiv Detail & Related papers (2023-04-09T02:49:36Z)
- Multi-mode fiber reservoir computing overcomes shallow neural networks classifiers [8.891157811906407]
We recast multi-mode optical fibers into random hardware projectors, transforming an input dataset into a speckled image set.
We find that the hardware operates in a flatter region of the loss landscape when trained on fiber data, which aligns with the current theory of deep neural networks.
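The "random hardware projector" idea can be sketched as follows; the complex random matrix, reference offset, and ridge readout are illustrative assumptions, not a measured fiber transfer matrix. A task that defeats a linear classifier on the raw inputs becomes linearly separable in the speckle-intensity features:

```python
import numpy as np

rng = np.random.default_rng(2)

# A fixed complex random matrix M models the fiber's mode mixing, and b a
# static reference field; the camera records speckle intensity |Mx + b|^2.
n_in, n_speckle = 2, 256
M = rng.normal(size=(n_speckle, n_in)) + 1j * rng.normal(size=(n_speckle, n_in))
b = rng.normal(size=n_speckle) + 1j * rng.normal(size=n_speckle)

def speckle(X):
    return np.abs(X @ M.T + b) ** 2

# XOR-like task: the label depends on the product of the two inputs, so it
# is not linearly separable in the raw coordinates.
X = rng.uniform(-1, 1, size=(600, n_in))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

def ridge_accuracy(F, y, lam=1e-3):
    # Train a linear readout by ridge regression and score it at threshold 0.5.
    w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
    return ((F @ w > 0.5) == (y > 0.5)).mean()

acc_raw = ridge_accuracy(X, y)               # linear readout on raw inputs
acc_speckle = ridge_accuracy(speckle(X), y)  # linear readout on speckle features
```

The intensity nonlinearity introduces quadratic cross-terms of the inputs, which is exactly what the linear readout needs to solve the product-sign task.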
arXiv Detail & Related papers (2022-10-10T14:55:02Z)
- All-Photonic Artificial Neural Network Processor Via Non-linear Optics [0.0]
We propose an all-photonic artificial neural network processor.
Information is encoded in the amplitudes of frequency modes that act as neurons.
Our architecture is unique in providing a completely unitary, reversible mode of computation.
arXiv Detail & Related papers (2022-05-17T19:55:30Z)
- Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit [38.898230519968116]
We propose an optoelectronic reconfigurable computing paradigm by constructing a diffractive processing unit.
It can efficiently support different neural networks and achieve a high model complexity with millions of neurons.
Our prototype system built with off-the-shelf optoelectronic components surpasses the performance of state-of-the-art graphics processing units.
arXiv Detail & Related papers (2020-08-26T16:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.