Hyperspectral In-Memory Computing with Optical Frequency Combs and
Programmable Optical Memories
- URL: http://arxiv.org/abs/2310.11014v1
- Date: Tue, 17 Oct 2023 06:03:45 GMT
- Title: Hyperspectral In-Memory Computing with Optical Frequency Combs and
Programmable Optical Memories
- Authors: Mostafa Honari Latifpour, Byoung Jun Park, Yoshihisa Yamamoto,
Myoung-Gyun Suh
- Abstract summary: Machine learning has amplified the demand for extensive matrix-vector multiplication operations.
We propose a hyperspectral in-memory computing architecture that integrates space multiplexing with frequency multiplexing of optical frequency combs.
We have experimentally demonstrated multiply-accumulate operations with higher than 4-bit precision in both matrix-vector and matrix-matrix multiplications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid advancements in machine learning across numerous industries have
amplified the demand for extensive matrix-vector multiplication operations,
thereby challenging the capacities of traditional von Neumann computing
architectures. To address this, researchers are currently exploring
alternatives such as in-memory computing systems to develop faster and more
energy-efficient hardware. In particular, there is renewed interest in
computing systems based on optics, which could potentially handle matrix-vector
multiplication in a more energy-efficient way. Despite promising initial
results, developing a highly parallel, programmable, and scalable optical
computing system capable of rivaling electronic computing hardware still
remains elusive. In this context, we propose a hyperspectral in-memory
computing architecture that integrates space multiplexing with frequency
multiplexing of optical frequency combs and uses spatial light modulators as a
programmable optical memory, thereby boosting the computational throughput and
the energy efficiency. We have experimentally demonstrated multiply-accumulate
operations with higher than 4-bit precision in both matrix-vector and
matrix-matrix multiplications, which suggests the system's potential for a wide
variety of deep learning and optimization tasks. This system exhibits
extraordinary modularity, scalability, and programmability, effectively
transcending the traditional limitations of optics-based computing
architectures. Our approach demonstrates the potential to scale beyond peta
operations per second, marking a significant step towards achieving
high-throughput energy-efficient optical computing.
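As a rough conceptual illustration only (not the authors' hardware), the comb-based multiply-accumulate scheme can be sketched numerically: each comb line carries one vector element as an optical intensity, a spatial-light-modulator row attenuates the lines by one matrix row, and a photodetector sums the products. All names and the 4-bit quantization model below are illustrative assumptions.

```python
import numpy as np

def quantize(x, bits=4):
    """Quantize values in [0, 1] to the given bit depth, mimicking
    the finite precision of optical intensity encoding."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

def optical_mvm(matrix, vector, bits=4):
    """Toy model of a comb-based matrix-vector multiply: each comb
    line carries one element of `vector`, each SLM row attenuates
    the lines by one row of `matrix`, and per-row photodetection
    accumulates (sums) the products."""
    w = quantize(matrix, bits)   # programmable optical memory (SLM)
    v = quantize(vector, bits)   # comb-line intensities
    return w @ v                 # photodetector readout, one value per row

rng = np.random.default_rng(0)
M = rng.random((4, 8))
x = rng.random(8)
approx = optical_mvm(M, x)
exact = M @ x
print(np.max(np.abs(approx - exact)))  # residual 4-bit quantization error
```

The quantization step is what bounds the achievable precision; increasing `bits` in this toy model plays the role of improving the analog signal-to-noise ratio.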
Related papers
- Optical training of large-scale Transformers and deep neural networks with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform.
An optical processing unit performs large-scale random matrix multiplications, which is the central operation of this algorithm, at speeds up to 1500 TeraOps.
We study the compute scaling of our hybrid optical approach, and demonstrate a potential advantage for ultra-deep and wide neural networks.
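For intuition, direct feedback alignment can be written in a few lines of NumPy: instead of backpropagating the output error through the transposed forward weights, a fixed random matrix projects it straight to the hidden layer. This is a minimal software stand-in for the optical random projection, with all sizes and learning rates chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 8, 16, 4

W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback matrix

def tanh_deriv(h):
    return 1.0 - np.tanh(h) ** 2

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)
lr = 0.05

for _ in range(200):
    h = W1 @ x                  # hidden pre-activation
    a = np.tanh(h)
    y = W2 @ a                  # linear output layer
    e = y - target              # output error
    # DFA: project the error through the fixed random matrix B
    # instead of backpropagating through W2.T
    delta_hidden = (B @ e) * tanh_deriv(h)
    W2 -= lr * np.outer(e, a)
    W1 -= lr * np.outer(delta_hidden, x)

final_error = np.linalg.norm(W2 @ np.tanh(W1 @ x) - target)
print(final_error)
```

Because `B` never changes, the large random matrix multiplication `B @ e` is the step that maps naturally onto an analog optical processor.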
arXiv Detail & Related papers (2024-09-01T12:48:47Z)
- Emulating quantum computing with optical matrix multiplication [0.0]
Optical computing harnesses the speed of light to perform vector-matrix operations efficiently.
We formulate the process of photonic matrix multiplication using quantum mechanical principles.
We demonstrate a well-known algorithm, namely the Deutsch-Jozsa algorithm.
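The Deutsch-Jozsa algorithm reduces to plain matrix-vector products, which is what makes it amenable to optical matrix multiplication. A small linear-algebra sketch (illustrative, not the paper's optical implementation): the circuit is Hadamard, phase oracle, Hadamard, and the amplitude of the all-zeros state distinguishes constant from balanced functions.

```python
import numpy as np

def hadamard(n):
    """n-qubit Hadamard transform as a dense matrix."""
    H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.kron(H, H1)
    return H

def deutsch_jozsa(f_values):
    """Deutsch-Jozsa as matrix-vector products: the oracle is the
    diagonal phase matrix with entries (-1)^f(x)."""
    n = int(np.log2(len(f_values)))
    H = hadamard(n)
    oracle = np.diag([(-1) ** f for f in f_values])
    state = np.zeros(2 ** n)
    state[0] = 1.0                      # start in |0...0>
    state = H @ oracle @ H @ state
    # amplitude of |0...0> is +/-1 for constant f, 0 for balanced f
    return "constant" if abs(state[0]) > 0.5 else "balanced"

print(deutsch_jozsa([0, 0, 0, 0]))      # constant
print(deutsch_jozsa([0, 1, 0, 1]))      # balanced
```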
arXiv Detail & Related papers (2024-07-19T10:11:06Z)
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances the AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z)
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, the random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design system achieves huge energy efficiency improvements and training cost reduction when compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- Deep Photonic Reservoir Computer for Speech Recognition [49.1574468325115]
Speech recognition is a critical task in the field of artificial intelligence and has witnessed remarkable advancements.
Deep reservoir computing is energy efficient but exhibits limitations in performance when compared to more resource-intensive machine learning algorithms.
We propose a photonic-based deep reservoir computer and evaluate its effectiveness on different speech recognition tasks.
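Reservoir computing, the paradigm underlying the photonic system, can be illustrated with a minimal echo state network: a fixed random recurrent layer provides nonlinear temporal features, and only a linear readout is trained. This is a generic software sketch with arbitrary sizes, not the photonic architecture itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, n_in = 100, 1

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Drive the fixed random reservoir with input sequence u
    and collect the state at every time step."""
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(W_in @ np.atleast_1d(ut) + W @ x)
        states[t] = x
    return states

# Toy task: predict a phase-shifted copy of a sinusoid
t = np.linspace(0, 20, 500)
u, y = np.sin(t), np.sin(t + 0.3)
S = run_reservoir(u)
w_out, *_ = np.linalg.lstsq(S[100:], y[100:], rcond=None)  # skip washout
pred = S @ w_out
err = np.mean((pred[100:] - y[100:]) ** 2)
print(err)
```

Training only the readout is what makes reservoir computing attractive for analog hardware: the random recurrent part never needs to be modified, only observed.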
arXiv Detail & Related papers (2023-12-11T17:43:58Z)
- In-memory factorization of holographic perceptual representations [14.621617156897301]
Disentanglement of constituent factors of a sensory signal is central to perception and cognition.
We present a compute engine capable of efficiently factorizing holographic perceptual representations.
arXiv Detail & Related papers (2022-11-09T17:36:06Z)
- Scalable Optical Learning Operator [0.2399911126932526]
The presented framework overcomes the energy scaling problem of existing systems without compromising speed.
We numerically and experimentally showed the ability of the method to execute several different tasks with accuracy comparable to a digital implementation.
Our results indicate that a powerful supercomputer would be required to duplicate the performance of the multimode fiber-based computer.
arXiv Detail & Related papers (2020-12-22T23:06:59Z)
- Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers great phase-stability and can rely on the large scale manufacturability provided by the semiconductor industry.
New devices, based on such optical circuits, hold the promise of faster and energy-efficient computations in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
arXiv Detail & Related papers (2020-10-01T16:04:22Z)
- Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit [38.898230519968116]
We propose an optoelectronic reconfigurable computing paradigm by constructing a diffractive processing unit.
It can efficiently support different neural networks and achieve a high model complexity with millions of neurons.
Our prototype system built with off-the-shelf optoelectronic components surpasses the performance of state-of-the-art graphics processing units.
arXiv Detail & Related papers (2020-08-26T16:34:58Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston housing price prediction and of training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
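What the crosspoint array computes physically in one step is, mathematically, the closed-form least-squares solution. A toy software stand-in (illustrative data and sizes, not the paper's experiment) makes the "one computational step" concrete:

```python
import numpy as np

# The crosspoint array physically solves the normal equations,
# yielding w = (X^T X)^{-1} X^T y in a single analog step; here we
# compute the same closed-form solution digitally with lstsq.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=50)

w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # close to true_w, recovered without any iterative training
```

The contrast with gradient descent is the point: no learning rate, no epochs, just one (analog, parallel) solve.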
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.