The Spatial Complexity of Optical Computing and How to Reduce It
- URL: http://arxiv.org/abs/2411.10435v1
- Date: Fri, 15 Nov 2024 18:56:00 GMT
- Title: The Spatial Complexity of Optical Computing and How to Reduce It
- Authors: Yandong Li, Francesco Monticone
- Abstract summary: How much space is needed to perform a certain function is a fundamental question in optics.
We study the "spatial complexity" of optical computing systems in terms of scaling laws.
We propose a new paradigm for designing optical computing systems: space-efficient neuromorphic optics.
- Abstract: Similar to algorithms, which consume time and memory to run, hardware requires resources to function. For devices processing physical waves, implementing operations needs sufficient "space," as dictated by wave physics. How much space is needed to perform a certain function is a fundamental question in optics, with recent research addressing it for given mathematical operations, but not for more general computing tasks, e.g., classification. Inspired by computational complexity theory, we study the "spatial complexity" of optical computing systems in terms of scaling laws - specifically, how their physical dimensions must scale as the dimension of the mathematical operation increases - and propose a new paradigm for designing optical computing systems: space-efficient neuromorphic optics, based on structural sparsity constraints and neural pruning methods motivated by wave physics (notably, the concept of "overlapping nonlocality"). On two mainstream platforms, free-space optics and on-chip integrated photonics, our methods demonstrate substantial size reductions (to 1%-10% the size of conventional designs) with minimal compromise on performance. Our theoretical and computational results reveal a trend of diminishing returns on accuracy as structure dimensions increase, providing a new perspective for interpreting and approaching the ultimate limits of optical computing - a balanced trade-off between device size and accuracy.
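The pruning paradigm in the abstract can be illustrated with a toy sketch. This is not the paper's wave-physics-motivated method (which selects structure via "overlapping nonlocality" and interleaves pruning with retraining); it is a generic magnitude-based structured-pruning example, with the matrix size and `keep_ratio` chosen purely for illustration, showing how zeroing entire columns of a transfer matrix maps to removing whole spatial regions of a device:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense transfer matrix standing in for a conventional optical structure.
W = rng.normal(size=(64, 64))

def structured_prune(W, keep_ratio=0.1):
    """Keep only the columns of W with the largest L2 norm and zero the
    rest. Each zeroed column loosely corresponds to a spatial region of
    the device that could be removed entirely."""
    k = max(1, int(keep_ratio * W.shape[1]))
    keep = np.argsort(np.linalg.norm(W, axis=0))[-k:]
    W_pruned = np.zeros_like(W)
    W_pruned[:, keep] = W[:, keep]
    return W_pruned

W_small = structured_prune(W, keep_ratio=0.1)
density = np.count_nonzero(W_small) / W_small.size
print(f"remaining density: {density:.3f}")
```

A 1%-10% density, as in the paper's reported size reductions, then corresponds to a device occupying a small fraction of the conventional footprint.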
Related papers
- Computational metaoptics for imaging [3.105460926371459]
"Computational metaoptics" combines the physical wavefront shaping ability of metasurfaces with advanced computational algorithms to enhance imaging performance beyond conventional limits.
By treating metasurfaces as physical preconditioners and co-designing them with reconstruction algorithms through end-to-end (inverse) design, it is possible to jointly optimize the optical hardware and computational software.
Advanced applications enabled by computational metaoptics are highlighted, including phase imaging and quantum state measurement.
arXiv Detail & Related papers (2024-11-14T02:13:25Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Optical training of large-scale Transformers and deep neural networks with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform.
An optical processing unit performs large-scale random matrix multiplications, which is the central operation of this algorithm, at speeds up to 1500 TeraOps.
We study the compute scaling of our hybrid optical approach, and demonstrate a potential advantage for ultra-deep and wide neural networks.
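A minimal NumPy sketch of direct feedback alignment, with illustrative layer sizes, learning rate, and a single toy training sample: the output error reaches the hidden layer through a fixed random matrix `B` instead of the transpose of the forward weights, and that random projection is precisely the kind of large random matrix multiplication the optical processor accelerates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny two-layer network trained with direct feedback alignment (DFA).
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
B = rng.normal(scale=0.5, size=(n_hid, n_out))  # fixed random feedback matrix

x = rng.normal(size=n_in)
y = np.zeros(n_out)
y[2] = 1.0  # one-hot target

lr = 0.05
for _ in range(200):
    h = np.tanh(W1 @ x)              # forward pass
    e = W2 @ h - y                   # output error
    dh = (B @ e) * (1.0 - h**2)      # random projection replaces W2.T @ e
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)

mse = float(np.mean((W2 @ np.tanh(W1 @ x) - y) ** 2))
print(f"final error: {mse:.4f}")
```

Because `B` is fixed, no gradients need to flow backward through the optical hardware, which is what makes the hybrid electronic-photonic implementation practical.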
arXiv Detail & Related papers (2024-09-01T12:48:47Z) - Photon Number-Resolving Quantum Reservoir Computing [1.1274582481735098]
We propose a fixed optical network for photonic quantum reservoir computing that is enabled by photon number-resolved detection of the output states.
This significantly reduces the required complexity of the input quantum states while still accessing a high-dimensional Hilbert space.
arXiv Detail & Related papers (2024-02-09T11:28:37Z) - QuATON: Quantization Aware Training of Optical Neurons [0.15320652338704774]
Optical processors, built with "optical neurons", can efficiently perform high-dimensional linear operations at the speed of light.
Such optical processors can now be 3D fabricated, but with a limited precision.
This limitation translates to quantization of learnable parameters in optical neurons, and should be handled during the design of the optical processor.
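The quantization issue can be sketched with a generic straight-through-estimator training loop (not the paper's specific QuATON method; the level count, toy input, and regression target are assumptions): the forward pass uses weights rounded to the fabricable levels, while gradient updates accumulate in a full-precision shadow copy:

```python
import numpy as np

def quantize(w, levels=16):
    """Round weights to uniformly spaced levels in [-1, 1], mimicking the
    limited precision of a 3D-fabricated optical neuron."""
    w = np.clip(w, -1.0, 1.0)
    step = 2.0 / (levels - 1)
    return np.round((w + 1.0) / step) * step - 1.0

# Straight-through estimator: the forward pass sees quantized weights,
# while the gradient update is applied to a full-precision shadow copy.
x = np.array([0.8, -0.5, 0.3, 1.1])  # fixed toy input
target = 1.0
w = np.zeros(4)                      # full-precision weights
lr = 0.1
for _ in range(100):
    y = quantize(w) @ x              # quantized forward pass
    grad = 2.0 * (y - target) * x    # gradient as if weights were unquantized
    w -= lr * grad

print(f"quantized output: {quantize(w) @ x:.3f}")
```

Training this way lets the design converge to parameter values that remain accurate after being snapped to the hardware's realizable levels.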
arXiv Detail & Related papers (2023-10-04T02:18:28Z) - Training neural networks with end-to-end optical backpropagation [1.1602089225841632]
We show how to implement backpropagation, an algorithm for training a neural network, using optical processes.
Our approach is adaptable to various analog platforms, materials, and network structures.
It demonstrates the possibility of constructing neural networks entirely reliant on analog optical processes for both training and inference tasks.
arXiv Detail & Related papers (2023-08-09T21:11:26Z) - Interleaving: Modular architectures for fault-tolerant photonic quantum computing [50.591267188664666]
Photonic fusion-based quantum computing (FBQC) uses low-loss photonic delays.
We present a modular architecture for FBQC in which these components are combined to form "interleaving modules".
Exploiting the multiplicative power of delays, each module can add thousands of physical qubits to the computational Hilbert space.
arXiv Detail & Related papers (2021-03-15T18:00:06Z) - Scalable Optical Learning Operator [0.2399911126932526]
The presented framework overcomes the energy scaling problem of existing systems without compromising speed.
We numerically and experimentally showed the ability of the method to execute several different tasks with accuracy comparable to a digital implementation.
Our results indicate that a powerful supercomputer would be required to duplicate the performance of the multimode fiber-based computer.
arXiv Detail & Related papers (2020-12-22T23:06:59Z) - Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers great phase-stability and can rely on the large scale manufacturability provided by the semiconductor industry.
New devices, based on such optical circuits, hold the promise of faster and energy-efficient computations in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
arXiv Detail & Related papers (2020-10-01T16:04:22Z) - Space-efficient binary optimization for variational computing [68.8204255655161]
We show that it is possible to greatly reduce the number of qubits needed for the Traveling Salesman Problem.
We also propose encoding schemes which smoothly interpolate between the qubit-efficient and the circuit depth-efficient models.
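The qubit counting behind this reduction can be sketched: a standard one-hot (city-per-position) QUBO encoding of an n-city tour uses n² qubits, while labeling the city at each position in binary needs only n⌈log₂ n⌉. The function names below are illustrative, and the paper's proposed schemes also interpolate between such extremes:

```python
import math

def onehot_qubits(n):
    # Standard QUBO encoding: one qubit per (city, tour-position) pair.
    return n * n

def binary_qubits(n):
    # Qubit-efficient encoding: the city at each position is a binary label.
    return n * math.ceil(math.log2(n))

for n in (8, 16, 32):
    print(n, onehot_qubits(n), binary_qubits(n))
```

For 32 cities this is 160 qubits instead of 1024, at the cost of deeper circuits, which is the trade-off the interpolating encodings navigate.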
arXiv Detail & Related papers (2020-09-15T18:17:27Z)