Nonlinear Computation with Linear Optics via Source-Position Encoding
- URL: http://arxiv.org/abs/2504.20401v1
- Date: Tue, 29 Apr 2025 03:55:05 GMT
- Title: Nonlinear Computation with Linear Optics via Source-Position Encoding
- Authors: N. Richardson, C. Bosch, R. P. Adams
- Abstract summary: We introduce a novel method to achieve nonlinear computation in fully linear media. Our method can operate at low power and requires only the ability to drive the optical system at a data-dependent spatial position. We formulate a fully automated, topology-optimization-based hardware design framework for extremely specialized optical neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optical computing systems provide an alternate hardware model which appears to be aligned with the demands of neural network workloads. However, the challenge of implementing energy efficient nonlinearities in optics -- a key requirement for realizing neural networks -- is a conspicuous missing link. In this work we introduce a novel method to achieve nonlinear computation in fully linear media. Our method can operate at low power and requires only the ability to drive the optical system at a data-dependent spatial position. Leveraging this positional encoding, we formulate a fully automated, topology-optimization-based hardware design framework for extremely specialized optical neural networks, drawing on modern advancements in optimization and machine learning. We evaluate our optical designs on machine learning classification tasks: demonstrating significant improvements over linear methods, and competitive performance when compared to standard artificial neural networks.
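As a toy illustration of the abstract's central claim: propagation through a fixed linear medium is strictly linear in the source field, yet making the source position a function of the input datum renders the composed input-output map nonlinear. The sketch below is ours, not the authors' code; the random transfer matrix stands in for the paper's topology-optimized medium, and all names (T, encode_position, forward) are illustrative.

```python
import numpy as np

# Minimal sketch of source-position encoding: the medium computes y = T @ s,
# perfectly linear in the source field s, but choosing WHERE the source is
# driven as a function of the input x makes the map x -> y nonlinear in x.
rng = np.random.default_rng(0)
n_positions, n_outputs = 64, 10
T = rng.normal(size=(n_outputs, n_positions))  # fixed linear transfer matrix
                                               # (stand-in for the optimized medium)

def encode_position(x, lo=-1.0, hi=1.0):
    """Quantize a scalar input onto one of the available source sites."""
    idx = int(np.clip((x - lo) / (hi - lo) * (n_positions - 1),
                      0, n_positions - 1))
    s = np.zeros(n_positions)
    s[idx] = 1.0           # drive the medium at a data-dependent position
    return s

def forward(x):
    return T @ encode_position(x)   # linear optics, nonlinear in x

# Superposition fails in x, confirming the composed map is nonlinear:
print(np.allclose(forward(-0.5) + forward(0.5), forward(0.0)))  # False
```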
Related papers
- Training Hybrid Neural Networks with Multimode Optical Nonlinearities Using Digital Twins [2.8479179029634984]
We introduce ultrashort pulse propagation in multimode fibers, which perform large-scale nonlinear transformations. Training the hybrid architecture is achieved through a neural model that differentiably approximates the optical system. Our experimental results achieve state-of-the-art image classification accuracies and simulation fidelity.
arXiv Detail & Related papers (2025-01-14T10:35:18Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Streamlined optical training of large-scale modern deep learning architectures with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform. An optical processing unit performs large-scale random matrix multiplications, at speeds up to 1500 TeraOPS under 30 Watts of power. We study the scaling of the training time, and demonstrate a potential advantage of our hybrid opto-electronic approach for ultra-deep and wide neural networks.
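Direct feedback alignment itself is a standard algorithm: each layer receives the output error through a fixed random projection instead of the transposed forward weights, which is what makes an optical random-matrix multiplier a natural fit. A minimal numpy sketch, assuming a two-hidden-layer ReLU MLP with a squared-error readout (all names are ours, not the platform's API):

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h, d_out = 784, 256, 10
W1 = rng.normal(scale=0.05, size=(d_h, d_in))
W2 = rng.normal(scale=0.05, size=(d_h, d_h))
W3 = rng.normal(scale=0.05, size=(d_out, d_h))
B1 = rng.normal(scale=0.05, size=(d_h, d_out))  # fixed random feedback,
B2 = rng.normal(scale=0.05, size=(d_h, d_out))  # never trained; this is the
                                                # part executed optically above

def dfa_step(x, y_target, lr=1e-2):
    """One DFA update with a squared-error readout (for brevity)."""
    global W1, W2, W3
    h1 = np.maximum(W1 @ x, 0.0)   # ReLU hidden layers
    h2 = np.maximum(W2 @ h1, 0.0)
    y = W3 @ h2
    e = y - y_target               # output error
    # DFA replaces backpropagated errors W^T(delta) with fixed random
    # projections of the SAME output error e:
    d2 = (B2 @ e) * (h2 > 0)
    d1 = (B1 @ e) * (h1 > 0)
    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)
    return 0.5 * float(e @ e)
```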
arXiv Detail & Related papers (2024-09-01T12:48:47Z) - Training Large-Scale Optical Neural Networks with Two-Pass Forward Propagation [0.0]
This paper addresses the limitations in Optical Neural Networks (ONNs) related to training efficiency, nonlinear function implementation, and large input data processing.
We introduce Two-Pass Forward Propagation, a novel training method that avoids specific nonlinear activation functions by modulating and re-entering error with random noise.
We propose a new way to implement convolutional neural networks using simple neural networks in integrated optical systems.
arXiv Detail & Related papers (2024-08-15T11:27:01Z) - Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Genetically programmable optical random neural networks [0.0]
We demonstrate a genetically programmable yet simple optical neural network to achieve high performance with optical random projection. Our novel technique finds an optimum kernel and improves initial test accuracies by 8-41% for various machine learning tasks.
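The summary suggests a genetic search over which random kernel the optical projection realizes. A minimal sketch of that idea, assuming a binary input mask evolved against a least-squares readout; the mask parameterization and fitness function are our illustrative stand-ins, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(mask, X, y, R):
    """Accuracy of a least-squares readout on masked random features."""
    H = np.maximum((X * mask) @ R, 0.0)        # random projection + rectifier
    W, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form linear readout
    return (np.argmax(H @ W, 1) == np.argmax(y, 1)).mean()

def evolve(X, y, R, pop=16, gens=20, p_mut=0.05):
    d = X.shape[1]
    population = rng.integers(0, 2, size=(pop, d)).astype(float)
    for _ in range(gens):
        scores = np.array([fitness(m, X, y, R) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]      # selection
        children = parents[rng.integers(0, len(parents), pop // 2)].copy()
        flip = rng.random(children.shape) < p_mut                 # mutation
        children[flip] = 1.0 - children[flip]
        population = np.vstack([parents, children])
    return population[np.argmax([fitness(m, X, y, R) for m in population])]

# Usage (illustrative): X is (n, d), y is one-hot (n, k), R is a fixed
# random kernel, e.g. rng.normal(size=(d, 512)):
# best_mask = evolve(X, y, rng.normal(size=(X.shape[1], 512)))
```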
arXiv Detail & Related papers (2024-03-19T06:55:59Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Training neural networks with end-to-end optical backpropagation [1.1602089225841632]
We show how to implement backpropagation, an algorithm for training a neural network, using optical processes.
Our approach is adaptable to various analog platforms, materials, and network structures.
It demonstrates the possibility of constructing neural networks entirely reliant on analog optical processes for both training and inference tasks.
arXiv Detail & Related papers (2023-08-09T21:11:26Z) - A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z) - Knowledge Distillation Circumvents Nonlinearity for Optical Convolutional Neural Networks [4.683612295430957]
We propose a Spectral CNN Linear Counterpart (SCLC) network architecture and develop a Knowledge Distillation (KD) approach to circumvent the need for a nonlinearity.
We show that the KD approach can achieve performance that easily surpasses the standard linear version of a CNN and could approach the performance of the nonlinear network.
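The distillation step referenced here is the standard soft-target objective: the linear student is trained to match the temperature-softened outputs of a nonlinear teacher, so no nonlinearity is needed at inference. A minimal sketch; the temperature and names are illustrative, not the paper's exact SCLC/KD recipe:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target cross-entropy between teacher and student (Hinton-style)."""
    p = softmax(teacher_logits, T)                     # softened teacher targets
    log_q = np.log(softmax(student_logits, T) + 1e-12)
    # T**2 rescales gradients so hard- and soft-target terms stay comparable
    return float(-(p * log_q).sum(axis=-1).mean() * T**2)
```

Minimizing kd_loss lets the linear student inherit the teacher's decision boundaries to the extent a linear map can represent them, which is the sense in which distillation "circumvents" the optical nonlinearity.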
arXiv Detail & Related papers (2021-02-26T06:35:34Z) - Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers great phase-stability and can rely on the large scale manufacturability provided by the semiconductor industry.
New devices, based on such optical circuits, hold the promise of faster and energy-efficient computations in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
arXiv Detail & Related papers (2020-10-01T16:04:22Z)