Genetically programmable optical random neural networks
- URL: http://arxiv.org/abs/2403.12490v1
- Date: Tue, 19 Mar 2024 06:55:59 GMT
- Title: Genetically programmable optical random neural networks
- Authors: Bora Çarpınlıoğlu, Bahrem Serhat Daniş, Uğur Teğin
- Abstract summary: We demonstrate a genetically programmable yet simple optical neural network that achieves high performance with optical random projection.
By genetically programming the orientation of the scattering medium, which acts as a random projection kernel, our technique finds an optimal kernel and improves initial test accuracies by 7-22%.
Our optical computing method presents a promising approach to achieving high performance in optical neural networks with a simple and scalable design.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Today, machine learning tools, particularly artificial neural networks, have become crucial for diverse applications. However, current digital computing tools for training and deploying artificial neural networks often struggle with massive data sizes and high power consumption. Optical computing provides inherent parallelism and performs fundamental operations with passive optical components. However, most optical computing platforms suffer from relatively low accuracy on machine learning tasks because they rely on fixed connections to avoid complex and sensitive techniques. Here, we demonstrate a genetically programmable yet simple optical neural network that achieves high performance with optical random projection. By genetically programming the orientation of the scattering medium, which acts as a random projection kernel, and using only 1% of the search space, our technique finds an optimal kernel and improves initial test accuracies by 7-22% for various machine learning tasks. Our optical computing method presents a promising approach to achieving high performance in optical neural networks with a simple and scalable design.
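The core loop described above, evolving the orientation of a fixed random kernel and scoring each candidate by the accuracy of a linear readout on the projected features, can be illustrated with a purely digital stand-in. The sketch below is a hypothetical numpy toy: the seeded `kernel` function replaces the physical scattering medium, and the dataset, population size, and mutation range are illustrative assumptions rather than details from the paper.

```python
# Minimal digital sketch of the idea: an evolutionary search over discrete
# "orientations" of a random projection kernel, keeping the orientation whose
# projected features give the best linear readout. In the actual experiment
# the kernel is a physical scattering medium rotated by a stage; here each
# orientation just seeds a different random matrix (an assumption).
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian blobs standing in for an image-classification task.
n, d, k = 600, 64, 256            # samples, input dim, projected dim
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),
               rng.normal(1.0, 1.0, (n // 2, d))])
y = np.repeat([0.0, 1.0], n // 2)
perm = rng.permutation(n)
X, y = X[perm], y[perm]
train, test = slice(0, 400), slice(400, None)

N_ORIENT = 1000                   # discrete search space of kernel orientations

def kernel(orientation):
    """One fixed random projection per orientation (deterministic seed)."""
    return np.random.default_rng(int(orientation)).normal(size=(d, k)) / np.sqrt(d)

def fitness(orientation):
    """Held-out accuracy of a ridge readout on the projected features."""
    H = np.abs(X @ kernel(orientation))    # intensity-like magnitude nonlinearity
    Ht = H[train]
    w = np.linalg.solve(Ht.T @ Ht + 1e-3 * np.eye(k), Ht.T @ y[train])
    return float(np.mean((H[test] @ w > 0.5) == y[test]))

# Evolutionary search over orientations: keep the fittest, mutate them.
pop = list(rng.integers(0, N_ORIENT, size=8))
for gen in range(5):              # a few dozen evaluations out of 1000 candidates
    elite = sorted(pop, key=fitness, reverse=True)[:4]
    children = [(p + int(rng.integers(-20, 21))) % N_ORIENT for p in elite]
    pop = elite + children

best = max(pop, key=fitness)
print(f"best orientation {best}: held-out accuracy {fitness(best):.3f}")
```

Sampling only a few dozen of the 1,000 candidate orientations echoes the paper's observation that a small fraction of the search space (about 1% in their experiments) suffices to find a strong kernel.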
Related papers
- Optical training of large-scale Transformers and deep neural networks with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform.
An optical processing unit performs large-scale random matrix multiplications, which is the central operation of this algorithm, at speeds up to 1500 TeraOps.
We study the compute scaling of our hybrid optical approach, and demonstrate a potential advantage for ultra-deep and wide neural networks.
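For readers unfamiliar with the algorithm: direct feedback alignment replaces backpropagation's transposed weight matrices with fixed random feedback matrices, so each hidden layer's update needs only a random projection of the output error, which is exactly the large matrix multiplication an optical processor can supply. The sketch below is a generic numpy illustration with assumed toy dimensions, not the paper's hybrid electronic-photonic system.

```python
# Minimal direct feedback alignment (DFA) sketch for a two-hidden-layer MLP
# on a toy regression task. B1 and B2 are fixed random feedback matrices that
# broadcast the output error to each layer; they are never trained.
import numpy as np

rng = np.random.default_rng(1)
d_in, h, d_out, lr = 20, 64, 1, 0.05

# Toy data: learn y = sin(sum(x)).
X = rng.normal(size=(512, d_in))
y = np.sin(X.sum(axis=1, keepdims=True))

W1 = rng.normal(size=(d_in, h)) * 0.1
W2 = rng.normal(size=(h, h)) * 0.1
W3 = rng.normal(size=(h, d_out)) * 0.1
B1 = rng.normal(size=(d_out, h))   # fixed random feedback, never updated
B2 = rng.normal(size=(d_out, h))

for step in range(2000):
    # Forward pass with tanh nonlinearities.
    a1 = np.tanh(X @ W1)
    a2 = np.tanh(a1 @ W2)
    out = a2 @ W3
    e = out - y                    # output error (MSE gradient direction)

    # DFA: project the output error through fixed random matrices instead
    # of backpropagating it through W3 and W2.
    d2 = (e @ B2) * (1 - a2 ** 2)
    d1 = (e @ B1) * (1 - a1 ** 2)

    W3 -= lr * a2.T @ e / len(X)
    W2 -= lr * a1.T @ d2 / len(X)
    W1 -= lr * X.T @ d1 / len(X)

print("final MSE:", float(np.mean(e ** 2)))
```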
arXiv Detail & Related papers (2024-09-01T12:48:47Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
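A classical rate-based relative of this idea is Oja's subspace rule, in which a Hebbian term is stabilized by an anti-Hebbian decorrelating term and the weights converge to the data's principal subspace. The sketch below illustrates that generic rule under assumed toy dimensions; it is not the paper's spiking continual-learning method.

```python
# Principal-subspace extraction with a Hebbian term (y x^T) and an
# anti-Hebbian decorrelating term (-y y^T W): Oja's subspace rule.
import numpy as np

rng = np.random.default_rng(2)
d, k, lr = 10, 3, 0.01

# Data concentrated near a random 3-dimensional subspace of R^10.
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]   # orthonormal columns

def sample():
    return basis @ rng.normal(size=k) * 3.0 + rng.normal(size=d) * 0.1

W = rng.normal(size=(k, d)) * 0.1
for t in range(20000):
    x = sample()
    y = W @ x                                      # output activities
    W += lr * (np.outer(y, x) - np.outer(y, y) @ W)

# Rows of W should now span the data's principal subspace.
overlap = np.linalg.norm(W @ basis) / np.linalg.norm(W)
print(f"subspace overlap (1.0 = perfect): {overlap:.3f}")
```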
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Training neural networks with end-to-end optical backpropagation [1.1602089225841632]
We show how to implement backpropagation, an algorithm for training a neural network, using optical processes.
Our approach is adaptable to various analog platforms, materials, and network structures.
It demonstrates the possibility of constructing neural networks entirely reliant on analog optical processes for both training and inference tasks.
arXiv Detail & Related papers (2023-08-09T21:11:26Z) - Hybrid training of optical neural networks [1.0323063834827415]
Optical neural networks are emerging as a promising type of machine learning hardware.
These networks are mainly developed to perform optical inference after in silico training on digital simulators.
We show that hybrid training of optical neural networks can be applied to a wide variety of optical neural networks.
arXiv Detail & Related papers (2022-03-20T21:16:42Z) - An optical neural network using less than 1 photon per multiplication [4.003843776219224]
We experimentally demonstrate an optical neural network achieving 99% accuracy on handwritten-digit classification.
This performance was achieved using a custom free-space optical processor.
Our results provide a proof-of-principle for low-optical-power operation.
arXiv Detail & Related papers (2021-04-27T20:43:23Z) - Photonics for artificial intelligence and neuromorphic computing [52.77024349608834]
Photonic integrated circuits have enabled ultrafast artificial neural networks.
Photonic neuromorphic systems offer sub-nanosecond latencies.
These systems could address the growing demand for machine learning and artificial intelligence.
arXiv Detail & Related papers (2020-10-30T21:41:44Z) - Light-in-the-loop: using a photonics co-processor for scalable training of neural networks [21.153688679957337]
We present the first optical co-processor able to accelerate the training phase of digitally-implemented neural networks.
We demonstrate its use to train a neural network for handwritten digit recognition.
arXiv Detail & Related papers (2020-06-02T09:19:45Z) - Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.