Scale-, shift- and rotation-invariant diffractive optical networks
- URL: http://arxiv.org/abs/2010.12747v1
- Date: Sat, 24 Oct 2020 02:18:39 GMT
- Title: Scale-, shift- and rotation-invariant diffractive optical networks
- Authors: Deniz Mengu, Yair Rivenson, Aydogan Ozcan
- Abstract summary: Diffractive Deep Neural Networks (D2NNs) harness light-matter interaction over a series of trainable surfaces to compute a desired statistical inference task.
Here, we demonstrate a new training strategy for diffractive networks that introduces input object translation, rotation and/or scaling during the training phase.
This training strategy successfully guides the evolution of the diffractive optical network design towards a solution that is scale-, shift- and rotation-invariant.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research efforts in optical computing have gravitated towards
developing optical neural networks that aim to benefit from the processing
speed and parallelism of optics/photonics in machine learning applications.
Among these endeavors, Diffractive Deep Neural Networks (D2NNs) harness
light-matter interaction over a series of trainable surfaces, designed using
deep learning, to compute a desired statistical inference task as the light
waves propagate from the input plane to the output field-of-view. Although
earlier studies have demonstrated the generalization capability of diffractive
optical networks to unseen data, achieving, e.g., >98% image classification
accuracy for handwritten digits, these previous designs are in general
sensitive to the spatial scaling, translation and rotation of the input
objects. Here, we demonstrate a new training strategy for diffractive networks
that introduces input object translation, rotation and/or scaling during the
training phase as uniformly distributed random variables to build resilience in
their blind inference performance against such object transformations. This
training strategy successfully guides the evolution of the diffractive optical
network design towards a solution that is scale-, shift- and
rotation-invariant, which is especially important and useful for dynamic
machine vision applications in, e.g., autonomous cars, in vivo imaging of
biomedical specimens, among others.
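The augmentation at the heart of this strategy is straightforward to prototype: sample a uniform random rotation, shift, and scale for every training input before the optical forward pass. Below is a minimal PyTorch-style sketch; the toy `diffractive_forward` and `detector_readout` functions, the transformation ranges, and all shapes are illustrative assumptions, not the authors' implementation.

```python
# Sketch: uniform random rotation/shift/scale applied per training sample.
# The optical model below is a toy stand-in (phase masks + FFTs), not a
# real angular-spectrum D2NN simulator.
import math
import torch
import torch.nn.functional as F

def random_affine(batch, max_rot_deg=10.0, max_shift=0.1, scale_range=(0.9, 1.1)):
    """Independent uniform rotation/shift/scale per image; batch is (N,1,H,W)."""
    n = batch.shape[0]
    theta = torch.empty(n).uniform_(-max_rot_deg, max_rot_deg) * math.pi / 180.0
    shift = torch.empty(n, 2).uniform_(-max_shift, max_shift)  # normalized coords
    scale = torch.empty(n).uniform_(*scale_range)
    cos, sin = torch.cos(theta) / scale, torch.sin(theta) / scale
    mats = torch.stack([                       # (N,2,3) inverse affine maps
        torch.stack([cos, -sin, shift[:, 0]], dim=1),
        torch.stack([sin,  cos, shift[:, 1]], dim=1)], dim=1)
    grid = F.affine_grid(mats, list(batch.shape), align_corners=False)
    return F.grid_sample(batch, grid, align_corners=False)

def diffractive_forward(field, phase_layers):
    """Toy stand-in: trainable phase masks with an FFT between layers."""
    x = field.to(torch.complex64)
    for phase in phase_layers:
        x = torch.fft.fft2(x * torch.exp(1j * phase))
    return x

def detector_readout(field, num_classes=10):
    """Toy stand-in: pool output intensity into one value per class."""
    intensity = field.abs() ** 2
    return F.adaptive_avg_pool2d(intensity, (1, num_classes)).flatten(1)

phase_layers = [torch.zeros(64, 64, requires_grad=True) for _ in range(5)]
optimizer = torch.optim.Adam(phase_layers, lr=1e-2)
images, labels = torch.rand(8, 1, 64, 64), torch.randint(0, 10, (8,))

optimizer.zero_grad()
logits = detector_readout(diffractive_forward(random_affine(images), phase_layers))
loss = F.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```

In the paper's actual forward model, free-space propagation between surfaces follows the angular spectrum method; the FFT stub above only preserves the trainable-phase-mask structure of the pipeline.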
Related papers
- Optical training of large-scale Transformers and deep neural networks with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform.
An optical processing unit performs large-scale random matrix multiplications, which is the central operation of this algorithm, at speeds up to 1500 TeraOps.
We study the compute scaling of our hybrid optical approach and demonstrate a potential advantage for ultra-deep and wide neural networks; the feedback rule itself is sketched below.
arXiv Detail & Related papers (2024-09-01T12:48:47Z)
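For readers unfamiliar with direct feedback alignment: it replaces backprop's transposed weight matrices with fixed random projections of the output error, which is exactly the large random matrix multiply that the optical processor accelerates. A minimal NumPy sketch, with illustrative sizes and learning rate (not the paper's code):

```python
# Sketch of direct feedback alignment (DFA) for a small MLP. The fixed
# random matrices B1, B2 play the role of the optically executed random
# projections; all sizes and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, lr = 784, 256, 10, 0.01
W1 = rng.normal(0.0, 0.05, (d_h, d_in))
W2 = rng.normal(0.0, 0.05, (d_h, d_h))
W3 = rng.normal(0.0, 0.05, (d_out, d_h))
B1 = rng.normal(0.0, 0.05, (d_h, d_out))  # fixed random feedback projections
B2 = rng.normal(0.0, 0.05, (d_h, d_out))  # (the optical matrix multiplies)

def dfa_step(x, y_onehot):
    # Forward pass: two tanh hidden layers, softmax output.
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    logits = W3 @ h2
    p = np.exp(logits - logits.max()); p /= p.sum()
    e = p - y_onehot  # output error (softmax cross-entropy gradient)
    # DFA: broadcast the SAME output error to every layer through fixed
    # random matrices, instead of the transposed weights used by backprop.
    d1 = (B1 @ e) * (1.0 - h1 ** 2)
    d2 = (B2 @ e) * (1.0 - h2 ** 2)
    W1[:] -= lr * np.outer(d1, x)
    W2[:] -= lr * np.outer(d2, h1)
    W3[:] -= lr * np.outer(e, h2)

x = rng.normal(0.0, 1.0, d_in)
y = np.zeros(d_out); y[3] = 1.0  # example target class
dfa_step(x, y)
```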
- Training Large-Scale Optical Neural Networks with Two-Pass Forward Propagation [0.0]
This paper addresses the limitations in Optical Neural Networks (ONNs) related to training efficiency, nonlinear function implementation, and large input data processing.
We introduce Two-Pass Forward Propagation, a novel training method that avoids specific nonlinear activation functions by modulating and re-entering error with random noise.
We propose a new way to implement convolutional neural networks using simple neural networks in integrated optical systems.
arXiv Detail & Related papers (2024-08-15T11:27:01Z)
- Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories, represented by their pointwise parameters.
We show that training only the scalar batch-norm parameters, from some point partway into training onward, matches the performance of training the entire network (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-03-12T07:32:47Z)
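The batch-norm probe mentioned above can be reproduced in a few lines: freeze every parameter except the BatchNorm affine weights and biases, then keep training only those. The ResNet-18 model and hyperparameters below are assumptions for illustration, not the paper's setup.

```python
# Sketch: train only the scalar batch-norm parameters of a network.
import torch
import torchvision

BN_TYPES = (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.BatchNorm3d)
model = torchvision.models.resnet18(num_classes=10)

# Freeze everything except the BatchNorm affine (scale/shift) parameters.
for module in model.modules():
    keep = isinstance(module, BN_TYPES)
    for param in module.parameters(recurse=False):
        param.requires_grad = keep

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
print(f"training {sum(p.numel() for p in trainable)} of "
      f"{sum(p.numel() for p in model.parameters())} parameters")
```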
- Training neural networks with end-to-end optical backpropagation [1.1602089225841632]
We show how to implement backpropagation, an algorithm for training a neural network, using optical processes.
Our approach is adaptable to various analog platforms, materials, and network structures.
It demonstrates the possibility of constructing neural networks entirely reliant on analog optical processes for both training and inference tasks.
arXiv Detail & Related papers (2023-08-09T21:11:26Z)
- Time-lapse image classification using a diffractive neural network [0.0]
We show, for the first time, a time-lapse image classification scheme using a diffractive network.
We show a blind testing accuracy of 62.03% on the optical classification of objects from the CIFAR-10 dataset.
This constitutes the highest inference accuracy achieved so far using a single diffractive network.
arXiv Detail & Related papers (2022-08-23T08:16:30Z)
- Experimentally realized in situ backpropagation for deep learning in nanophotonic neural networks [0.7627023515997987]
We design mass-manufacturable silicon photonic neural networks that cascade our custom-designed "photonic mesh" accelerator.
We demonstrate in situ backpropagation for the first time to solve classification tasks.
Our findings suggest a new training paradigm for photonics-accelerated artificial intelligence based entirely on a physical analog of the popular backpropagation technique.
arXiv Detail & Related papers (2022-05-17T17:13:50Z)
- All-optical graph representation learning using integrated diffractive photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed the diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z)
- Misalignment Resilient Diffractive Optical Networks [14.520023891142698]
We introduce and experimentally demonstrate a new training scheme that significantly increases the robustness of diffractive networks against 3D misalignments and fabrication tolerances.
By modeling the undesired layer-to-layer misalignments in 3D as continuous random variables in the optical forward model, diffractive networks are trained to maintain their inference accuracy over a large range of misalignments (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-05-23T04:22:48Z)
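In practice, this training scheme amounts to resampling each layer's 3D displacement at every forward pass. A minimal sketch under that reading follows; the toy `propagate` step, the displacement ranges, and the shapes are assumptions, not the authors' angular-spectrum model.

```python
# Sketch: each diffractive layer is displaced by uniform random (dx, dy, dz)
# on every forward pass, so the learned design tolerates misalignment.
import torch

def propagate(field, distance):
    # Placeholder free-space step; a real model would apply the angular
    # spectrum transfer function for `distance`.
    return torch.fft.ifft2(torch.fft.fft2(field))

def misaligned_forward(field, phase_layers, z_nominal=0.04,
                       max_xy_px=3, max_dz=0.002):
    x = field.to(torch.complex64)
    for phase in phase_layers:
        # Sample this layer's 3D misalignment as uniform random variables.
        dx = int(torch.randint(-max_xy_px, max_xy_px + 1, (1,)))
        dy = int(torch.randint(-max_xy_px, max_xy_px + 1, (1,)))
        dz = float(torch.empty(1).uniform_(-max_dz, max_dz))
        # Lateral (x, y) misalignment: shift the phase mask on its grid.
        shifted_phase = torch.roll(phase, shifts=(dy, dx), dims=(0, 1))
        x = x * torch.exp(1j * shifted_phase)
        # Axial (z) misalignment: perturb the propagation distance.
        x = propagate(x, z_nominal + dz)
    return x

phase_layers = [torch.zeros(128, 128, requires_grad=True) for _ in range(3)]
out = misaligned_forward(torch.rand(1, 1, 128, 128), phase_layers)
```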
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing (low-pass) filters.
As the amount of information in the feature maps increases during training, the network progressively learns better representations of the data; the annealing idea is sketched below.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
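The curriculum reduces to convolving feature maps with a Gaussian low-pass filter whose width is annealed toward zero over training. A minimal sketch, with an assumed linear schedule and kernel size (not the paper's exact configuration):

```python
# Sketch: low-pass filter feature maps with a Gaussian kernel whose width
# shrinks as training progresses (curriculum by smoothing).
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, ksize=5):
    coords = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel2d = torch.outer(g, g)
    return kernel2d / kernel2d.sum()

def smooth_features(feat, sigma):
    # Depthwise convolution: blur every channel independently.
    c = feat.shape[1]
    k = gaussian_kernel(sigma).to(feat).repeat(c, 1, 1, 1)  # (C,1,5,5)
    return F.conv2d(feat, k, padding=2, groups=c)

# Annealing schedule: strong smoothing early, (almost) none at the end.
total_epochs, sigma0 = 90, 1.0
for epoch in range(total_epochs):
    sigma = max(sigma0 * (1 - epoch / total_epochs), 1e-3)
    feat = torch.randn(8, 64, 32, 32)  # stand-in feature maps
    feat = smooth_features(feat, sigma)
```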