Time-lapse image classification using a diffractive neural network
- URL: http://arxiv.org/abs/2208.10802v1
- Date: Tue, 23 Aug 2022 08:16:30 GMT
- Authors: Md Sadman Sakib Rahman, Aydogan Ozcan
- Abstract summary: We demonstrate, for the first time, a time-lapse image classification scheme using a diffractive network.
We show a blind testing accuracy of 62.03% on the optical classification of objects from the CIFAR-10 dataset.
This constitutes the highest inference accuracy achieved so far using a single diffractive network.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffractive deep neural networks (D2NNs) define an all-optical computing
framework composed of spatially engineered passive surfaces that collectively
process optical input information by modulating the amplitude and/or the phase
of the propagating light. Diffractive optical networks complete their
computational tasks at the speed of light propagation through a thin
diffractive volume, without any external computing power while exploiting the
massive parallelism of optics. Diffractive networks were demonstrated to
achieve all-optical classification of objects and perform universal linear
transformations. Here we demonstrate, for the first time, a "time-lapse" image
classification scheme using a diffractive network, significantly advancing its
classification accuracy and generalization performance on complex input objects
by using the lateral movements of the input objects and/or the diffractive
network, relative to each other. In a different context, such relative
movements of the objects and/or the camera are routinely being used for image
super-resolution applications; inspired by their success, we designed a
time-lapse diffractive network to benefit from the complementary information
content created by controlled or random lateral shifts. We numerically explored
the design space and performance limits of time-lapse diffractive networks,
revealing a blind testing accuracy of 62.03% on the optical classification of
objects from the CIFAR-10 dataset. This constitutes the highest inference
accuracy achieved so far using a single diffractive network on the CIFAR-10
dataset. Time-lapse diffractive networks will be broadly useful for the
spatio-temporal analysis of input signals using all-optical processors.
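The core idea of the time-lapse scheme — combining detector signals collected while the object and the diffractive network are laterally shifted relative to each other — can be illustrated with a toy numerical sketch. Everything here is hypothetical: the matrix `W` is a random linear stand-in for a trained diffractive network (a real D2NN applies coherent free-space propagation through trained phase surfaces), and `optical_scores` and `time_lapse_classify` are illustrative names, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained diffractive network: a fixed linear
# intensity mapping from a flattened input image to 10 class detectors.
N, NUM_CLASSES = 16, 10
W = rng.random((NUM_CLASSES, N * N))

def optical_scores(img):
    """Detector intensities for one placement of the input object."""
    return W @ img.ravel()

def time_lapse_classify(img, shifts):
    """Average detector signals over lateral shifts of the input,
    mimicking how the time-lapse scheme pools the complementary
    information created by controlled or random displacements."""
    total = np.zeros(NUM_CLASSES)
    for dy, dx in shifts:
        total += optical_scores(np.roll(img, (dy, dx), axis=(0, 1)))
    return int(np.argmax(total / len(shifts)))

img = rng.random((N, N))
shifts = [(0, 0), (1, 0), (0, 1), (-1, -1)]  # controlled lateral shifts
print(time_lapse_classify(img, shifts))
```

Each shift presents the object to a different portion of the (fixed) optical transformation, so pooling the resulting detector signals gives the classifier more independent looks at the same object — the intuition the paper borrows from shift-based image super-resolution.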
Related papers
- Coherence Awareness in Diffractive Neural Networks [21.264497139730473]
We show that in diffractive networks the degree of spatial coherence of the illumination has a dramatic effect on performance.
In particular, we show that when the spatial coherence length on the object is comparable to the minimal feature size preserved by the optical system, neither the incoherent nor the coherent extremes serve as acceptable approximations.
arXiv Detail & Related papers (2024-08-13T07:19:40Z)
- Spatial-frequency Dual-Domain Feature Fusion Network for Low-Light Remote Sensing Image Enhancement [49.15531684596958]
We propose a Dual-Domain Feature Fusion Network (DFFN) for low-light remote sensing image enhancement.
The first phase learns amplitude information to restore image brightness, and the second phase learns phase information to refine details.
We have constructed two low-light remote sensing datasets to address the current lack of datasets for low-light remote sensing image enhancement.
arXiv Detail & Related papers (2024-04-26T13:21:31Z)
- All-optical image classification through unknown random diffusers using a single-pixel diffractive network [13.7472825798265]
Classification of an object behind a random and unknown scattering medium poses a challenging task for the computational imaging and machine vision fields.
Recent deep learning-based approaches demonstrated the classification of objects using diffuser-distorted patterns collected by an image sensor.
Here, we present an all-optical processor to directly classify unknown objects through unknown, random phase diffusers using broadband illumination detected with a single pixel.
arXiv Detail & Related papers (2022-08-08T08:26:08Z)
- All-optical graph representation learning using integrated diffractive photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed the diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z)
- Diffractive all-optical computing for quantitative phase imaging [0.0]
We demonstrate a diffractive QPI network that can synthesize the quantitative phase image of an object.
A diffractive QPI network is a specialized all-optical processor designed to perform a quantitative phase-to-intensity transformation.
arXiv Detail & Related papers (2022-01-22T05:28:44Z)
- RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z)
- DS-Net: Dynamic Spatiotemporal Network for Video Salient Object Detection [78.04869214450963]
We propose a novel dynamic spatiotemporal network (DS-Net) for more effective fusion of temporal and spatial information.
We show that the proposed method outperforms state-of-the-art algorithms.
arXiv Detail & Related papers (2020-12-09T06:42:30Z)
- Scale-, shift- and rotation-invariant diffractive optical networks [0.0]
Diffractive Deep Neural Networks (D2NNs) harness light-matter interaction over a series of trainable surfaces to compute a desired statistical inference task.
Here, we demonstrate a new training strategy for diffractive networks that introduces input object translation, rotation and/or scaling during the training phase.
This training strategy successfully guides the evolution of the diffractive optical network design towards a solution that is scale-, shift- and rotation-invariant.
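The training strategy described above injects geometric perturbations of the input object during training. A minimal sketch of that idea, under assumptions: `random_transform` is a hypothetical helper, and the shifts and 90-degree rotations used here are a crude stand-in for the continuous translation, rotation, and scaling perturbations applied in the actual work.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_transform(img):
    """Apply a random lateral shift and a random 90-degree rotation,
    a simplified stand-in for the translation/rotation/scaling
    perturbations injected during diffractive-network training."""
    dy, dx = rng.integers(-2, 3, size=2)
    out = np.roll(img, (int(dy), int(dx)), axis=(0, 1))
    return np.rot90(out, k=int(rng.integers(0, 4)))

# During each training step, the network would see a randomly
# perturbed version of the object, pushing the learned diffractive
# surfaces toward a shift- and rotation-invariant solution.
batch = [random_transform(np.ones((8, 8))) for _ in range(4)]
print(len(batch), batch[0].shape)
```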
arXiv Detail & Related papers (2020-10-24T02:18:39Z)
- A Parallel Down-Up Fusion Network for Salient Object Detection in Optical Remote Sensing Images [82.87122287748791]
We propose a novel Parallel Down-Up Fusion network (PDF-Net) for salient object detection in optical remote sensing images (RSIs).
It takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
Experiments on the ORSSD dataset demonstrate that the proposed network is superior to the state-of-the-art approaches both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-10-02T05:27:57Z)
- Ensemble learning of diffractive optical networks [0.0]
We numerically demonstrated that ensembles of N=14 and N=30 D2NNs achieve blind testing accuracies of 61.14% and 62.13%, respectively, on the classification of CIFAR-10 test images.
These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset.
arXiv Detail & Related papers (2020-09-15T05:02:50Z)
- Depthwise Non-local Module for Fast Salient Object Detection Using a Single Thread [136.2224792151324]
We propose a new deep learning algorithm for fast salient object detection.
The proposed algorithm achieves competitive accuracy and high inference efficiency simultaneously with a single CPU thread.
arXiv Detail & Related papers (2020-01-22T15:23:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.