All-Optical Information Processing Capacity of Diffractive Surfaces
- URL: http://arxiv.org/abs/2007.12813v2
- Date: Wed, 18 Nov 2020 03:49:56 GMT
- Title: All-Optical Information Processing Capacity of Diffractive Surfaces
- Authors: Onur Kulce, Deniz Mengu, Yair Rivenson, Aydogan Ozcan
- Abstract summary: We analyze the information processing capacity of coherent optical networks formed by diffractive surfaces.
We show that the dimensionality of the all-optical solution space is linearly proportional to the number of diffractive surfaces within the optical network.
Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher dimensional subspace of the complex-valued linear transformations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Precise engineering of materials and surfaces has been at the heart of some
of the recent advances in optics and photonics. These advances around the
engineering of materials with new functionalities have also opened up exciting
avenues for designing trainable surfaces that can perform computation and
machine learning tasks through light-matter interaction and diffraction. Here,
we analyze the information processing capacity of coherent optical networks
formed by diffractive surfaces that are trained to perform an all-optical
computational task between a given input and output field-of-view. We show that
the dimensionality of the all-optical solution space covering the
complex-valued transformations between the input and output fields-of-view is
linearly proportional to the number of diffractive surfaces within the optical
network, up to a limit that is dictated by the extent of the input and output
fields-of-view. Deeper diffractive networks that are composed of larger numbers
of trainable surfaces can cover a higher dimensional subspace of the
complex-valued linear transformations between a larger input field-of-view and
a larger output field-of-view, and exhibit depth advantages in terms of their
statistical inference, learning and generalization capabilities for different
image classification tasks, when compared with a single trainable diffractive
surface. These analyses and conclusions are broadly applicable to various forms
of diffractive surfaces, including, e.g., plasmonic and/or dielectric-based
metasurfaces and flat optics that can be used to form all-optical processors.
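
To make the dimensionality argument concrete, below is a minimal numerical sketch (not taken from the paper; the phase-only modulation model, the random matrices standing in for free-space propagation, and all sizes are illustrative assumptions). It estimates the local dimension of the family of end-to-end complex-valued transforms reachable by a cascade of K diffractive surfaces, by computing the rank of the Jacobian of the end-to-end matrix with respect to the trainable phases.

```python
# Illustrative sketch only: random complex matrices stand in for free-space
# diffraction, and each surface applies phase-only modulation. This is not the
# paper's exact formulation, just a toy estimate of solution-space dimension.
import numpy as np

rng = np.random.default_rng(0)

def random_propagation(m, n):
    # Fixed complex matrix standing in for diffraction between two planes.
    return (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(n)

def solution_space_dim(K, N, Ni, No):
    """Local dimension (Jacobian rank) of the end-to-end transforms reachable by
    K phase-only surfaces with N neurons each, mapping an Ni-pixel input
    field-of-view to an No-pixel output field-of-view."""
    H_in = random_propagation(N, Ni)                      # input FOV -> surface 0
    H = [random_propagation(N, N) for _ in range(K - 1)]  # surface k -> surface k+1
    H_out = random_propagation(No, N)                     # last surface -> output FOV
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(K, N))   # trainable phase values

    # rights[k]: matrix mapping the input FOV to the field arriving at surface k.
    rights, R = [], H_in
    for k in range(K):
        rights.append(R)
        if k < K - 1:
            R = H[k] @ (np.exp(1j * phases[k])[:, None] * R)

    # lefts[k]: matrix mapping the field leaving surface k to the output FOV.
    lefts, L = [], H_out
    for k in reversed(range(K)):
        lefts.append(L)
        if k > 0:
            L = L @ (np.exp(1j * phases[k])[:, None] * H[k - 1])
    lefts = lefts[::-1]

    # One Jacobian column per trainable phase; each dA/dphi_{k,n} is rank-1.
    cols = []
    for k in range(K):
        for n in range(N):
            dA = 1j * np.exp(1j * phases[k, n]) * np.outer(lefts[k][:, n], rights[k][n, :])
            cols.append(dA.ravel())
    J = np.array(cols).T                  # (No*Ni) x (K*N), complex
    J_real = np.vstack([J.real, J.imag])  # real-ified Jacobian
    return np.linalg.matrix_rank(J_real)

Ni, No, N = 4, 4, 16
for K in range(1, 7):
    print(f"K={K}: dim ~ {solution_space_dim(K, N, Ni, No)} (cap {2 * Ni * No})")
```

In this toy setup the printed dimension climbs roughly linearly with each added surface and then saturates at a cap set by the input/output field-of-view sizes, mirroring the scaling behavior described in the abstract.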
Related papers
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z)
- All-optical complex field imaging using diffractive processors
We present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields.
Our design utilizes successive deep learning-optimized diffractive surfaces that are structured to collectively modulate the input complex field.
The intensity distributions of the output fields at two channels on the sensor plane directly correspond to the amplitude and quantitative phase profiles of the input complex field.
arXiv Detail & Related papers (2024-01-30T06:39:54Z)
- Complex-valued universal linear transformations and image encryption using spatially incoherent diffractive networks
As an optical processor, a Diffractive Deep Neural Network (D2NN) utilizes engineered diffractive surfaces designed through machine learning to perform all-optical information processing.
We show that a spatially incoherent diffractive visual processor can approximate any complex-valued linear transformation and be used for all-optical image encryption using incoherent illumination.
arXiv Detail & Related papers (2023-10-05T08:43:59Z)
- Surface Geometry Processing: An Efficient Normal-based Detail Representation
We introduce an efficient surface detail processing framework in the 2D normal domain.
We show that the proposed normal-based representation has three important properties, including detail separability, detail transferability and detail idempotence.
Three new schemes are further designed for geometric surface detail processing applications, including geometric texture synthesis, geometry detail transfer, and 3D surface super-resolution.
arXiv Detail & Related papers (2023-07-16T04:46:32Z)
- Characterization of multi-mode linear optical networks
We formulate efficient procedures for the characterization of optical circuits in the presence of imperfections.
We show the viability of this approach in an experimentally relevant scenario, defined by a tunable integrated photonic circuit.
Our findings can be applied to a wide range of optical setups, based on both bulk and integrated configurations.
arXiv Detail & Related papers (2023-04-13T13:09:14Z)
- Universal Linear Intensity Transformations Using Spatially-Incoherent Diffractive Processors
Under spatially-coherent light, a diffractive optical network can be designed to perform arbitrary complex-valued linear transformations.
We numerically demonstrate that a spatially-incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation.
arXiv Detail & Related papers (2023-03-23T04:51:01Z)
- Retrieving space-dependent polarization transformations via near-optimal quantum process tomography
We investigate the application of genetic and machine learning approaches to tomographic problems.
We find that the neural network-based scheme provides a significant speed-up, which may be critical in applications requiring real-time characterization.
We expect these results to lay the groundwork for the optimization of tomographic approaches in more general quantum processes.
arXiv Detail & Related papers (2022-10-27T11:37:14Z)
- UnProjection: Leveraging Inverse-Projections for Visual Analytics of High-Dimensional Data
We present NNInv, a deep learning technique with the ability to approximate the inverse of any projection or mapping.
NNInv learns to reconstruct high-dimensional data from any point in a 2D projection space, giving users the ability to interact with the learned high-dimensional representation in a visual analytics system.
arXiv Detail & Related papers (2021-11-02T17:11:57Z)
- RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z)
- Scale-, shift- and rotation-invariant diffractive optical networks
Diffractive Deep Neural Networks (D2NNs) harness light-matter interaction over a series of trainable surfaces to compute a desired statistical inference task.
Here, we demonstrate a new training strategy for diffractive networks that introduces input object translation, rotation and/or scaling during the training phase.
This training strategy successfully guides the evolution of the diffractive optical network design towards a solution that is scale-, shift- and rotation-invariant.
arXiv Detail & Related papers (2020-10-24T02:18:39Z)
- Spatial-Angular Attention Network for Light Field Reconstruction
We propose a spatial-angular attention network to perceive correspondences in the light field non-locally.
Motivated by the non-local attention mechanism, a spatial-angular attention module is introduced to compute the responses from all the positions in the epipolar plane for each pixel in the light field.
We then propose a multi-scale reconstruction structure to efficiently implement the non-local attention at a low spatial scale.
arXiv Detail & Related papers (2020-07-05T06:55:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.