Complex-valued universal linear transformations and image encryption
using spatially incoherent diffractive networks
- URL: http://arxiv.org/abs/2310.03384v1
- Date: Thu, 5 Oct 2023 08:43:59 GMT
- Title: Complex-valued universal linear transformations and image encryption
using spatially incoherent diffractive networks
- Authors: Xilin Yang, Md Sadman Sakib Rahman, Bijie Bai, Jingxi Li, Aydogan
Ozcan
- Abstract summary: As an optical processor, a Diffractive Deep Neural Network (D2NN) utilizes engineered diffractive surfaces designed through machine learning to perform all-optical information processing.
We show that a spatially incoherent diffractive visual processor can approximate any complex-valued linear transformation and be used for all-optical image encryption using incoherent illumination.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an optical processor, a Diffractive Deep Neural Network (D2NN) utilizes
engineered diffractive surfaces designed through machine learning to perform
all-optical information processing, completing its tasks at the speed of light
propagation through thin optical layers. With sufficient degrees-of-freedom,
D2NNs can perform arbitrary complex-valued linear transformations using
spatially coherent light. Similarly, D2NNs can also perform arbitrary linear
intensity transformations with spatially incoherent illumination; however,
under spatially incoherent light, these transformations are non-negative,
acting on diffraction-limited optical intensity patterns at the input
field-of-view (FOV). Here, we expand the use of spatially incoherent D2NNs to
complex-valued information processing for executing arbitrary complex-valued
linear transformations using spatially incoherent light. Through simulations,
we show that as the number of optimized diffractive features increases beyond a
threshold dictated by the product of the input and output space-bandwidth
products, a spatially incoherent diffractive visual processor
can approximate any complex-valued linear transformation and be used for
all-optical image encryption using incoherent illumination. The findings are
important for the all-optical processing of information under natural light
using various forms of diffractive surface-based optical processors.
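The central idea, that non-negative (intensity-only) linear operations can synthesize a complex-valued transformation at the cost of extra input/output channels, can be illustrated with a toy linear-algebra sketch. This is plain NumPy, not the paper's actual diffractive encoding: the real and imaginary parts of the input and output are each split into non-negative positive/negative components, and the complex matrix is rearranged into a single entrywise non-negative block matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
Ni, No = 4, 3  # hypothetical input/output sizes

# Target complex-valued linear transformation and a test input
A = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))
x = rng.normal(size=Ni) + 1j * rng.normal(size=Ni)

def split_nonneg(M):
    """Split a real array into non-negative parts: M = P - N, with P, N >= 0."""
    return np.maximum(M, 0.0), np.maximum(-M, 0.0)

# Encode the complex input as four non-negative "intensity" channels
xr_p, xr_n = split_nonneg(x.real)
xi_p, xi_n = split_nonneg(x.imag)
x_enc = np.concatenate([xr_p, xr_n, xi_p, xi_n])  # all entries >= 0

# Rearrange A into one entrywise non-negative block matrix acting on x_enc.
# Output channels are ordered [Re+, Re-, Im+, Im-] of A @ x.
Ar_p, Ar_n = split_nonneg(A.real)
Ai_p, Ai_n = split_nonneg(A.imag)
T = np.block([
    [Ar_p, Ar_n, Ai_n, Ai_p],
    [Ar_n, Ar_p, Ai_p, Ai_n],
    [Ai_p, Ai_n, Ar_p, Ar_n],
    [Ai_n, Ai_p, Ar_n, Ar_p],
])  # shape (4*No, 4*Ni), every entry >= 0

# A single non-negative linear pass reproduces the complex-valued result
y_enc = T @ x_enc
yr_p, yr_n, yi_p, yi_n = np.split(y_enc, 4)
y = (yr_p - yr_n) + 1j * (yi_p - yi_n)
assert np.allclose(y, A @ x)
```

Note that T carries 16 x Ni x No non-negative entries, so the degrees of freedom scale with the product of the input and output sizes, echoing the threshold stated in the abstract.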
Related papers
- Spatial-frequency Dual-Domain Feature Fusion Network for Low-Light Remote Sensing Image Enhancement [49.15531684596958]
We propose a Dual-Domain Feature Fusion Network (DFFN) for low-light remote sensing image enhancement.
The first phase learns amplitude information to restore image brightness, and the second phase learns phase information to refine details.
We have constructed two dark light remote sensing datasets to address the current lack of datasets in dark light remote sensing image enhancement.
arXiv Detail & Related papers (2024-04-26T13:21:31Z)
- All-optical modulation with single-photons using electron avalanche [69.65384453064829]
We demonstrate all-optical modulation using a beam with single-photon intensity.
Our approach opens up the possibility of terahertz-speed optical switching at the single-photon level.
arXiv Detail & Related papers (2023-12-18T20:14:15Z)
- Shaping Single Photons through Multimode Optical Fibers using Mechanical Perturbations [55.41644538483948]
We show an all-fiber approach for controlling the shape of single photons and the spatial correlations between entangled photon pairs.
We optimize these perturbations to localize the spatial distribution of a single photon or the spatial correlations of photon pairs in a single spot.
arXiv Detail & Related papers (2023-06-04T07:33:39Z)
- Universal Linear Intensity Transformations Using Spatially-Incoherent Diffractive Processors [0.0]
Under spatially-coherent light, a diffractive optical network can be designed to perform arbitrary complex-valued linear transformations.
We numerically demonstrate that a spatially-incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation.
arXiv Detail & Related papers (2023-03-23T04:51:01Z)
- Time-lapse image classification using a diffractive neural network [0.0]
We show for the first time a time-lapse image classification scheme using a diffractive network.
We show a blind testing accuracy of 62.03% on the optical classification of objects from the CIFAR-10 dataset.
This constitutes the highest inference accuracy achieved so far using a single diffractive network.
arXiv Detail & Related papers (2022-08-23T08:16:30Z)
- Tunable directional photon scattering from a pair of superconducting qubits [105.54048699217668]
In the optical and microwave frequency ranges tunable directionality can be achieved by applying external magnetic fields.
We demonstrate tunable directional scattering with just two transmon qubits coupled to a transmission line.
arXiv Detail & Related papers (2022-05-06T15:21:44Z)
- All-Optical Synthesis of an Arbitrary Linear Transformation Using Diffractive Surfaces [0.0]
We report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (N_i pixels) and an output (N_o pixels).
We also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation.
Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is N_i x N_o or larger, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error.
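The counting argument above can be mimicked in plain linear algebra. This is a hedged toy sketch, not the diffractive design procedure itself: an operator with exactly N_i x N_o free complex coefficients is recovered without error from N_i input/output field examples of the target transformation.

```python
import numpy as np

rng = np.random.default_rng(1)
Ni, No = 5, 4  # hypothetical input/output pixel counts

# Target complex-valued transformation with Ni*No free coefficients
A_target = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))

# Ni random input fields (columns of X) and their output fields
X = rng.normal(size=(Ni, Ni)) + 1j * rng.normal(size=(Ni, Ni))
Y = A_target @ X

# Example-based fit: solve for the operator from the input/output pairs
A_fit = Y @ np.linalg.pinv(X)
assert np.allclose(A_fit, A_target)
```

With fewer than Ni independent examples (or fewer than Ni x No free coefficients) the same fit would be underdetermined, which is the linear-algebra analogue of the N >= N_i x N_o condition above.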
arXiv Detail & Related papers (2021-08-22T20:40:35Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Scale-, shift- and rotation-invariant diffractive optical networks [0.0]
Diffractive Deep Neural Networks (D2NNs) harness light-matter interaction over a series of trainable surfaces to compute a desired statistical inference task.
Here, we demonstrate a new training strategy for diffractive networks that introduces input object translation, rotation and/or scaling during the training phase.
This training strategy successfully guides the evolution of the diffractive optical network design towards a solution that is scale-, shift- and rotation-invariant.
arXiv Detail & Related papers (2020-10-24T02:18:39Z)
- Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers great phase-stability and can rely on the large scale manufacturability provided by the semiconductor industry.
New devices, based on such optical circuits, hold the promise of faster and energy-efficient computations in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
arXiv Detail & Related papers (2020-10-01T16:04:22Z)
- All-Optical Information Processing Capacity of Diffractive Surfaces [0.0]
We analyze the information processing capacity of coherent optical networks formed by diffractive surfaces.
We show that the dimensionality of the all-optical solution space is linearly proportional to the number of diffractive surfaces within the optical network.
Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher dimensional subspace of the complex-valued linear transformations.
arXiv Detail & Related papers (2020-07-25T00:40:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.