Universal Linear Intensity Transformations Using Spatially-Incoherent
Diffractive Processors
- URL: http://arxiv.org/abs/2303.13037v1
- Date: Thu, 23 Mar 2023 04:51:01 GMT
- Title: Universal Linear Intensity Transformations Using Spatially-Incoherent
Diffractive Processors
- Authors: Md Sadman Sakib Rahman, Xilin Yang, Jingxi Li, Bijie Bai, Aydogan
Ozcan
- Abstract summary: Under spatially-coherent light, a diffractive optical network can be designed to perform arbitrary complex-valued linear transformations.
We numerically demonstrate that a spatially-incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Under spatially-coherent light, a diffractive optical network composed of
structured surfaces can be designed to perform any arbitrary complex-valued
linear transformation between its input and output fields-of-view (FOVs) if the
total number (N) of optimizable phase-only diffractive features is greater than
or equal to ~2 N_i x N_o, where N_i and N_o refer to the number of useful pixels at
the input and the output FOVs, respectively. Here we report the design of a
spatially-incoherent diffractive optical processor that can approximate any
arbitrary linear transformation in time-averaged intensity between its input
and output FOVs. Under spatially-incoherent monochromatic light, the
spatially-varying intensity point-spread function (H) of a diffractive network,
corresponding to a given, arbitrarily-selected linear intensity transformation,
can be written as H(m,n;m',n')=|h(m,n;m',n')|^2, where h is the
spatially-coherent point-spread function of the same diffractive network, and
(m,n) and (m',n') define the coordinates of the output and input FOVs,
respectively. Using deep learning, supervised through examples of input-output
profiles, we numerically demonstrate that a spatially-incoherent diffractive
network can be trained to all-optically perform any arbitrary linear intensity
transformation between its input and output if N is greater than or equal to ~2
N_i x N_o. These results constitute the first demonstration of universal linear
intensity transformations performed on an input FOV under spatially-incoherent
illumination and will be useful for designing all-optical visual processors
that can work with incoherent, natural light.
Related papers
- State-Free Inference of State-Space Models: The Transfer Function Approach [132.83348321603205]
State-free inference does not incur any significant memory or computational cost with an increase in state size.
We achieve this using properties of the proposed frequency domain transfer function parametrization.
We report improved perplexity in language modeling over a long convolutional Hyena baseline.
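As a generic illustration of what inference through a frequency-domain transfer function looks like (this is not the paper's exact parametrization, and the coefficient values below are placeholders), the sketch applies a rational transfer function H(z) = B(z)/A(z) to a sequence entirely with FFTs, so no recurrent state is ever materialized:

```python
import numpy as np

def transfer_function_filter(u, b, a):
    """Apply H(z) = B(z)/A(z) (coefficients in powers of z^-1) to u via FFTs.

    This is a circular (FFT) approximation of the causal filter, used only to
    illustrate state-free, frequency-domain evaluation.
    """
    n = len(u)
    z = np.exp(2j * np.pi * np.fft.fftfreq(n))           # points on the unit circle
    H = np.polyval(b[::-1], 1.0 / z) / np.polyval(a[::-1], 1.0 / z)
    return np.fft.ifft(np.fft.fft(u) * H).real

u = np.random.default_rng(1).normal(size=256)
y = transfer_function_filter(u, b=np.array([0.5, 0.2]), a=np.array([1.0, -0.9]))
```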
arXiv Detail & Related papers (2024-05-10T00:06:02Z)
- Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- Complex-valued universal linear transformations and image encryption using spatially incoherent diffractive networks [0.0]
As an optical processor, a Diffractive Deep Neural Network (D2NN) utilizes engineered diffractive surfaces designed through machine learning to perform all-optical information processing.
We show that a spatially incoherent diffractive visual processor can approximate any complex-valued linear transformation and be used for all-optical image encryption using incoherent illumination.
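One standard piece of algebra behind claims like this (hedged: this is a generic decomposition, not necessarily the encoding scheme used in the paper) is that any complex-valued linear transformation can be written as a difference of two non-negative, intensity-compatible linear maps acting on non-negative input channels:

```python
import numpy as np

def pos(a):
    """Element-wise non-negative part."""
    return np.maximum(a, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 5)) + 1j * rng.normal(size=(4, 5))  # target complex transform
x = rng.normal(size=5) + 1j * rng.normal(size=5)            # complex input

# Split the matrix and the input into non-negative components.
Wrp, Wrn = pos(W.real), pos(-W.real)
Wip, Win = pos(W.imag), pos(-W.imag)
s = np.concatenate([pos(x.real), pos(-x.real), pos(x.imag), pos(-x.imag)])

# Two non-negative block matrices whose difference reproduces W @ x; the
# final subtraction happens outside the non-negative (intensity) domain.
P = np.block([[Wrp, Wrn, Win, Wip],
              [Wip, Win, Wrp, Wrn]])
N = np.block([[Wrn, Wrp, Wip, Win],
              [Win, Wip, Wrn, Wrp]])

y_stacked = P @ s - N @ s                  # [Re(Wx); Im(Wx)]
y = y_stacked[:4] + 1j * y_stacked[4:]
assert np.allclose(y, W @ x)
```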
arXiv Detail & Related papers (2023-10-05T08:43:59Z)
- Shaping Single Photons through Multimode Optical Fibers using Mechanical Perturbations [55.41644538483948]
We show an all-fiber approach for controlling the shape of single photons and the spatial correlations between entangled photon pairs.
We optimize these perturbations to localize the spatial distribution of a single photon or the spatial correlations of photon pairs in a single spot.
arXiv Detail & Related papers (2023-06-04T07:33:39Z)
- D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory [79.50644650795012]
We propose a deep learning approach to solve Kohn-Sham Density Functional Theory (KS-DFT).
We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity.
In addition, we show that our approach enables us to explore more complex neural-based wave functions.
arXiv Detail & Related papers (2023-03-01T10:38:10Z)
- Massively Parallel Universal Linear Transformations using a Wavelength-Multiplexed Diffractive Optical Network [8.992945252617707]
We report a deep learning-based design of a massively parallel broadband diffractive neural network for all-optically performing a large group of transformations.
Massively parallel, wavelength-multiplexed diffractive networks will be useful for designing high-throughput intelligent machine vision systems.
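Operationally, "a large group of transformations" multiplexed over wavelength means that each illumination wavelength addresses its own linear transform through the same device; the toy sketch below (arbitrary wavelengths and random placeholder matrices, not a diffractive forward model) only makes that indexing concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
N_i, N_o = 16, 16

# One distinct target transformation per illumination wavelength (nm);
# both the wavelengths and the matrices are placeholders.
transforms = {wl: rng.normal(size=(N_o, N_i)) for wl in (450, 550, 650)}

x = rng.normal(size=N_i)
outputs = {wl: A @ x for wl, A in transforms.items()}  # one output per channel
```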
arXiv Detail & Related papers (2022-08-13T07:59:39Z)
- Polarization Multiplexed Diffractive Computing: All-Optical Implementation of a Group of Linear Transformations Through a Polarization-Encoded Diffractive Network [0.0]
We introduce a polarization multiplexed diffractive processor to all-optically perform arbitrary linear transformations.
A single diffractive network can successfully approximate and all-optically implement a group of arbitrarily-selected target transformations.
This processor can find various applications in optical computing and polarization-based machine vision tasks.
arXiv Detail & Related papers (2022-03-25T07:10:47Z)
- Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers [55.90468016961356]
We propose an efficient token mixer that learns to mix in the Fourier domain.
AFNO is based on a principled foundation of operator learning.
It can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms.
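A minimal, generic sketch of Fourier-domain token mixing, the idea behind such mixers, is given below: FFT along the token dimension, an element-wise learned multiply per frequency, then an inverse FFT. The actual AFNO additionally uses block-diagonal MLPs and soft-thresholding per frequency, which are omitted here; all sizes and weights are placeholders.

```python
import numpy as np

def fourier_token_mix(x, w):
    """Mix tokens in the Fourier domain.

    x: (num_tokens, channels) real-valued token embeddings.
    w: (num_tokens // 2 + 1, channels) complex per-frequency weights
       (learned in a real model; random placeholders here).
    """
    X = np.fft.rfft(x, axis=0)                   # FFT along the token dimension
    X = X * w                                    # per-frequency mixing
    return np.fft.irfft(X, n=x.shape[0], axis=0)

rng = np.random.default_rng(0)
tokens, channels = 65536, 8                      # 65k tokens: cost is O(n log n)
x = rng.normal(size=(tokens, channels))
w = rng.normal(size=(tokens // 2 + 1, channels)) \
    + 1j * rng.normal(size=(tokens // 2 + 1, channels))
y = fourier_token_mix(x, w)
```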
arXiv Detail & Related papers (2021-11-24T05:44:31Z)
- All-Optical Synthesis of an Arbitrary Linear Transformation Using Diffractive Surfaces [0.0]
We report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (N_i pixels) and an output (N_o pixels) field-of-view.
We also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation.
Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is N_i x N_o or larger, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error.
arXiv Detail & Related papers (2021-08-22T20:40:35Z)
- Photonic co-processors in HPC: using LightOn OPUs for Randomized Numerical Linear Algebra [53.13961454500934]
We show that the randomization step for dimensionality reduction may itself become the computational bottleneck on traditional hardware.
We show that randomization can be significantly accelerated, at negligible precision loss, in a wide range of important RandNLA algorithms.
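For context, "the randomization step" in randomized numerical linear algebra is typically a dense random projection applied before a small deterministic solve; the standard randomized range finder below (plain NumPy, with made-up sizes) marks the product that an optical co-processor could, in principle, take over:

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=None):
    """Rank-k approximation A ~= Q @ B via a random sketch (range finder)."""
    rng = np.random.default_rng(seed)
    # Randomization step: the dense product A @ Omega is the part that an
    # optical random-projection co-processor could accelerate.
    Omega = rng.normal(size=(A.shape[1], k + oversample))
    Y = A @ Omega                      # sketch of the column space of A
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the sketch
    B = Q.T @ A                        # small projected problem
    return Q, B

rng = np.random.default_rng(0)
A = rng.normal(size=(2000, 40)) @ rng.normal(size=(40, 300))  # low-rank test matrix
Q, B = randomized_low_rank(A, k=50)
rel_err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)       # ~0 when rank <= k
```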
arXiv Detail & Related papers (2021-04-29T15:48:52Z)
- All-Optical Information Processing Capacity of Diffractive Surfaces [0.0]
We analyze the information processing capacity of coherent optical networks formed by diffractive surfaces.
We show that the dimensionality of the all-optical solution space is linearly proportional to the number of diffractive surfaces within the optical network.
Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher dimensional subspace of the complex-valued linear transformations.
arXiv Detail & Related papers (2020-07-25T00:40:46Z)