Massively Parallel Universal Linear Transformations using a
Wavelength-Multiplexed Diffractive Optical Network
- URL: http://arxiv.org/abs/2208.10362v1
- Date: Sat, 13 Aug 2022 07:59:39 GMT
- Title: Massively Parallel Universal Linear Transformations using a
Wavelength-Multiplexed Diffractive Optical Network
- Authors: Jingxi Li, Bijie Bai, Yi Luo, Aydogan Ozcan
- Abstract summary: deep learning-based design of a massively parallel broadband diffractive neural network for all-optically performing a large group of transformations.
Massively parallel, wavelength-multiplexed diffractive networks will be useful for designing high-throughput intelligent machine vision systems.
- Score: 8.992945252617707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We report deep learning-based design of a massively parallel broadband
diffractive neural network for all-optically performing a large group of
arbitrarily-selected, complex-valued linear transformations between an input
and output field-of-view, each with N_i and N_o pixels, respectively. This
broadband diffractive processor is composed of N_w wavelength channels, each of
which is uniquely assigned to a distinct target transformation. A large set of
arbitrarily-selected linear transformations can be individually performed
through the same diffractive network at different illumination wavelengths,
either simultaneously or sequentially (wavelength scanning). We demonstrate
that such a broadband diffractive network, regardless of its material
dispersion, can successfully approximate N_w unique complex-valued linear
transforms with a negligible error when the number of diffractive neurons (N)
in its design matches or exceeds 2 x N_w x N_i x N_o. We further report that
the spectral multiplexing capability (N_w) can be increased by increasing N;
our numerical analyses confirm these conclusions for N_w > 180, which can be
further increased to e.g., ~2000 depending on the upper bound of the
approximation error. Massively parallel, wavelength-multiplexed diffractive
networks will be useful for designing high-throughput intelligent machine
vision systems and hyperspectral processors that can perform statistical
inference and analyze objects/scenes with unique spectral properties.
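As a sanity check on the neuron-count condition stated above, the following minimal NumPy sketch illustrates the underlying dimension counting. It is not the paper's diffractive model: it assumes, purely for illustration, that each wavelength channel's transform depends linearly on N shared real-valued "neuron" parameters through a random complex sensitivity matrix, and then checks by least squares whether N_w arbitrary complex N_o x N_i targets can be matched. The function name fit_residual and the toy parameterization are assumptions introduced here.

```python
# Toy dimension-counting check for the condition N >= 2 * N_w * N_i * N_o.
# Hypothetical linear surrogate, NOT the paper's diffractive model: each target
# matrix entry is a fixed random complex linear function of N real parameters
# shared across all wavelength channels.
import numpy as np

rng = np.random.default_rng(0)

def fit_residual(N, N_w=3, N_i=4, N_o=4):
    """Least-squares residual when N real parameters must reproduce
    N_w arbitrary complex N_o x N_i target transformations."""
    n_targets = N_w * N_i * N_o                              # complex equations
    # Random complex sensitivity of every target entry to every neuron parameter.
    G = rng.normal(size=(n_targets, N)) + 1j * rng.normal(size=(n_targets, N))
    # Arbitrarily-selected complex-valued targets, one set per wavelength channel.
    targets = rng.normal(size=n_targets) + 1j * rng.normal(size=n_targets)
    # Stack real and imaginary parts: 2 * N_w * N_i * N_o real equations in N unknowns.
    A = np.vstack([G.real, G.imag])
    b = np.concatenate([targets.real, targets.imag])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ t - b)

N_min = 2 * 3 * 4 * 4                                        # 2 * N_w * N_i * N_o = 96
for N in (N_min // 2, N_min - 1, N_min, 2 * N_min):
    print(f"N = {N:4d}  residual = {fit_residual(N):.2e}")
```

In this toy setting the printed residual stays clearly above zero while N is below 2 * N_w * N_i * N_o and drops to numerical noise once N reaches the threshold, which is the counting intuition behind the condition reported in the abstract.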
Related papers
- Pyramid diffractive optical networks for unidirectional image magnification and demagnification [0.0]
We present a pyramid-structured diffractive optical network design (which we term P-D2NN) for unidirectional image magnification and demagnification.
The P-D2NN design creates high-fidelity magnified or demagnified images in only one direction, while inhibiting the image formation in the opposite direction.
arXiv Detail & Related papers (2023-08-29T04:46:52Z) - Large Reconfigurable Quantum Circuits with SPAD Arrays and Multimode
Fibers [1.5992461683527883]
Integrated optics provides a natural platform for tunable photonic circuits, but faces challenges when high dimensions and high connectivity are involved.
Here, we implement high-dimensional linear transformations on spatial modes of photons using wavefront shaping together with mode mixing in a multimode fiber.
In order to prove the suitability of our approach for quantum technologies we demonstrate two-photon interferences in a tunable complex linear network.
arXiv Detail & Related papers (2023-05-25T16:07:38Z) - Hyper-entanglement between pulse modes and frequency bins [101.18253437732933]
Hyper-entanglement between two or more photonic degrees of freedom (DOF) can enhance and enable new quantum protocols.
We demonstrate the generation of photon pairs hyper-entangled between pulse modes and frequency bins.
arXiv Detail & Related papers (2023-04-24T15:43:08Z) - Universal Linear Intensity Transformations Using Spatially-Incoherent
Diffractive Processors [0.0]
Under spatially-coherent light, a diffractive optical network can be designed to perform arbitrary complex-valued linear transformations.
We numerically demonstrate that a spatially-incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation.
arXiv Detail & Related papers (2023-03-23T04:51:01Z) - On the Effective Number of Linear Regions in Shallow Univariate ReLU
Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z) - Polarization Multiplexed Diffractive Computing: All-Optical
Implementation of a Group of Linear Transformations Through a
Polarization-Encoded Diffractive Network [0.0]
We introduce a polarization multiplexed diffractive processor to all-optically perform arbitrary linear transformations.
A single diffractive network can successfully approximate and all-optically implement a group of arbitrarily-selected target transformations.
This processor can find various applications in optical computing and polarization-based machine vision tasks.
arXiv Detail & Related papers (2022-03-25T07:10:47Z) - Complete conversion between one and two photons in nonlinear waveguides
with tailored dispersion [62.997667081978825]
We show theoretically how to control coherent conversion between a narrow-band pump photon and broadband photon pairs in nonlinear optical waveguides.
We reveal that complete deterministic conversion as well as pump-photon revival can be achieved at a finite propagation distance.
arXiv Detail & Related papers (2021-10-06T23:49:44Z) - All-Optical Synthesis of an Arbitrary Linear Transformation Using
Diffractive Surfaces [0.0]
We report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (N_i pixels) and output (N_o pixels) field-of-view.
We also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation.
Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is N_i x N_o or larger, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error.
arXiv Detail & Related papers (2021-08-22T20:40:35Z) - A Convergence Theory Towards Practical Over-parameterized Deep Neural
Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks with quadratic widths in the sample size and linear in their depth at a time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z) - Learning to Beamform in Heterogeneous Massive MIMO Networks [48.62625893368218]
Finding the optimal beamformers in massive multiple-input multiple-output (MIMO) networks is a well-known problem.
We propose a novel deep learning-based algorithm to address this problem.
arXiv Detail & Related papers (2020-11-08T12:48:06Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)