Analysis of Diffractive Neural Networks for Seeing Through Random
Diffusers
- URL: http://arxiv.org/abs/2205.00428v1
- Date: Sun, 1 May 2022 09:12:24 GMT
- Title: Analysis of Diffractive Neural Networks for Seeing Through Random
Diffusers
- Authors: Yuhang Li, Yi Luo, Bijie Bai, Aydogan Ozcan
- Abstract summary: We provide a computer-free, all-optical imaging method for seeing through random, unknown phase diffusers using diffractive neural networks.
By analyzing various diffractive networks designed to image through random diffusers with different correlation lengths, a trade-off between the image reconstruction fidelity and distortion reduction capability of the diffractive network was observed.
- Score: 15.017918620413585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imaging through diffusive media is a challenging problem, where the existing
solutions heavily rely on digital computers to reconstruct distorted images. We
provide a detailed analysis of a computer-free, all-optical imaging method for
seeing through random, unknown phase diffusers using diffractive neural
networks, covering different deep learning-based training strategies. By
analyzing various diffractive networks designed to image through random
diffusers with different correlation lengths, a trade-off between the image
reconstruction fidelity and distortion reduction capability of the diffractive
network was observed. During its training, random diffusers with a range of
correlation lengths were used to improve the diffractive network's
generalization performance. Increasing the number of random diffusers used in
each epoch reduced the overfitting of the diffractive network's imaging
performance to known diffusers. We also demonstrated that the use of additional
diffractive layers improved the generalization capability to see through new,
random diffusers. Finally, we introduced deliberate misalignments in training
to 'vaccinate' the network against random layer-to-layer shifts that might
arise due to the imperfect assembly of the diffractive networks. These analyses
provide a comprehensive guide in designing diffractive networks to see through
random diffusers, which might profoundly impact many fields, such as biomedical
imaging, atmospheric physics, and autonomous driving.
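The optical pipeline the abstract describes (an object distorted by a random phase diffuser, then processed by trainable phase-only diffractive layers) can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the propagation distances, wavelength, grid size, and the Gaussian low-pass construction of the diffuser's correlation length are all illustrative assumptions, and the layer phases are left untrained.

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Propagate a complex field a distance z using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def random_phase_diffuser(n, correlation_px, rng):
    """Random phase screen; the low-pass width sets an effective correlation length."""
    phase = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fx)
    lowpass = np.exp(-(FX**2 + FY**2) * correlation_px**2)
    phase = np.real(np.fft.ifft2(np.fft.fft2(phase) * lowpass))
    phase = 2 * np.pi * (phase - phase.min()) / (np.ptp(phase) + 1e-12)
    return np.exp(1j * phase)

def forward(obj, diffuser, layer_phases, dx, wavelength, z):
    """Object -> diffuser -> K phase-only diffractive layers -> sensor intensity."""
    field = obj * diffuser                    # object field distorted by the diffuser
    for phi in layer_phases:                  # each trainable diffractive layer
        field = angular_spectrum_propagate(field, dx, wavelength, z)
        field = field * np.exp(1j * phi)      # phase-only modulation
    field = angular_spectrum_propagate(field, dx, wavelength, z)
    return np.abs(field) ** 2                 # intensity recorded at the sensor
```

In a full training loop, the layer phases would be optimized (e.g. by gradient descent on an image-fidelity loss) while a fresh set of random diffusers is drawn each epoch, which is the mechanism the abstract credits for reducing overfitting to known diffusers.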
Related papers
- A Tunable Despeckling Neural Network Stabilized via Diffusion Equation [15.996302571895045]
Multiplicative Gamma noise removal is a critical research area in synthetic aperture radar (SAR) imaging.
We propose a tunable, regularized neural network that unrolls a denoising unit and a regularization unit into a single network for end-to-end training.
arXiv Detail & Related papers (2024-11-24T17:08:43Z)
- Effective Diffusion Transformer Architecture for Image Super-Resolution [63.254644431016345]
We design an effective diffusion transformer for image super-resolution (DiT-SR)
In practice, DiT-SR leverages an overall U-shaped architecture, and adopts a uniform isotropic design for all the transformer blocks.
We analyze the limitation of the widely used AdaLN, and present a frequency-adaptive time-step conditioning module.
arXiv Detail & Related papers (2024-09-29T07:14:16Z)
- Neural Network Parameter Diffusion [50.85251415173792]
Diffusion models have achieved remarkable success in image and video generation.
In this work, we demonstrate that diffusion models can also generate high-performing neural network parameters.
arXiv Detail & Related papers (2024-02-20T16:59:03Z)
- GAN-driven Electromagnetic Imaging of 2-D Dielectric Scatterers [4.510838705378781]
Inverse scattering problems are inherently challenging, given that they are ill-posed and nonlinear.
This paper presents a powerful deep learning-based approach that relies on generative adversarial networks.
A cohesive inverse neural network (INN) framework is set up comprising a sequence of appropriately designed dense layers.
The trained INN demonstrates an enhanced robustness, evidenced by a mean binary cross-entropy (BCE) loss of $0.13$ and a structure similarity index (SSI) of $0.90$.
arXiv Detail & Related papers (2024-02-16T17:03:08Z)
- Deceptive-NeRF/3DGS: Diffusion-Generated Pseudo-Observations for High-Quality Sparse-View Reconstruction [60.52716381465063]
We introduce Deceptive-NeRF/3DGS to enhance sparse-view reconstruction with only a limited set of input images.
Specifically, we propose a deceptive diffusion model turning noisy images rendered from few-view reconstructions into high-quality pseudo-observations.
Our system progressively incorporates diffusion-generated pseudo-observations into the training image sets, ultimately densifying the sparse input observations by 5 to 10 times.
arXiv Detail & Related papers (2023-05-24T14:00:32Z)
- Denoising Diffusion Models for Plug-and-Play Image Restoration [135.6359475784627]
This paper proposes DiffPIR, which integrates the traditional plug-and-play method into the diffusion sampling framework.
Compared to plug-and-play IR methods that rely on discriminative Gaussian denoisers, DiffPIR is expected to inherit the generative ability of diffusion models.
arXiv Detail & Related papers (2023-05-15T20:24:38Z)
- DIRE for Diffusion-Generated Image Detection [128.95822613047298]
We propose a novel representation called DIffusion Reconstruction Error (DIRE)
DIRE measures the error between an input image and its reconstruction counterpart by a pre-trained diffusion model.
It provides a hint that DIRE can serve as a bridge to distinguish generated and real images.
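The DIRE definition above (error between an image and its reconstruction by a pre-trained diffusion model) can be sketched abstractly. The `invert` and `reconstruct` callables stand in for model-specific steps (e.g. DDIM inversion and sampling) that are assumptions here, not part of this summary.

```python
import numpy as np

def dire(image, invert, reconstruct):
    """DIRE: per-pixel error between an image and its diffusion reconstruction.

    `invert` maps an image to diffusion latents and `reconstruct` maps latents
    back to an image; both depend on the pre-trained diffusion model used.
    Real images are expected to reconstruct less faithfully than generated ones,
    so a larger DIRE suggests a real image.
    """
    latents = invert(image)
    recon = reconstruct(latents)
    return np.abs(image - recon)
```

A downstream detector would then threshold or classify on a summary of this error map (e.g. its mean).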
arXiv Detail & Related papers (2023-03-16T13:15:03Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
- All-optical image classification through unknown random diffusers using a single-pixel diffractive network [13.7472825798265]
Classification of an object behind a random and unknown scattering medium poses a challenging task for the computational imaging and machine vision fields.
Recent deep learning-based approaches demonstrated the classification of objects using diffuser-distorted patterns collected by an image sensor.
Here, we present an all-optical processor to directly classify unknown objects through unknown, random phase diffusers using broadband illumination detected with a single pixel.
arXiv Detail & Related papers (2022-08-08T08:26:08Z)
- Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain [31.182376196295365]
CNNs tend to converge to a local optimum that is closely related to the high-frequency components of the training images.
We propose a new data augmentation strategy that recombines the phase spectrum of the current image with the amplitude spectrum of a distracter image.
arXiv Detail & Related papers (2021-08-19T04:04:41Z)
- Misalignment Resilient Diffractive Optical Networks [14.520023891142698]
We introduce and experimentally demonstrate a new training scheme that significantly increases the robustness of diffractive networks against 3D misalignments and fabrication tolerances.
By modeling the undesired layer-to-layer misalignments in 3D as continuous random variables in the optical forward model, diffractive networks are trained to maintain their inference accuracy over a large range of misalignments.
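The training idea above, treating layer-to-layer misalignments as random variables in the forward model, can be illustrated with a toy augmentation step. This sketch assumes integer-pixel lateral shifts only (the paper models continuous 3D misalignments, including axial ones), so it is a simplification for illustration.

```python
import numpy as np

def jitter_layer(phase, max_shift_px, rng):
    """Apply a random lateral shift to a diffractive layer's phase pattern.

    During training, sampling a fresh shift for every forward pass forces the
    network to keep its inference accuracy over the whole misalignment range,
    'vaccinating' it against imperfect physical assembly.
    """
    sx, sy = rng.integers(-max_shift_px, max_shift_px + 1, size=2)
    return np.roll(np.roll(phase, sx, axis=0), sy, axis=1)
```

In a real pipeline this jitter would be applied to each layer independently inside the differentiable optical forward model, before computing the training loss.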
arXiv Detail & Related papers (2020-05-23T04:22:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.