Unidirectional Imaging using Deep Learning-Designed Materials
- URL: http://arxiv.org/abs/2212.02025v1
- Date: Mon, 5 Dec 2022 04:43:03 GMT
- Title: Unidirectional Imaging using Deep Learning-Designed Materials
- Authors: Jingxi Li, Tianyi Gan, Yifan Zhao, Bijie Bai, Che-Yung Shen, Songyu
Sun, Mona Jarrahi, Aydogan Ozcan
- Abstract summary: A unidirectional imager would only permit image formation along one direction, from an input field-of-view (FOV) A to an output FOV B, while blocking image formation along the reverse path.
Here, we report the first demonstration of unidirectional imagers, presenting polarization-insensitive and broadband unidirectional imaging based on successive diffractive layers that are linear and isotropic.
These diffractive layers are optimized using deep learning and consist of hundreds of thousands of diffractive phase features, which collectively modulate the incoming fields and project an intensity image of the input onto an output FOV, while blocking the image formation in the reverse direction.
- Score: 13.048762595058058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A unidirectional imager would only permit image formation along one
direction, from an input field-of-view (FOV) A to an output FOV B, and in the
reverse path, the image formation would be blocked. Here, we report the first
demonstration of unidirectional imagers, presenting polarization-insensitive
and broadband unidirectional imaging based on successive diffractive layers
that are linear and isotropic. These diffractive layers are optimized using
deep learning and consist of hundreds of thousands of diffractive phase
features, which collectively modulate the incoming fields and project an
intensity image of the input onto an output FOV, while blocking the image
formation in the reverse direction. After their deep learning-based training,
the resulting diffractive layers are fabricated to form a unidirectional
imager. As a reciprocal device, the diffractive unidirectional imager has
asymmetric mode processing capabilities in the forward and backward directions,
where the optical modes from B to A are selectively guided/scattered to miss
the output FOV, whereas for the forward direction such modal losses are
minimized, yielding an ideal imaging system between the input and output FOVs.
Although trained using monochromatic illumination, the diffractive
unidirectional imager maintains its functionality over a large spectral band
and works under broadband illumination. We experimentally validated this
unidirectional imager using terahertz radiation, closely matching our
numerical results. Using the same deep learning-based design strategy, we also
created a wavelength-selective unidirectional imager, where two unidirectional
imaging operations, in reverse directions, are multiplexed through different
illumination wavelengths. Diffractive unidirectional imaging using structured
materials will have numerous applications in, e.g., security, defense,
telecommunications, and privacy protection.
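To make the design strategy concrete, the following is a minimal sketch (not the authors' released code) of how such phase-only diffractive layers can be trained: free-space propagation between planes is modeled with the angular spectrum method, each layer applies a trainable phase, and the loss rewards faithful intensity imaging in the forward (A to B) direction while penalizing power that reaches the output FOV in the backward (B to A) direction. The grid size, wavelength, pixel pitch, layer count, spacings, and loss weight are illustrative assumptions, and the backward pass is approximated by sending the field through the reversed layer stack.

```python
import torch

N = 64          # pixels per side of each plane (assumed)
LAM = 0.75e-3   # wavelength in meters, ~0.4 THz (assumed)
DX = 0.4e-3     # pixel pitch in meters (assumed)
Z = 20e-3       # plane-to-plane spacing in meters (assumed)
K = 3           # number of diffractive layers (assumed)

def asm_transfer(n, dx, lam, z):
    """Angular-spectrum transfer function H(fx, fy) for free-space distance z."""
    f = torch.fft.fftfreq(n, d=dx)
    fx, fy = torch.meshgrid(f, f, indexing="ij")
    arg = 1.0 - (lam * fx) ** 2 - (lam * fy) ** 2
    kz = (2 * torch.pi / lam) * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * z)
    return torch.where(arg >= 0, H, torch.zeros_like(H))  # drop evanescent waves

def propagate(u, H):
    return torch.fft.ifft2(torch.fft.fft2(u) * H)

H = asm_transfer(N, DX, LAM, Z)
phases = [torch.zeros(N, N, requires_grad=True) for _ in range(K)]
opt = torch.optim.Adam(phases, lr=0.05)

def run(u, stack):
    """Free-space hop + phase-only modulation per layer, then a final hop."""
    for phi in stack:
        u = propagate(u, H) * torch.exp(1j * phi)
    return propagate(u, H)

for step in range(200):
    img = (torch.rand(N, N) > 0.5).float()        # stand-in training image
    u0 = img.to(torch.complex64)
    i_fwd = run(u0, phases).abs() ** 2            # A -> B
    i_bwd = run(u0, phases[::-1]).abs() ** 2      # B -> A (reversed stack)
    # Forward: match the normalized input intensity; backward: suppress FOV power.
    loss = torch.mean((i_fwd / i_fwd.mean() - img / (img.mean() + 1e-8)) ** 2) \
           + 0.5 * i_bwd.mean()                   # weight 0.5 is an assumption
    opt.zero_grad(); loss.backward(); opt.step()
```

In a rigorous treatment, reciprocity fixes the backward response as the transpose of the forward operator; the reversed-stack shortcut above simply keeps the sketch short. A wavelength-multiplexed variant, as in the paper's wavelength-selective design, would add a second pair of forward/backward terms evaluated with the transfer function of the second wavelength.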
Related papers
- Pixel-Aligned Multi-View Generation with Depth Guided Decoder [86.1813201212539]
We propose a novel method for pixel-level image-to-multi-view generation.
Unlike prior work, we incorporate attention layers across multi-view images in the VAE decoder of a latent video diffusion model.
Our model enables better pixel alignment across multi-view images.
arXiv Detail & Related papers (2024-08-26T04:56:41Z) - Unidirectional imaging with partially coherent light [9.98086643673809]
Unidirectional imagers form images of input objects only in one direction, e.g., from field-of-view (FOV) A to FOV B, while blocking the image formation in the reverse direction.
Here, we report unidirectional imaging under spatially partially coherent light and demonstrate high-quality imaging only in the forward direction.
arXiv Detail & Related papers (2024-08-10T06:01:06Z) - Pyramid diffractive optical networks for unidirectional image magnification and demagnification [0.0]
We present a pyramid-structured diffractive optical network design (which we term P-D2NN) for unidirectional image magnification and demagnification.
The P-D2NN design creates high-fidelity magnified or demagnified images in only one direction, while inhibiting the image formation in the opposite direction.
arXiv Detail & Related papers (2023-08-29T04:46:52Z) - Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - Multi-Projection Fusion and Refinement Network for Salient Object Detection in 360° Omnidirectional Image [141.10227079090419]
We propose a Multi-Projection Fusion and Refinement Network (MPFR-Net) to detect salient objects in 360° omnidirectional images.
MPFR-Net uses the equirectangular projection image and four corresponding cube-unfolding images as inputs.
Experimental results on two omnidirectional datasets demonstrate that the proposed approach outperforms the state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-23T14:50:40Z) - A Constrained Deformable Convolutional Network for Efficient Single Image Dynamic Scene Blind Deblurring with Spatially-Variant Motion Blur Kernels Estimation [12.744989551644744]
We propose a novel constrained deformable convolutional network (CDCN) for efficient single image dynamic scene blind deblurring.
CDCN simultaneously achieves accurate spatially-variant motion blur kernel estimation and high-quality image restoration.
arXiv Detail & Related papers (2022-08-23T03:28:21Z) - Distortion-Tolerant Monocular Depth Estimation On Omnidirectional Images Using Dual-cubemap [37.82642960470551]
We propose a distortion-tolerant omnidirectional depth estimation algorithm using a dual-cubemap.
In the DCDE module, we present a rotation-based dual-cubemap model to estimate accurate NFoV depth.
A boundary revision module is then designed to smooth discontinuous boundaries, contributing to precise and visually continuous omnidirectional depths.
arXiv Detail & Related papers (2022-03-18T04:20:36Z) - Deep Attentional Guided Image Filtering [90.20699804116646]
The guided filter is a fundamental tool in computer vision and computer graphics.
We propose an effective framework named deep attentional guided image filtering.
We show that the proposed framework compares favorably with the state-of-the-art methods in a wide range of guided image filtering applications (a sketch of the classical guided filter follows after this list).
arXiv Detail & Related papers (2021-12-13T03:26:43Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Deep Learning for Multi-View Ultrasonic Image Fusion [2.1410799064827226]
The Delay-And-Sum (DAS) algorithm forms images using the main path along which reflected signals travel back to the transducers (a minimal DAS sketch follows after this list).
Traditional image fusion techniques typically use ad-hoc combinations of pre-defined image transforms, pooling operations and thresholding.
We propose a deep neural network architecture that directly maps all available data to a segmentation map while explicitly incorporating the DAS image formation for the different insonification paths as network layers.
arXiv Detail & Related papers (2021-09-08T13:04:07Z) - Deep Photometric Stereo for Non-Lambertian Surfaces [89.05501463107673]
We introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN.
PS-FCN learns the mapping from reflectance observations to surface normals, and is able to handle surfaces with general and unknown isotropic reflectance.
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
arXiv Detail & Related papers (2020-07-26T15:20:53Z)
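For context on the guided-filtering entry above: the classical guided filter fits a local linear model q = a·I + b between a guide image I and an input p in each window, then averages the coefficients to produce an edge-preserving output. The sketch below is a minimal single-channel version of that standard baseline (not the paper's learned, attentional variant); the radius r and regularizer eps are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving filtering of p guided by I (float arrays, values ~[0, 1])."""
    size = 2 * r + 1                       # box-filter window
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)             # per-window linear coefficients
    b = mean_p - a * mean_I
    # Average the coefficients, then apply the linear model at every pixel.
    return uniform_filter(a, size) * I + uniform_filter(b, size)

# Toy usage: smooth a noisy step edge using the clean edge as its own guide.
I = np.tile(np.concatenate([np.zeros(32), np.ones(32)]), (64, 1))
q = guided_filter(I, I + 0.1 * np.random.randn(64, 64))
```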
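The DAS image formation named in the ultrasonic-fusion entry above is likewise compact: for each pixel, every receive channel is sampled at the round-trip travel time from transmit to pixel to element, and the samples are summed. This is a minimal sketch under assumed conditions (plane-wave transmit, 32-element linear array, idealized point echoes); the sound speed, sampling rate, and geometry are illustrative, not from the paper.

```python
import numpy as np

C = 1540.0                                  # sound speed in m/s (assumed)
FS = 40e6                                   # sampling rate in Hz (assumed)
ELEM_X = np.linspace(-5e-3, 5e-3, 32)       # 32-element linear array (assumed)

def das_image(rf, grid_x, grid_z):
    """rf: (n_elements, n_samples) echoes after one plane-wave transmit.
    Returns the delay-and-sum image on the (z, x) grid."""
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # Round trip: plane wave down to depth z, then back to each element.
            t = (z + np.sqrt(z ** 2 + (x - ELEM_X) ** 2)) / C
            s = np.round(t * FS).astype(int)
            ok = s < rf.shape[1]
            img[iz, ix] = rf[np.arange(len(ELEM_X))[ok], s[ok]].sum()
    return img

# Toy usage: synthesize echoes from one point scatterer, then reconstruct.
rf = np.zeros((32, 4000))
t = (20e-3 + np.sqrt((20e-3) ** 2 + (0.0 - ELEM_X) ** 2)) / C
rf[np.arange(32), np.round(t * FS).astype(int)] = 1.0
img = das_image(rf, np.linspace(-5e-3, 5e-3, 41), np.linspace(10e-3, 30e-3, 81))
print(img.max())  # peak (= 32) should appear at the scatterer position
```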