Unidirectional imaging with partially coherent light
- URL: http://arxiv.org/abs/2408.05449v1
- Date: Sat, 10 Aug 2024 06:01:06 GMT
- Title: Unidirectional imaging with partially coherent light
- Authors: Guangdong Ma, Che-Yung Shen, Jingxi Li, Luzhe Huang, Cagatay Isil, Fazil Onuralp Ardic, Xilin Yang, Yuhang Li, Yuntian Wang, Md Sadman Sakib Rahman, Aydogan Ozcan
- Abstract summary: Unidirectional imagers form images of input objects only in one direction, e.g., from field-of-view (FOV) A to FOV B, while blocking the image formation in the reverse direction.
Here, we report unidirectional imaging under spatially partially coherent light and demonstrate high-quality imaging only in the forward direction.
- Score: 9.98086643673809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unidirectional imagers form images of input objects only in one direction, e.g., from field-of-view (FOV) A to FOV B, while blocking the image formation in the reverse direction, from FOV B to FOV A. Here, we report unidirectional imaging under spatially partially coherent light and demonstrate high-quality imaging only in the forward direction (A->B) with high power efficiency while distorting the image formation in the backward direction (B->A) along with low power efficiency. Our reciprocal design features a set of spatially engineered linear diffractive layers that are statistically optimized for partially coherent illumination with a given phase correlation length. Our analyses reveal that when illuminated by a partially coherent beam with a correlation length of ~1.5 w or larger, where w is the wavelength of light, diffractive unidirectional imagers achieve robust performance, exhibiting asymmetric imaging performance between the forward and backward directions - as desired. A partially coherent unidirectional imager designed with a smaller correlation length of less than 1.5 w still supports unidirectional image transmission, but with a reduced figure of merit. These partially coherent diffractive unidirectional imagers are compact (axially spanning less than 75 w), polarization-independent, and compatible with various types of illumination sources, making them well-suited for applications in asymmetric visual information processing and communication.
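The correlation-length regime the abstract describes (~1.5 w, where w is the wavelength) can be illustrated with a toy simulation. The Gaussian phase-screen model below is an illustrative assumption, not the paper's actual statistical optimization pipeline; it only shows how an ensemble of fields with a prescribed phase correlation length can be generated and how coherence is measured over that ensemble.

```python
import numpy as np

def phase_screen(n, dx, corr_len, rng):
    """Random phase screen with a Gaussian correlation length
    (hypothetical model of spatially partially coherent illumination)."""
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Gaussian low-pass filter sets the spatial correlation length of the phase
    filt = np.exp(-(np.pi * corr_len) ** 2 * (FX ** 2 + FY ** 2))
    noise = rng.standard_normal((n, n))
    screen = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    screen *= np.pi / screen.std()  # rescale to strong phase modulation
    return screen

rng = np.random.default_rng(0)
wavelength = 1.0             # work in units of the wavelength w
corr_len = 1.5 * wavelength  # the ~1.5 w regime from the abstract
n, dx = 128, 0.25
# Ensemble of partially coherent fields: unit plane wave times random screens
fields = [np.exp(1j * phase_screen(n, dx, corr_len, rng)) for _ in range(64)]
# Mutual coherence for a one-pixel shift, averaged over the ensemble
mu = np.mean([f[64] * np.conj(np.roll(f[64], -1)) for f in fields], axis=0)
print(abs(mu).mean())  # stays high for shifts well below corr_len
```

A diffractive design optimized "statistically" in the paper's sense would average its loss over many such field realizations rather than over a single coherent wavefront.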
Related papers
- LFIC-DRASC: Deep Light Field Image Compression Using Disentangled Representation and Asymmetrical Strip Convolution [51.909036244222904]
We propose an end-to-end deep LF Image Compression method using Disentangled Representation and Asymmetrical Strip Convolution.
Experimental results demonstrate that the proposed LFIC-DRASC achieves an average of 20.5% bit rate reductions.
arXiv Detail & Related papers (2024-09-18T05:33:42Z)
- Single-photon description of the lossless optical Y coupler [41.94295877935867]
We derive a unitary scattering matrix for a three-port optical Y-coupler or Y-branch.
Unlike traditional passive linear-optical one-way splitters, coupling light into the conventional output ports of the Y-coupler results in strong coherent back-reflections.
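The back-reflection claim follows from unitarity of a lossless, reciprocal three-port. A small numerical check makes this concrete; the specific matrix below is an illustrative textbook choice for a symmetric Y-branch, not necessarily the one derived in the paper.

```python
import numpy as np

s = 1 / np.sqrt(2.0)
# One lossless, reciprocal (symmetric) scattering matrix for a 3-port
# Y-branch: port 1 is the single input arm, ports 2 and 3 the split arms.
S = np.array([[0.0,  s,    s  ],
              [s,    0.5, -0.5],
              [s,   -0.5,  0.5]])

# Lossless => S is unitary; reciprocal => S is symmetric
assert np.allclose(S @ S.conj().T, np.eye(3))
assert np.allclose(S, S.T)

# Forward use: light into port 1 splits 50/50 with no back-reflection
a_fwd = np.array([1.0, 0.0, 0.0])
print(np.abs(S @ a_fwd) ** 2)   # [0.  0.5 0.5]

# Reverse use: light into output port 2 is NOT all routed back to port 1;
# 25% reflects coherently and 25% crosses over to port 3
a_rev = np.array([0.0, 1.0, 0.0])
print(np.abs(S @ a_rev) ** 2)   # [0.5  0.25 0.25]
```

Unitarity forces nonzero diagonal entries on the split-arm ports, which is exactly the "strong coherent back-reflection" behavior the summary describes.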
arXiv Detail & Related papers (2024-08-27T15:55:43Z)
- Pyramid diffractive optical networks for unidirectional image magnification and demagnification [0.0]
We present a pyramid-structured diffractive optical network design (which we term P-D2NN) for unidirectional image magnification and demagnification.
The P-D2NN design creates high-fidelity magnified or demagnified images in only one direction, while inhibiting the image formation in the opposite direction.
arXiv Detail & Related papers (2023-08-29T04:46:52Z)
- Shaping Single Photons through Multimode Optical Fibers using Mechanical Perturbations [55.41644538483948]
We show an all-fiber approach for controlling the shape of single photons and the spatial correlations between entangled photon pairs.
We optimize these perturbations to localize the spatial distribution of a single photon or the spatial correlations of photon pairs in a single spot.
arXiv Detail & Related papers (2023-06-04T07:33:39Z)
- Universal Linear Intensity Transformations Using Spatially-Incoherent Diffractive Processors [0.0]
Under spatially-incoherent light, a diffractive optical network can be designed to perform arbitrary complex-valued linear transformations.
We numerically demonstrate that a spatially-incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation.
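The key physical fact behind this result is that under spatially incoherent illumination, intensities from independent point sources add, so the optical system acts linearly on the intensity pattern through the squared magnitude of its coherent point responses. A minimal sketch (the random response matrix is a stand-in assumption, not a trained diffractive network):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 8, 8
# Coherent response of a hypothetical diffractive system: one complex
# output field pattern per input point source
H = rng.standard_normal((n_out, n_in)) + 1j * rng.standard_normal((n_out, n_in))

def incoherent_output(intensity_in):
    """Spatially incoherent illumination: point-source intensities add,
    so the system maps intensities linearly via |H|^2."""
    return (np.abs(H) ** 2) @ intensity_in

I1 = rng.random(n_in)
I2 = rng.random(n_in)
# Linearity in intensity: superposition of mutually incoherent sources
lhs = incoherent_output(2.0 * I1 + 3.0 * I2)
rhs = 2.0 * incoherent_output(I1) + 3.0 * incoherent_output(I2)
print(np.allclose(lhs, rhs))  # True
```

Training the diffractive layers shapes the entries of this effective intensity-transfer matrix toward an arbitrary target transformation.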
arXiv Detail & Related papers (2023-03-23T04:51:01Z)
- Unidirectional Imaging using Deep Learning-Designed Materials [13.048762595058058]
A unidirectional imager would only permit image formation along one direction, from an input field-of-view (FOV) A to an output FOV B, while blocking image formation along the reverse path.
Here, we report the first demonstration of unidirectional imagers, presenting polarization-insensitive and broadband unidirectional imaging based on successive diffractive layers that are linear and isotropic.
These diffractive layers are optimized using deep learning and consist of hundreds of thousands of diffractive phase features, which collectively modulate the incoming fields and project an intensity image of the input onto an output FOV, while blocking the image formation in the reverse direction.
arXiv Detail & Related papers (2022-12-05T04:43:03Z)
- Extended Source of Indistinguishable Polarization-entangled Photons over Wide Angles of Emission [1.160208922584163]
We extend the temporal and spatial indistinguishability of polarization-entangled photons over wide emission angles.
We employ a phase-only two-dimensional spatial light modulator (2D SLM) loaded by the complementary of the relative phase map.
A 97% polarization visibility is verified for the entangled photon pairs scattered widely across the SPDC cone.
arXiv Detail & Related papers (2021-06-13T15:33:23Z)
- High-Resolution Optical Flow from 1D Attention and Correlation [89.61824964952949]
We propose a new method for high-resolution optical flow estimation with significantly less computation.
We first perform a 1D attention operation in the vertical direction of the target image, and then a simple 1D correlation in the horizontal direction of the attended image.
Experiments on Sintel, KITTI and real-world 4K resolution images demonstrated the effectiveness and superiority of our proposed method.
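The two-stage factorization this entry describes (vertical 1D attention, then horizontal 1D correlation) can be sketched in a few lines of numpy. All shapes, the similarity score, and the displacement range below are simplified illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(2)
H, W, C = 6, 5, 4
feat_ref = rng.standard_normal((H, W, C))   # reference-image features
feat_tgt = rng.standard_normal((H, W, C))   # target-image features

# Step 1: 1D attention along the vertical axis of the target image --
# each reference pixel aggregates the target column at the same x,
# weighted by feature similarity (simplified sketch)
scores = np.einsum('hwc,gwc->hwg', feat_ref, feat_tgt) / np.sqrt(C)
attn = softmax(scores, axis=2)              # weights over target rows g
attended = np.einsum('hwg,gwc->hwc', attn, feat_tgt)

# Step 2: 1D correlation along the horizontal axis between the reference
# features and the vertically attended target features
max_disp = 2
corr = np.stack([
    np.sum(feat_ref * np.roll(attended, -d, axis=1), axis=2)
    for d in range(-max_disp, max_disp + 1)
], axis=2)                                  # (H, W, 2*max_disp+1)
print(corr.shape)
```

Factorizing the 2D matching problem into two 1D passes is what gives the claimed computational savings at high resolution: cost scales with H+W per pixel rather than H*W.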
arXiv Detail & Related papers (2021-04-28T17:56:34Z)
- A Parallel Down-Up Fusion Network for Salient Object Detection in Optical Remote Sensing Images [82.87122287748791]
We propose a novel Parallel Down-up Fusion network (PDF-Net) for salient object detection in optical remote sensing images (RSIs).
It takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
Experiments on the ORSSD dataset demonstrate that the proposed network is superior to the state-of-the-art approaches both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-10-02T05:27:57Z) - Correlation Plenoptic Imaging between Arbitrary Planes [52.77024349608834]
We show that the protocol enables changing the focused planes in post-processing and achieving an unprecedented combination of image resolution and depth of field.
Results lead the way towards the development of compact designs for correlation plenoptic imaging devices based on chaotic light, as well as high-SNR plenoptic imaging devices based on entangled photon illumination.
arXiv Detail & Related papers (2020-07-23T14:26:14Z) - Light Field Spatial Super-resolution via Deep Combinatorial Geometry
Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional spatiality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.