Super-resolution imaging using super-oscillatory diffractive neural networks
- URL: http://arxiv.org/abs/2406.19126v1
- Date: Thu, 27 Jun 2024 12:16:35 GMT
- Title: Super-resolution imaging using super-oscillatory diffractive neural networks
- Authors: Hang Chen, Sheng Gao, Zejia Zhao, Zhengyang Duan, Haiou Zhang, Gordon Wetzstein, Xing Lin,
- Abstract summary: The super-oscillatory diffractive neural network (SODNN) can achieve super-resolved spatial resolution for imaging beyond the diffraction limit.
SODNN is constructed by utilizing diffractive layers to implement optical interconnections and imaging samples or biological sensors.
Our research work will inspire the development of intelligent optical instruments to facilitate the applications of imaging, sensing, perception, etc.
- Score: 31.825503659600702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical super-oscillation enables far-field super-resolution imaging beyond the diffraction limit. However, existing super-oscillatory lenses for spatial super-resolution imaging systems still face critical performance limitations due to the lack of advanced design methods and limited design degrees of freedom. Here, we propose an optical super-oscillatory diffractive neural network, i.e., SODNN, that achieves super-resolved spatial resolution for imaging beyond the diffraction limit with superior performance over existing methods. SODNN is constructed by utilizing diffractive layers to implement optical interconnections and imaging samples or biological sensors to implement nonlinearity, modulating the incident optical field to create optical super-oscillation effects in 3D space and generate super-resolved focal spots. By optimizing the diffractive layers with 3D optical field constraints at an incident wavelength of $\lambda$, we achieved a super-oscillatory spot with a full width at half maximum of 0.407$\lambda$ at a far-field distance over 400$\lambda$, without side-lobes over the field of view and with a depth of field exceeding 10$\lambda$. Furthermore, SODNN implements multi-wavelength and multi-focus spot arrays that effectively avoid chromatic aberration. Our research work will inspire the development of intelligent optical instruments to facilitate applications in imaging, sensing, perception, etc.
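The free-space propagation through diffractive layers that networks like SODNN optimize can be sketched with the scalar angular spectrum method. The snippet below is a minimal illustration, not the authors' code: the grid size, aperture radius, and quadratic lens phase are invented for the example. It propagates a phase-modulated pupil to its focal plane and estimates the focal-spot full width at half maximum (FWHM) in units of $\lambda$; a conventional lens like this stays near the diffraction limit, which is the baseline a super-oscillatory design beats.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2D complex field a distance z via the angular spectrum
    method (scalar diffraction); evanescent components are dropped."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def fwhm(profile, dx):
    """Coarse FWHM estimate: count samples at or above half maximum."""
    return np.count_nonzero(profile >= profile.max() / 2.0) * dx

# Illustrative parameters, all in units of the wavelength lambda.
wavelength = 1.0
dx = 0.25            # sampling interval
n = 512
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2

aperture_radius = 20.0   # pupil radius
focal_length = 100.0     # propagation distance to the focal plane
pupil = (R2 <= aperture_radius**2).astype(float)
lens_phase = -np.pi * R2 / (wavelength * focal_length)  # thin-lens phase
field = pupil * np.exp(1j * lens_phase)

focal = angular_spectrum_propagate(field, wavelength, dx, focal_length)
intensity = np.abs(focal)**2
profile = intensity[n // 2, :]
print(f"focal-spot FWHM ~ {fwhm(profile, dx):.2f} lambda")
```

For this numerical aperture (NA ≈ 0.2) the Airy-pattern FWHM works out to roughly 2.6$\lambda$; SODNN instead optimizes the layer phases under 3D field constraints to squeeze the spot below the 0.5$\lambda$ scale, at the cost of energy pushed outside the field of view.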
Related papers
- "Super-resolution" holographic optical tweezers array [0.0]
We introduce a hologram design method based on the optimisation of a nonlinear cost function using a holographic phase pattern.
We confirm a spot interval of 0.952(1) $\mu$m in a $5 \times 5$ multispot pattern on the focal plane of a high-numerical-aperture (0.75) objective.
The proposed method is expected to advance laser fabrication, scanning laser microscopy, and cold atom physics, among other fields.
arXiv Detail & Related papers (2024-11-06T00:00:21Z)
- Physics-Inspired Degradation Models for Hyperspectral Image Fusion [61.743696362028246]
Most fusion methods solely focus on the fusion algorithm itself and overlook the degradation models.
We propose physics-inspired degradation models (PIDM) to model the degradation of LR-HSI and HR-MSI.
Our proposed PIDM can boost the fusion performance of existing fusion methods in practical scenarios.
arXiv Detail & Related papers (2024-02-04T09:07:28Z)
- Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging [36.208109063579066]
For 3D imaging, light field cameras are typically single-shot and suffer heavily from low spatial resolution and depth accuracy.
We propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions for off-the-shelf light field cameras.
arXiv Detail & Related papers (2023-11-17T15:08:15Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods struggle with scenes containing reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Super-resolution atomic microscopy using orbital angular momentum laser with temporal modulation [6.518672953447181]
We propose a dark-state-based trapping strategy to break the optical diffraction limit for microscopy.
We utilize a spatially dependent coupling field and a probe laser field with temporal and spatial modulation to interact with three-level atoms.
arXiv Detail & Related papers (2022-09-24T03:55:56Z)
- Dual-Stage Approach Toward Hyperspectral Image Super-Resolution [21.68598210467761]
We propose a new structure for hyperspectral image super-resolution (DualSR).
In the coarse stage, five bands with high similarity in a certain spectral range are divided into three groups, and the current band is guided to learn potential knowledge from them.
In the fine stage, an enhanced back-projection method with a spectral angle constraint is developed to learn spatial-spectral consistency.
arXiv Detail & Related papers (2022-04-09T04:36:44Z)
- Ultra-long photonic quantum walks via spin-orbit metasurfaces [52.77024349608834]
We report ultra-long photonic quantum walks across several hundred optical modes, obtained by propagating a light beam through very few closely-stacked liquid-crystal metasurfaces.
With this setup we engineer quantum walks up to 320 discrete steps, far beyond state-of-the-art experiments.
arXiv Detail & Related papers (2022-03-28T19:37:08Z)
- Neural Étendue Expander for Ultra-Wide-Angle High-Fidelity Holographic Display [51.399291206537384]
Modern holographic displays possess low étendue, which is the product of the display area and the maximum solid angle of diffracted light.
We present neural étendue expanders, which are learned from a natural image dataset.
With neural étendue expanders, we experimentally achieve 64$\times$ étendue expansion of natural images in full color, expanding the FOV by an order of magnitude horizontally and vertically.
arXiv Detail & Related papers (2021-09-16T17:21:52Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
- Learning Wavefront Coding for Extended Depth of Field Imaging [4.199844472131922]
Extended depth of field (EDoF) imaging is a challenging ill-posed problem.
We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element.
We demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
arXiv Detail & Related papers (2019-12-31T17:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.