Deep learning-based virtual refocusing of images using an engineered
point-spread function
- URL: http://arxiv.org/abs/2012.11892v1
- Date: Tue, 22 Dec 2020 09:15:26 GMT
- Title: Deep learning-based virtual refocusing of images using an engineered
point-spread function
- Authors: Xilin Yang, Luzhe Huang, Yilin Luo, Yichen Wu, Hongda Wang, Yair
Rivenson, and Aydogan Ozcan
- Abstract summary: We present a virtual image refocusing method over an extended depth of field (DOF) enabled by cascaded neural networks and a double-helix point-spread function (DH-PSF). We extend the DOF of a fluorescence microscope by ~20-fold.
- Score: 1.2977570993112095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a virtual image refocusing method over an extended depth of field
(DOF) enabled by cascaded neural networks and a double-helix point-spread
function (DH-PSF). This network model, referred to as W-Net, is composed of two
cascaded generator and discriminator network pairs. The first generator network
learns to virtually refocus an input image onto a user-defined plane, while the
second generator learns to perform a cross-modality image transformation,
improving the lateral resolution of the output image. Using this W-Net model
with DH-PSF engineering, we extend the DOF of a fluorescence microscope by
~20-fold. This approach can be applied to develop deep learning-enabled image
reconstruction methods for localization microscopy techniques that utilize
engineered PSFs to improve their imaging performance, including spatial
resolution and volumetric imaging throughput.
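The cascaded two-generator design described above can be sketched in a toy form. Everything below is a hypothetical stand-in: the real G1 and G2 are trained GAN generators, whereas here G1 is mocked as a plane-dependent blur and G2 as unsharp masking, purely to illustrate the cascade structure.

```python
import numpy as np

def g1_refocus(image: np.ndarray, target_plane: float) -> np.ndarray:
    """Stand-in for the first generator: virtual refocusing onto a
    user-defined plane (modeled here as a defocus-dependent box blur)."""
    k = max(1, int(abs(target_plane)))            # blur radius grows with defocus
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)

def g2_enhance(image: np.ndarray) -> np.ndarray:
    """Stand-in for the second generator: cross-modality transform that
    improves lateral resolution (unsharp masking here)."""
    kernel = np.ones(3) / 3
    smooth = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)
    return image + 1.5 * (image - smooth)

def w_net(image: np.ndarray, target_plane: float) -> np.ndarray:
    # The W-Net cascade: refocus first, then enhance resolution.
    return g2_enhance(g1_refocus(image, target_plane))

img = np.random.rand(8, 8)
out = w_net(img, target_plane=2.0)
print(out.shape)  # (8, 8)
```

The key structural point is only the composition: the second network operates on the output of the first, so each stage can specialize (refocusing vs. resolution enhancement).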
Related papers
- CWT-Net: Super-resolution of Histopathology Images Using a Cross-scale Wavelet-based Transformer [15.930878163092983]
Super-resolution (SR) aims to enhance the quality of low-resolution images and has been widely applied in medical imaging.
We propose a novel network called CWT-Net, which leverages cross-scale image wavelet transform and Transformer architecture.
Our model significantly outperforms state-of-the-art methods in both performance and visualization evaluations.
arXiv Detail & Related papers (2024-09-11T08:26:28Z)
- Pixel-Aligned Multi-View Generation with Depth Guided Decoder [86.1813201212539]
We propose a novel method for pixel-level image-to-multi-view generation.
Unlike prior work, we incorporate attention layers across multi-view images in the VAE decoder of a latent video diffusion model.
Our model enables better pixel alignment across multi-view images.
arXiv Detail & Related papers (2024-08-26T04:56:41Z)
- Deep Linear Array Pushbroom Image Restoration: A Degradation Pipeline and Jitter-Aware Restoration Network [26.86292926584254]
Linear Array Pushbroom (LAP) imaging technology is widely used in the realm of remote sensing.
Traditional methods for restoring LAP images, such as algorithms estimating the point spread function (PSF), exhibit limited performance.
We propose a Jitter-Aware Restoration Network (JARNet) to remove the distortion and blur in two stages.
arXiv Detail & Related papers (2024-01-16T07:26:26Z)
- Pixel-Inconsistency Modeling for Image Manipulation Localization [63.54342601757723]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z)
- Passive superresolution imaging of incoherent objects [63.942632088208505]
The method consists of measuring the field's spatial mode components in the image plane in an overcomplete basis of Hermite-Gaussian modes and their superpositions.
A deep neural network is then used to reconstruct the object from these measurements.
arXiv Detail & Related papers (2023-04-19T15:53:09Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
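The maximum-likelihood training that the cINN entry above relies on follows from the change-of-variables formula: for an invertible map z = f(x), log p(x) = log p(z) + log|det J_f|. A minimal 1-D illustration with a hypothetical affine flow (not the paper's conditional architecture):

```python
import numpy as np

def forward(x, s, t):
    # Invertible affine transform z = s*x + t (s != 0); log|det J| = log|s|.
    return s * x + t, np.log(np.abs(s))

def nll(x, s, t):
    # Negative log-likelihood under a standard-normal latent prior:
    # -log p(x) = -log N(z; 0, 1) - log|det J|.
    z, logdet = forward(x, s, t)
    log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi))
    return -(log_pz + logdet).mean()

x = np.random.randn(1000) * 2.0 + 1.0   # synthetic data, mean 1, std 2
# The NLL is minimized near the whitening transform s=0.5, t=-0.5:
print(nll(x, 0.5, -0.5) < nll(x, 1.0, 0.0))  # True: whitening fits better
```

Because the objective is an exact likelihood rather than an adversarial game, training is stable and, as the entry notes, cycle consistency comes for free from invertibility.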
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- A Model-data-driven Network Embedding Multidimensional Features for Tomographic SAR Imaging [5.489791364472879]
We propose a new model-data-driven network to achieve tomoSAR imaging based on multi-dimensional features.
We add two 2D processing modules, both convolutional encoder-decoder structures, to enhance multi-dimensional features of the imaging scene effectively.
Compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our proposed method achieves better completeness while maintaining decent imaging accuracy.
arXiv Detail & Related papers (2022-11-28T02:01:43Z)
- DELAD: Deep Landweber-guided deconvolution with Hessian and sparse prior [0.22940141855172028]
We present a model for non-blind image deconvolution that incorporates the classic iterative method into a deep learning application.
We build our network based on the iterative Landweber deconvolution algorithm, which is integrated with trainable convolutional layers to enhance the recovered image structures and details.
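The classic Landweber iteration that DELAD unrolls is the fixed-point scheme x_{k+1} = x_k + w · Aᵀ(b − A x_k), i.e. gradient descent on ||Ax − b||². A toy dense example of the plain iteration follows; DELAD's actual operators are convolutional blur kernels interleaved with trainable layers, which this sketch does not model.

```python
import numpy as np

def landweber(A, b, steps=200, w=None):
    """Plain Landweber iteration for the linear system A x = b."""
    if w is None:
        # Convergence requires 0 < w < 2 / sigma_max(A)^2.
        w = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + w * A.T @ (b - A @ x)   # gradient step on ||Ax - b||^2
    return x

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([4.0, 3.0])
x = landweber(A, b)
print(np.round(x, 3))  # [2. 3.], the exact solution of A x = b
```

Unrolling replaces the fixed step count and hand-tuned w with learned, layer-specific operators, which is what lets the network recover structure and detail beyond the plain iteration.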
arXiv Detail & Related papers (2022-09-30T11:15:03Z)
- LWGNet: Learned Wirtinger Gradients for Fourier Ptychographic Phase Retrieval [14.588976801396576]
We propose a hybrid model-driven residual network that combines the knowledge of the forward imaging system with a deep data-driven network.
Unlike other conventional unrolling techniques, LWGNet uses fewer stages while performing at par or even better than existing traditional and deep learning techniques.
This improvement in performance for low-bit depth and low-cost sensors has the potential to bring down the cost of FPM imaging setup significantly.
arXiv Detail & Related papers (2022-08-08T17:22:54Z)
- VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction [71.83308989022635]
In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results.
Our network operates in two steps: 1) local computation of depth maps with a deep MVS technique, and 2) fusion of the depth maps and image features to build a single TSDF volume.
In order to improve the matching performance between images acquired from very different viewpoints, we introduce a rotation-invariant 3D convolution kernel called PosedConv.
arXiv Detail & Related papers (2021-08-19T11:33:58Z)
- Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in backlit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
arXiv Detail & Related papers (2020-07-04T08:23:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.