Deep DIH : Statistically Inferred Reconstruction of Digital In-Line
Holography by Deep Learning
- URL: http://arxiv.org/abs/2004.12231v2
- Date: Wed, 24 Jun 2020 22:08:02 GMT
- Title: Deep DIH : Statistically Inferred Reconstruction of Digital In-Line
Holography by Deep Learning
- Authors: Huayu Li, Xiwen Chen, Haiyu Wu, Zaoyi Chi, Christopher Mann, and
Abolfazl Razi
- Abstract summary: Digital in-line holography is commonly used to reconstruct 3D images from 2D holograms for microscopic objects.
In this paper, we propose a novel autoencoder-based deep learning architecture for single-shot hologram reconstruction.
- Score: 1.4619386068190985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digital in-line holography is commonly used to reconstruct 3D images from 2D
holograms for microscopic objects. One of the technical challenges that arise
in the signal processing stage is removing the twin image that is caused by the
phase-conjugate wavefront from the recorded holograms. Twin image removal is
typically formulated as a non-linear inverse problem due to the irreversible
scattering process when generating the hologram. Recently, end-to-end deep
learning-based methods have been utilized to reconstruct the object wavefront
(as a surrogate for the 3D structure of the object) directly from a single-shot
in-line digital hologram. However, massive data pairs are required to train
deep learning models for acceptable reconstruction precision. In contrast to
typical image processing problems, well-curated datasets for in-line digital
holography do not exist. Moreover, the trained model is highly influenced by the
morphological properties of the object and can therefore vary across
applications. Data collection is thus prohibitively cumbersome in practice and a
major hindrance to applying deep learning to digital holography. In this paper,
we propose a novel autoencoder-based deep learning architecture for single-shot
hologram reconstruction, trained solely on the current sample without the need
for massive datasets. Simulation results demonstrate the superior performance of
the proposed method compared to the state-of-the-art single-shot compressive
digital in-line hologram reconstruction method.
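Below is a minimal PyTorch sketch of the general idea behind such single-shot, dataset-free reconstruction: an untrained convolutional network is fitted to the one recorded hologram, with a differentiable angular-spectrum propagation step standing in for the hologram formation model. The architecture, the simplified intensity forward model, and the optical parameters (wavelength, pixel_size, z) are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: fit an untrained network to a single hologram through a differentiable
# angular-spectrum forward model (illustrative simplification, not the paper's code).
import math
import torch
import torch.nn as nn

def angular_spectrum_propagate(field, wavelength, pixel_size, z):
    """Propagate a complex field over distance z with the angular spectrum method."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=pixel_size)
    fy_grid, fx_grid = torch.meshgrid(fx, fx, indexing="ij")
    k = 2 * math.pi / wavelength
    kz_sq = k ** 2 - (2 * math.pi * fx_grid) ** 2 - (2 * math.pi * fy_grid) ** 2
    kz = torch.sqrt(torch.clamp(kz_sq, min=0.0))           # drop evanescent components
    transfer = torch.exp(1j * kz * z)                       # free-space transfer function
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)

class HologramAE(nn.Module):
    """Tiny convolutional network predicting object-plane amplitude and phase."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),                 # two channels: amplitude, phase
        )

    def forward(self, hologram):
        out = self.net(hologram)
        amplitude = torch.sigmoid(out[:, 0:1])
        phase = math.pi * torch.tanh(out[:, 1:2])
        return amplitude, phase

# Hypothetical optical parameters and a stand-in hologram (assumptions, not the paper's setup).
wavelength, pixel_size, z = 532e-9, 3.45e-6, 5e-3
hologram = torch.rand(1, 1, 256, 256)

model = HologramAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    amplitude, phase = model(hologram)
    object_field = amplitude * torch.exp(1j * phase)        # complex object wavefront
    sensor_field = angular_spectrum_propagate(object_field, wavelength, pixel_size, z)
    predicted_hologram = sensor_field.abs() ** 2            # simplified sensor intensity model
    loss = torch.mean((predicted_hologram - hologram) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the only supervision signal is the single recorded hologram, no external training set is needed; at convergence, the predicted amplitude and phase serve as the reconstructed object wavefront.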
Related papers
- OAH-Net: A Deep Neural Network for Hologram Reconstruction of Off-axis Digital Holographic Microscope [5.835347176172883]
We propose a novel reconstruction approach that integrates deep learning with the physical principles of off-axis holography.
Our off-axis hologram network (OAH-Net) retrieves phase and amplitude images with errors that fall within the measurement error range attributable to hardware.
This capability further expands off-axis holography's applications in both biological and medical studies.
arXiv Detail & Related papers (2024-10-17T14:25:18Z)
- Single-shot reconstruction of three-dimensional morphology of biological cells in digital holographic microscopy using a physics-driven neural network [6.49455647840014]
We propose a novel deep learning model, named MorpHoloNet, for single-shot reconstruction of 3D morphology.
MorpHoloNet is optimized by minimizing the loss between the simulated and input holograms on the sensor plane.
It enables direct reconstruction of the 3D complex light field and 3D morphology of a test sample from its single-shot hologram.
arXiv Detail & Related papers (2024-09-30T07:15:36Z)
- Realistic Extreme Image Rescaling via Generative Latent Space Learning [51.85790402171696]
We propose a novel framework called Latent Space Based Image Rescaling (LSBIR) for extreme image rescaling tasks.
LSBIR effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model to generate realistic HR images.
In the first stage, a pseudo-invertible encoder-decoder models the bidirectional mapping between the latent features of the HR image and the target-sized LR image.
In the second stage, the reconstructed features from the first stage are refined by a pre-trained diffusion model to generate more faithful and visually pleasing details.
arXiv Detail & Related papers (2024-08-17T09:51:42Z)
- A Deep Learning Method for Simultaneous Denoising and Missing Wedge Reconstruction in Cryogenic Electron Tomography [23.75819355889607]
We propose a deep-learning approach for simultaneous denoising and missing wedge reconstruction called DeepDeWedge.
The algorithm requires no ground truth data and is based on fitting a neural network to the 2D projections using a self-supervised loss.
arXiv Detail & Related papers (2023-11-09T17:34:57Z)
- DH-GAN: A Physics-driven Untrained Generative Adversarial Network for 3D Microscopic Imaging using Digital Holography [3.4635026053111484]
Digital holography is a 3D imaging technique that emits a laser beam with a plane wavefront toward an object and measures the intensity of the diffracted waveform, called a hologram.
Recently, deep learning (DL) methods have been used for more accurate holographic processing.
We propose a new DL architecture based on generative adversarial networks that uses a discriminative network to provide a semantic measure of reconstruction quality.
arXiv Detail & Related papers (2022-05-25T17:13:45Z)
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects as a 2D self-occlusion map.
We show that our representation map allows us not only to enhance image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
- Advantage of Machine Learning over Maximum Likelihood in Limited-Angle Low-Photon X-Ray Tomography [0.0]
We introduce deep neural networks to determine and apply a prior distribution in the reconstruction process.
Our neural networks learn the prior directly from synthetic training samples.
We demonstrate that, when the projection angles and photon budgets are limited, the priors from our deep generative models can dramatically improve the IC reconstruction quality.
arXiv Detail & Related papers (2021-11-15T16:24:12Z)
- Learning to Recover 3D Scene Shape from a Single Image [98.20106822614392]
We propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image.
We then use 3D point cloud encoders to predict the missing depth shift and focal length that allow us to recover a realistic 3D scene shape.
arXiv Detail & Related papers (2020-12-17T02:35:13Z)
- SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses [82.56853587380168]
We propose a novel self-supervised image rectification (SIR) method based on an important insight that the rectified results of distorted images of the same scene from different lenses should be the same.
We leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters.
Our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods.
arXiv Detail & Related papers (2020-11-30T08:23:25Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) a non-linear mapping from a camera response function, and (3) quantization; a toy sketch of these three steps appears after this list.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
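For the last related paper, here is a toy NumPy sketch of the three-step HDR-to-LDR formation model its summary describes (dynamic range clipping, a camera response function, and quantization). The gamma-curve response and the parameter values are illustrative assumptions, not the paper's calibrated pipeline.

```python
# Toy HDR-to-LDR formation model: clipping, camera response function, quantization.
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=1.0 / 2.2, bits=8):
    clipped = np.clip(hdr * exposure, 0.0, 1.0)        # (1) dynamic range clipping
    mapped = clipped ** gamma                           # (2) gamma curve as a stand-in CRF
    levels = 2 ** bits - 1
    return np.round(mapped * levels) / levels           # (3) quantization to discrete levels

hdr = np.random.rand(64, 64, 3) * 4.0                   # synthetic linear HDR radiance
ldr = hdr_to_ldr(hdr)
```

Learning to reverse this pipeline means undoing each of the three steps in turn, which is why the summary frames HDR reconstruction as inverting a known camera formation model.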