Learning to Reconstruct Confocal Microscopy Stacks from Single Light
Field Images
- URL: http://arxiv.org/abs/2003.11004v1
- Date: Tue, 24 Mar 2020 17:46:03 GMT
- Title: Learning to Reconstruct Confocal Microscopy Stacks from Single Light
Field Images
- Authors: Josue Page, Federico Saltarin, Yury Belyaev, Ruth Lyck, Paolo Favaro
- Abstract summary: We introduce the LFMNet, a novel neural network architecture inspired by the U-Net design.
It reconstructs with high accuracy a 112x112x57.6$\mu m^3$ volume in 50 ms from a single 1287x1287-pixel light field image.
Because of the drastic reduction in scan time and storage space, our setup and method are directly applicable to real-time in vivo 3D microscopy.
- Score: 19.24428734909019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel deep learning approach to reconstruct confocal microscopy
stacks from single light field images. To perform the reconstruction, we
introduce the LFMNet, a novel neural network architecture inspired by the U-Net
design. It reconstructs with high accuracy a 112x112x57.6$\mu m^3$ volume
(1287x1287x64 voxels) in 50 ms given a single light field image of 1287x1287
pixels, reducing the time for confocal scanning of assays at the same
volumetric resolution 720-fold and the required storage 64-fold. To prove its
applicability in life sciences, our approach is evaluated
both quantitatively and qualitatively on mouse brain slices with fluorescently
labelled blood vessels. Because of the drastic reduction in scan time and
storage space, our setup and method are directly applicable to real-time in
vivo 3D microscopy. We provide an analysis of the optical design, the network
architecture, and our training procedure for optimally reconstructing volumes
over a given target depth range. To train our network, we built a data set of 362
light field images of mouse brain blood vessels and the corresponding aligned
set of 3D confocal scans, which we use as ground truth. The data set will be
made available for research purposes.
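The abstract describes the network only at this level, so the following is a
minimal sketch of the stated input/output relationship, not the authors'
LFMNet: a small U-Net-style encoder-decoder whose 64 output channels are read
as the 64 depth planes of the volume. All layer sizes and the class name
LFToVolumeNet are assumptions; only the shapes follow the abstract. The
64-fold storage figure is direct arithmetic: one 1287x1287 light field image
replaces a 1287x1287x64 voxel stack.
```python
# Hypothetical sketch of a U-Net-style light-field-to-volume network.
# Shapes follow the abstract: a 2D light field image in, a 64-plane
# volume out (depth planes as output channels). This is NOT the
# authors' LFMNet; layer sizes and names are illustrative only.
import torch
import torch.nn as nn

class LFToVolumeNet(nn.Module):
    def __init__(self, depth_planes=64, base=32):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1 = block(1, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec1 = block(base * 2 + base, base)
        # Each output channel is read as one axial plane of the volume.
        self.head = nn.Conv2d(base, depth_planes, 1)

    def forward(self, lf):                  # lf: (B, 1, H, W)
        e1 = self.enc1(lf)                  # (B, base, H, W)
        e2 = self.enc2(self.pool(e1))       # (B, 2*base, H/2, W/2)
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                # (B, 64, H, W): the volume

vol = LFToVolumeNet()(torch.zeros(1, 1, 128, 128))  # e.g. a crop of the LF
print(vol.shape)                                    # torch.Size([1, 64, 128, 128])
```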
Related papers
- Computational 3D topographic microscopy from terabytes of data per
sample [2.4657541547959387]
We present a large-scale computational 3D topographic microscope that enables 6-gigapixel profilometric 3D imaging at micron-scale resolution.
We developed a self-supervised neural network-based algorithm for 3D reconstruction and stitching that jointly estimates an all-in-focus photometric composite and 3D height map.
To demonstrate the broad utility of our new computational microscope, we applied STARCAM to a variety of decimeter-scale objects.
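The summary above names the two jointly estimated outputs. Purely to make them
concrete, here is a classical focus-metric baseline in numpy, a deliberately
non-learned stand-in for the paper's self-supervised network; focus_stack and
its window size are hypothetical:
```python
# Classical focus-stacking baseline illustrating the two quantities the
# paper's network estimates jointly: an all-in-focus composite and a
# height map. (The paper uses a learned, self-supervised method; this
# numpy sketch only makes the outputs concrete.)
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(stack, zs, win=7):
    """stack: (Z, H, W) focal stack; zs: (Z,) focus heights."""
    # Local sharpness: smoothed squared Laplacian per slice.
    sharp = np.stack([uniform_filter(laplace(s) ** 2, win) for s in stack])
    idx = sharp.argmax(axis=0)                 # (H, W) best-focus slice index
    height = zs[idx]                           # height map, in units of zs
    allfocus = np.take_along_axis(stack, idx[None], axis=0)[0]
    return allfocus, height

stack = np.random.rand(5, 64, 64)              # toy 5-plane focal stack
allfocus, height = focus_stack(stack, np.linspace(0.0, 40.0, 5))
print(allfocus.shape, height.shape)            # (64, 64) (64, 64)
```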
arXiv Detail & Related papers (2023-06-05T07:09:21Z)
- An unsupervised deep learning algorithm for single-site reconstruction
in quantum gas microscopes [47.187609203210705]
In quantum gas microscopy experiments, reconstructing the site-resolved lattice occupation with high fidelity is essential for the accurate extraction of physical observables.
Here, we present a novel algorithm based on deep convolutional neural networks to reconstruct the site-resolved lattice occupation with high fidelity.
arXiv Detail & Related papers (2022-12-22T18:57:27Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
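A minimal sketch of that idea, assuming a two-plane (u, v, s, t) ray
parameterization and substituting a standard sinusoidal positional encoding
for the paper's learned ray-space embedding (the class and sizes below are
illustrative, not the authors' model):
```python
# Sketch of the core idea: a network maps a ray directly to its
# integrated radiance, with no per-sample volume rendering.
import torch
import torch.nn as nn

def posenc(x, n_freqs=6):
    # Sinusoidal encoding of each of the 4 ray coordinates.
    freqs = 2.0 ** torch.arange(n_freqs) * torch.pi
    y = x[..., None] * freqs                   # (N, 4, n_freqs)
    return torch.cat([torch.sin(y), torch.cos(y)], dim=-1).flatten(-2)

class NeuralLightField(nn.Module):
    def __init__(self, n_freqs=6, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())  # RGB in [0, 1]

    def forward(self, rays_uvst):              # rays_uvst: (N, 4)
        return self.mlp(posenc(rays_uvst))     # (N, 3): one query per ray

rgb = NeuralLightField()(torch.rand(1024, 4))
print(rgb.shape)                               # torch.Size([1024, 3])
```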
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- Light-field microscopy with correlated beams for extended volumetric
imaging at the diffraction limit [0.0]
We propose and experimentally demonstrate a light-field microscopy architecture based on light intensity correlation.
We demonstrate the effectiveness of our technique in refocusing three-dimensional test targets and biological samples out of the focused plane.
arXiv Detail & Related papers (2021-10-02T13:54:11Z)
- A parameter refinement method for Ptychography based on Deep Learning
concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to correct autonomously the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Programmable 3D snapshot microscopy with Fourier convolutional networks [3.2156268397508314]
3D snapshot microscopy enables volumetric imaging as fast as a camera allows by capturing a 3D volume in a single 2D camera image.
We introduce a class of global kernel Fourier convolutional neural networks which can efficiently integrate the globally mixed information encoded in a 3D snapshot image.
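A global-kernel Fourier convolution can be sketched in a few lines: by the
convolution theorem, a pointwise multiplication in the Fourier domain is a
convolution with a kernel as large as the image, so a single layer mixes
information globally. The FNO-style layer below is an illustration under
those assumptions, not the paper's architecture:
```python
# Global-kernel Fourier convolution: one learned complex weight per
# (channel, frequency) pair acts as an image-sized convolution kernel.
import torch
import torch.nn as nn

class FourierConv2d(nn.Module):
    def __init__(self, channels, height, width):
        super().__init__()
        # Complex weights over the real-FFT spectrum of an HxW image.
        self.weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1,
                        dtype=torch.cfloat) * 0.02)

    def forward(self, x):                    # x: (B, C, H, W)
        X = torch.fft.rfft2(x)               # to the frequency domain
        return torch.fft.irfft2(X * self.weight, s=x.shape[-2:])

y = FourierConv2d(8, 64, 64)(torch.randn(2, 8, 64, 64))
print(y.shape)                               # torch.Size([2, 8, 64, 64])
```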
arXiv Detail & Related papers (2021-04-21T16:09:56Z)
- Model-inspired Deep Learning for Light-Field Microscopy with Application
to Neuron Localization [27.247818386065894]
We propose a model-inspired deep learning approach to perform fast and robust 3D localization of sources using light-field microscopy images.
This is achieved by developing a deep network that efficiently solves a convolutional sparse coding problem.
Experiments on localization of mammalian neurons from light-fields show that the proposed approach simultaneously provides enhanced performance, interpretability and efficiency.
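One standard way to build such a network is to unroll ISTA iterations for the
sparse coding problem into layers with a learned convolutional dictionary; the
sketch below illustrates that construction (sizes, step count, and names are
assumptions, not the paper's model):
```python
# Unrolled convolutional ISTA: x <- soft(x + A^T(y - A x), t), with the
# dictionary A and thresholds t learned end to end.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledConvISTA(nn.Module):
    def __init__(self, n_atoms=16, k=9, n_steps=5):
        super().__init__()
        self.A = nn.Conv2d(n_atoms, 1, k, padding=k // 2, bias=False)
        self.At = nn.Conv2d(1, n_atoms, k, padding=k // 2, bias=False)
        self.thresh = nn.Parameter(torch.full((n_steps,), 0.01))
        self.n_steps = n_steps

    def forward(self, y):                         # y: (B, 1, H, W)
        x = torch.zeros(y.shape[0], self.At.out_channels, *y.shape[-2:],
                        device=y.device)
        for t in range(self.n_steps):
            r = y - self.A(x)                     # residual in image space
            x = x + self.At(r)                    # gradient step
            x = torch.sign(x) * F.relu(x.abs() - self.thresh[t])  # soft-threshold
        return x                                  # sparse codes (source map)

codes = UnrolledConvISTA()(torch.randn(1, 1, 32, 32))
print(codes.shape)                                # torch.Size([1, 16, 32, 32])
```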
arXiv Detail & Related papers (2021-03-10T16:24:47Z)
- Recurrent neural network-based volumetric fluorescence microscopy [0.30586855806896046]
We report a deep learning-based image inference framework that uses 2D images that are sparsely captured by a standard wide-field fluorescence microscope.
Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume.
Recurrent-MZ is demonstrated to increase the depth-of-field of a 63x/1.4NA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume.
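A loose sketch of the recurrent aggregation described here: a shared CNN
encodes each sparsely sampled 2D plane, a gated recurrent update fuses them in
axial order, and a head decodes the fused state into output planes (an
illustration only, not Recurrent-MZ; all names and sizes are assumed):
```python
# Gated recurrent fusion of a few 2D planes into a dense output volume.
import torch
import torch.nn as nn

class RecurrentVolumeNet(nn.Module):
    def __init__(self, feat=32, out_planes=16):
        super().__init__()
        self.feat = feat
        self.enc = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
        self.gate = nn.Conv2d(2 * feat, feat, 3, padding=1)  # GRU-like gate
        self.head = nn.Conv2d(feat, out_planes, 1)

    def forward(self, planes):                    # planes: (B, Z_in, H, W)
        b, _, h, w = planes.shape
        state = torch.zeros(b, self.feat, h, w, device=planes.device)
        for z in range(planes.shape[1]):          # fuse planes in axial order
            f = self.enc(planes[:, z:z + 1])      # encode one 2D plane
            g = torch.sigmoid(self.gate(torch.cat([state, f], dim=1)))
            state = g * f + (1 - g) * state       # gated recurrent update
        return self.head(state)                   # (B, Z_out, H, W) volume

vol = RecurrentVolumeNet()(torch.randn(1, 3, 64, 64))  # 3 input scans
print(vol.shape)                                        # torch.Size([1, 16, 64, 64])
```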
arXiv Detail & Related papers (2020-10-21T06:17:38Z)
- 4D Spatio-Temporal Convolutional Networks for Object Position Estimation
in OCT Volumes [69.62333053044712]
3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single OCT images.
We extend 3D CNNs to 4D-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking.
arXiv Detail & Related papers (2020-07-02T12:02:20Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset
for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.