Programmable 3D snapshot microscopy with Fourier convolutional networks
- URL: http://arxiv.org/abs/2104.10611v1
- Date: Wed, 21 Apr 2021 16:09:56 GMT
- Title: Programmable 3D snapshot microscopy with Fourier convolutional networks
- Authors: Diptodip Deb, Zhenfei Jiao, Alex B. Chen, Misha B. Ahrens, Kaspar
Podgorski, Srinivas C. Turaga
- Abstract summary: 3D snapshot microscopy enables volumetric imaging as fast as a camera allows by capturing a 3D volume in a single 2D camera image.
We introduce a class of global kernel Fourier convolutional neural networks which can efficiently integrate the globally mixed information encoded in a 3D snapshot image.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D snapshot microscopy enables volumetric imaging as fast as a camera allows
by capturing a 3D volume in a single 2D camera image, and has found a variety
of biological applications such as whole brain imaging of fast neural activity
in larval zebrafish. The optimal microscope design for this optical 3D-to-2D
encoding, i.e. one that preserves as much 3D information as possible, is
generally unknown and sample-dependent. Highly programmable optical elements
create new possibilities for sample-specific computational optimization of
microscope parameters, e.g., tuning the collection of light for a given sample
structure, especially using deep learning. This involves a differentiable simulation of
light propagation through the programmable microscope and a neural network to
reconstruct volumes from the microscope image. We introduce a class of global
kernel Fourier convolutional neural networks which can efficiently integrate
the globally mixed information encoded in a 3D snapshot image. We show in
silico that our proposed global Fourier convolutional networks succeed in large
field-of-view volume reconstruction and microscope parameter optimization where
traditional networks fail.
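The core architectural idea, as described in the abstract, is a convolutional layer whose kernel spans the entire field of view, applied in the Fourier domain so the cost stays near O(N log N) rather than the O(N^2) of an equally large spatial convolution. The sketch below is an illustrative PyTorch implementation assumed from that description, not the authors' released code: the class name GlobalFourierConv2d, the spatial-domain kernel parameterization, and all sizes are hypothetical.

```python
# Minimal sketch (not the authors' implementation): a "global kernel"
# convolution layer that convolves each input channel with a learnable kernel
# as large as the image itself, via pointwise multiplication in Fourier space
# (i.e. a circular convolution). A single such layer can mix information
# across the whole field of view of a 3D snapshot image.
import torch
import torch.nn as nn


class GlobalFourierConv2d(nn.Module):
    """Circular convolution with a full-field-of-view kernel via the FFT.

    Shapes and the spatial-domain kernel parameterization are illustrative
    assumptions; the paper's architecture may differ in detail.
    """

    def __init__(self, in_channels: int, out_channels: int, height: int, width: int):
        super().__init__()
        # One global kernel per (out_channel, in_channel) pair, same size as the image.
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_channels, in_channels, height, width)
        )
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, height, width)
        h, w = x.shape[-2:]
        x_f = torch.fft.rfft2(x, s=(h, w))            # (B, Cin, H, W//2+1), complex
        w_f = torch.fft.rfft2(self.weight, s=(h, w))  # (Cout, Cin, H, W//2+1), complex
        # Multiply in Fourier space and sum over input channels:
        # this implements the channel-mixing global convolution.
        y_f = torch.einsum("bihw,oihw->bohw", x_f, w_f)
        y = torch.fft.irfft2(y_f, s=(h, w))           # back to the spatial domain
        return y + self.bias.view(1, -1, 1, 1)


if __name__ == "__main__":
    # Toy usage: decode a single-channel 2D snapshot into a stack of z-planes,
    # here 16 planes for a 128x128 field of view (sizes are arbitrary).
    layer = GlobalFourierConv2d(in_channels=1, out_channels=16, height=128, width=128)
    snapshot = torch.randn(2, 1, 128, 128)
    volume = layer(snapshot)
    print(volume.shape)  # torch.Size([2, 16, 128, 128])
```

The global support of the kernel is what matters here: in a snapshot image, light from every 3D location is spread widely across the 2D sensor, so a reconstruction network with only small local kernels cannot efficiently gather the globally mixed information back into each voxel.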
Related papers
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- MinD-3D: Reconstruct High-quality 3D objects in Human Brain [50.534007259536715]
Recon3DMind is an innovative task aimed at reconstructing 3D visuals from Functional Magnetic Resonance Imaging (fMRI) signals.
We present the fMRI-Shape dataset, which includes data from 14 participants and features 360-degree videos of 3D objects.
We propose MinD-3D, a novel and effective three-stage framework specifically designed to decode the brain's 3D visual information from fMRI signals.
arXiv Detail & Related papers (2023-12-12T18:21:36Z)
- Fast light-field 3D microscopy with out-of-distribution detection and adaptation through Conditional Normalizing Flows [16.928404625892625]
Real-time 3D fluorescence microscopy is crucial for the analysis of live organisms.
We propose a novel architecture to perform fast 3D reconstructions of live immobilized zebrafish neural activity.
arXiv Detail & Related papers (2023-06-10T10:42:49Z)
- Computational 3D topographic microscopy from terabytes of data per sample [2.4657541547959387]
We present a large-scale computational 3D topographic microscope that enables 6-gigapixel profilometric 3D imaging at micron-scale resolution.
We developed a self-supervised neural network-based algorithm for 3D reconstruction and stitching that jointly estimates an all-in-focus photometric composite and 3D height map.
To demonstrate the broad utility of our new computational microscope, we applied STARCAM to a variety of decimeter-scale objects.
arXiv Detail & Related papers (2023-06-05T07:09:21Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- 3D fluorescence microscopy data synthesis for segmentation and benchmarking [0.9922927990501083]
Conditional generative adversarial networks can be utilized to generate realistic image data for 3D fluorescence microscopy.
An additional positional conditioning of the cellular structures enables the reconstruction of position-dependent intensity characteristics.
A patch-wise working principle and a subsequent full-size reassembly strategy are used to generate image data of arbitrary size for different organisms.
arXiv Detail & Related papers (2021-07-21T16:08:56Z)
- Model-inspired Deep Learning for Light-Field Microscopy with Application to Neuron Localization [27.247818386065894]
We propose a model-inspired deep learning approach to perform fast and robust 3D localization of sources using light-field microscopy images.
This is achieved by developing a deep network that efficiently solves a convolutional sparse coding problem.
Experiments on localization of mammalian neurons from light-fields show that the proposed approach simultaneously provides enhanced performance, interpretability and efficiency.
arXiv Detail & Related papers (2021-03-10T16:24:47Z)
- Recurrent neural network-based volumetric fluorescence microscopy [0.30586855806896046]
We report a deep learning-based image inference framework that uses 2D images that are sparsely captured by a standard wide-field fluorescence microscope.
Through a recurrent convolutional neural network, which we term as Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume.
Recurrent-MZ is demonstrated to increase the depth-of-field of a 63xNA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume.
arXiv Detail & Related papers (2020-10-21T06:17:38Z)
- Global Voxel Transformer Networks for Augmented Microscopy [54.730707387866076]
We introduce global voxel transformer networks (GVTNets), an advanced deep learning tool for augmented microscopy.
GVTNets are built on global voxel transformer operators (GVTOs), which are able to aggregate global information.
We apply the proposed methods to existing datasets for three different augmented microscopy tasks under various settings.
arXiv Detail & Related papers (2020-08-05T20:11:15Z)