Computational 3D topographic microscopy from terabytes of data per
sample
- URL: http://arxiv.org/abs/2306.02634v1
- Date: Mon, 5 Jun 2023 07:09:21 GMT
- Title: Computational 3D topographic microscopy from terabytes of data per
sample
- Authors: Kevin C. Zhou, Mark Harfouche, Maxwell Zheng, Joakim Jönsson, Kyung
Chul Lee, Ron Appel, Paul Reamey, Thomas Doman, Veton Saliu, Gregor
Horstmeyer, and Roarke Horstmeyer
- Abstract summary: We present a large-scale computational 3D topographic microscope that enables 6-gigapixel profilometric 3D imaging at micron-scale resolution.
We developed a self-supervised neural network-based algorithm for 3D reconstruction and stitching that jointly estimates an all-in-focus photometric composite and 3D height map.
To demonstrate the broad utility of our new computational microscope, we applied STARCAM to a variety of decimeter-scale objects.
- Score: 2.4657541547959387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a large-scale computational 3D topographic microscope that enables
6-gigapixel profilometric 3D imaging at micron-scale resolution across >110
cm² areas over multi-millimeter axial ranges. Our computational microscope,
termed STARCAM (Scanning Topographic All-in-focus Reconstruction with a
Computational Array Microscope), features a parallelized, 54-camera
architecture with 3-axis translation to capture, for each sample of interest, a
multi-dimensional, 2.1-terabyte (TB) dataset, consisting of a total of 224,640
9.4-megapixel images. We developed a self-supervised neural network-based
algorithm for 3D reconstruction and stitching that jointly estimates an
all-in-focus photometric composite and 3D height map across the entire field of
view, using multi-view stereo information and image sharpness as a focal
metric. The memory-efficient, compressed differentiable representation offered
by the neural network effectively enables joint participation of the entire
multi-TB dataset during the reconstruction process. To demonstrate the broad
utility of our new computational microscope, we applied STARCAM to a variety of
decimeter-scale objects, with applications ranging from cultural heritage to
industrial inspection.
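The abstract describes using image sharpness as a focal metric to build an all-in-focus composite and height map. The sketch below illustrates the general idea with a standard variance-of-Laplacian focus measure over a focal stack; it is a minimal illustration, not the authors' STARCAM pipeline (which is a self-supervised neural network also using multi-view stereo), and the stack shapes are assumptions.

```python
import numpy as np

def laplacian_sharpness(img):
    """Per-pixel sharpness via a discrete 4-neighbor Laplacian (wrap-around
    boundaries); larger values indicate sharper, in-focus content."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def all_in_focus(stack):
    """Classical focal stacking: for each pixel, pick the z-slice where the
    sharpness metric peaks. stack has shape (Z, H, W); returns the
    all-in-focus composite and the per-pixel focal-index (height proxy) map."""
    sharp = np.stack([laplacian_sharpness(s) for s in stack])  # (Z, H, W)
    z_best = np.argmax(sharp, axis=0)                          # (H, W) indices
    composite = np.take_along_axis(stack, z_best[None], axis=0)[0]
    return composite, z_best

# Demo on a synthetic 3-slice stack: only slice 1 carries high-frequency
# texture (a checkerboard), so it should win everywhere.
stack = np.zeros((3, 8, 8))
stack[1] = np.indices((8, 8)).sum(0) % 2
composite, height_idx = all_in_focus(stack)
```

In a real system the focal index would be converted to physical height via the known z-step of the translation stage; the paper instead folds this selection into a differentiable neural reconstruction so the whole multi-TB dataset can contribute jointly.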
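The "memory-efficient, compressed differentiable representation" matters because the raw data (224,640 images × 9.4 megapixels ≈ 2.1 × 10¹² pixels) is about 2.1 TB at one byte per pixel, far beyond GPU memory. One common way to compress a continuous height/color field into a few weights is a coordinate network. The forward pass below is a generic NumPy sketch under assumed layer widths; the paper's actual architecture, losses, and training loop are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny coordinate MLP: (x, y) -> (height, r, g, b). Widths are illustrative;
# a real model would be trained with gradient descent against multi-view
# stereo and sharpness-based losses.
W1 = rng.normal(0.0, 1.0, (2, 64))
b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.3, (64, 4))
b2 = np.zeros(4)

def query(xy):
    """Evaluate the network at normalized coordinates xy of shape (N, 2).
    Returns (N,) height samples and (N, 3) color samples."""
    h = np.tanh(xy @ W1 + b1)      # hidden features
    out = h @ W2 + b2              # (N, 4): height + RGB
    return out[:, 0], out[:, 1:]

# The field is continuous: query any grid of the field of view on demand,
# so the full multi-terapixel map never has to be materialized at once.
g = np.linspace(0.0, 1.0, 4)
xy = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
height, color = query(xy)
```

Here the entire (hypothetical) representation is 452 parameters, versus one value per pixel for an explicit map, which is the sense in which such a representation lets the whole dataset "participate jointly" during reconstruction.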
Related papers
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- GPU-Accelerated RSF Level Set Evolution for Large-Scale Microvascular Segmentation [2.5003043942194236]
We propose a reformulation and implementation of the region-scalable fitting (RSF) level set model.
This makes it amenable to three-dimensional evaluation using both single-instruction multiple data (SIMD) and single-program multiple-data (SPMD) parallel processing.
We tested this 3D parallel RSF approach on multiple microvascular data sets acquired using state-of-the-art imaging techniques.
arXiv Detail & Related papers (2024-04-03T15:37:02Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- Towards Model Generalization for Monocular 3D Object Detection [57.25828870799331]
We present an effective unified camera-generalized paradigm (CGP) for Mono3D object detection.
We also propose the 2D-3D geometry-consistent object scaling strategy (GCOS) to bridge the gap via instance-level augmentation.
Our method called DGMono3D achieves remarkable performance on all evaluated datasets and surpasses the SoTA unsupervised domain adaptation scheme.
arXiv Detail & Related papers (2022-05-23T23:05:07Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- RCNN-SliceNet: A Slice and Cluster Approach for Nuclei Centroid Detection in Three-Dimensional Fluorescence Microscopy Images [16.377426160171982]
We present a scalable approach for nuclei centroid detection of 3D microscopy volumes.
We describe the RCNN-SliceNet to detect 2D nuclei centroids for each slice of the volume from different directions.
Our proposed method can accurately count and detect the nuclei centroids in a 3D microscopy volume.
arXiv Detail & Related papers (2021-06-29T23:38:29Z)
- Programmable 3D snapshot microscopy with Fourier convolutional networks [3.2156268397508314]
3D snapshot microscopy enables volumetric imaging as fast as a camera allows by capturing a 3D volume in a single 2D camera image.
We introduce a class of global kernel Fourier convolutional neural networks which can efficiently integrate the globally mixed information encoded in a 3D snapshot image.
arXiv Detail & Related papers (2021-04-21T16:09:56Z)
- Recurrent neural network-based volumetric fluorescence microscopy [0.30586855806896046]
We report a deep learning-based image inference framework that uses 2D images that are sparsely captured by a standard wide-field fluorescence microscope.
Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume.
Recurrent-MZ is demonstrated to increase the depth-of-field of a 63xNA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume.
arXiv Detail & Related papers (2020-10-21T06:17:38Z)
- Learning to Reconstruct Confocal Microscopy Stacks from Single Light Field Images [19.24428734909019]
We introduce the LFMNet, a novel neural network architecture inspired by the U-Net design.
It is able to reconstruct with high accuracy a 112x112x57.6 μm³ volume in 50 ms given a single light field image of 1287x1287 pixels.
Because of the drastic reduction in scan time and storage space, our setup and method are directly applicable to real-time in vivo 3D microscopy.
arXiv Detail & Related papers (2020-03-24T17:46:03Z) - Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset
for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.