Real-time Dense Reconstruction of Tissue Surface from Stereo Optical
Video
- URL: http://arxiv.org/abs/2007.12623v1
- Date: Thu, 16 Jul 2020 19:14:05 GMT
- Title: Real-time Dense Reconstruction of Tissue Surface from Stereo Optical
Video
- Authors: Haoyin Zhou, Jagadeesan Jayender
- Abstract summary: We propose an approach to reconstruct a dense three-dimensional (3D) model of the tissue surface from stereo optical videos in real time.
The basic idea is to first extract 3D information from video frames by using stereo matching, and then to mosaic the reconstructed 3D models.
Experimental results on ex- and in vivo data showed that the reconstructed 3D models have high resolution texture with an accuracy error of less than 2 mm.
- Score: 10.181846237133167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an approach to reconstruct a dense three-dimensional
(3D) model of the tissue surface from stereo optical videos in real time.
The basic idea is to first extract 3D information from video frames by
using stereo matching, and then to mosaic the reconstructed 3D models. To
handle the common low-texture regions on tissue surfaces, we propose
effective post-processing steps for the local stereo matching method to
enlarge the radius of constraint, which include outlier removal, hole
filling, and smoothing. Since the tissue models obtained by stereo matching
are limited to the field of view of the imaging modality, we propose a
model mosaicking method that uses a novel feature-based simultaneous
localization and mapping (SLAM) method to align the models. Low-texture
regions and varying illumination conditions may lead to a large percentage
of feature-matching outliers. To solve this problem, we propose
several algorithms to improve the robustness of SLAM, which mainly include (1)
a histogram voting-based method to roughly select possible inliers from the
feature matching results, (2) a novel 1-point RANSAC-based P$n$P algorithm
called DynamicR1PP$n$P to track the camera motion, and (3) a GPU-based
iterative closest points (ICP) and bundle adjustment (BA) method to refine the
camera motion estimation results. Experimental results on ex- and in vivo data
showed that the reconstructed 3D models have high resolution texture with an
accuracy error of less than 2 mm. Most algorithms are highly parallelized for
GPU computation, and the average runtime for processing one key frame is 76.3
ms on stereo images with 960x540 resolution.
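The histogram voting step described in (1) can be illustrated with a short sketch: matched keypoint pairs vote with their 2D displacement vectors, and matches falling in the dominant displacement bin are kept as probable inliers. Note that the function name, the bin layout, and keeping only a single dominant bin are assumptions made for illustration; the paper's exact voting scheme may differ.

```python
from collections import Counter

def histogram_voting_inliers(pts_a, pts_b, bin_size=10.0):
    """Roughly pre-select likely inlier feature matches by voting on the
    2D displacement between matched keypoints.  Illustrative sketch only:
    the binning scheme and single-bin selection are assumptions, not the
    paper's exact method.

    pts_a, pts_b -- lists of (x, y) keypoint coordinates, matched by index.
    Returns a boolean mask marking matches in the dominant displacement bin.
    """
    # Quantize each match's displacement vector into a coarse 2D bin.
    bins = [(int((bx - ax) // bin_size), int((by - ay) // bin_size))
            for (ax, ay), (bx, by) in zip(pts_a, pts_b)]
    # The fullest bin corresponds to the dominant (likely inlier) motion.
    dominant, _ = Counter(bins).most_common(1)[0]
    return [b == dominant for b in bins]
```

With a consistent camera motion, most true matches share nearly the same displacement and concentrate in one bin, while mismatches caused by low texture or illumination change scatter across bins and are rejected before the more expensive RANSAC PnP stage.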
Related papers
- Towards Realistic Example-based Modeling via 3D Gaussian Stitching [31.710954782769377]
We present an example-based modeling method that combines multiple Gaussian fields in a point-based representation using sample-guided synthesis.
Specifically, as for composition, we create a GUI to segment and transform multiple fields in real time, easily obtaining a semantically meaningful composition of models.
For texture blending, due to the discrete and irregular nature of 3DGS, straightforwardly applying gradient propagation as in SeamlessNeRF is not supported.
arXiv Detail & Related papers (2024-08-28T11:13:27Z) - SD-MVS: Segmentation-Driven Deformation Multi-View Stereo with Spherical
Refinement and EM optimization [6.886220026399106]
We introduce Segmentation-Driven Deformation Multi-View Stereo (SD-MVS) to tackle challenges in the 3D reconstruction of textureless areas.
We are the first to adopt the Segment Anything Model (SAM) to distinguish semantic instances in scenes.
We propose a unique refinement strategy that combines spherical coordinates and gradient descent on normals and pixelwise search interval on depths.
arXiv Detail & Related papers (2024-01-12T05:25:57Z) - Wonder3D: Single Image to 3D using Cross-Domain Diffusion [105.16622018766236]
Wonder3D is a novel method for efficiently generating high-fidelity textured meshes from single-view images.
To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model.
arXiv Detail & Related papers (2023-10-23T15:02:23Z) - Generalization of pixel-wise phase estimation by CNN and improvement of
phase-unwrapping by MRF optimization for one-shot 3D scan [0.621405559652172]
Active stereo techniques using single-pattern projection, a.k.a. one-shot 3D scan, have drawn wide attention from industry, medicine, etc.
One severe drawback of one-shot 3D scan is sparse reconstruction.
We propose a pixel-wise technique for one-shot scan, which is applicable to any type of static pattern as long as the pattern is regular and periodic.
arXiv Detail & Related papers (2023-09-26T10:45:04Z) - Towards Scalable Multi-View Reconstruction of Geometry and Materials [27.660389147094715]
We propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes.
The inputs are high-resolution RGBD images captured by a mobile, hand-held capture system with point lights for active illumination.
arXiv Detail & Related papers (2023-06-06T15:07:39Z) - $PC^2$: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D
Reconstruction [97.06927852165464]
Reconstructing the 3D shape of an object from a single RGB image is a long-standing and highly challenging problem in computer vision.
We propose a novel method for single-image 3D reconstruction which generates a sparse point cloud via a conditional denoising diffusion process.
arXiv Detail & Related papers (2023-02-21T13:37:07Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Learning Stereopsis from Geometric Synthesis for 6D Object Pose
Estimation [11.999630902627864]
Current monocular-based 6D object pose estimation methods generally achieve less competitive results than RGBD-based methods.
This paper proposes a 3D geometric volume based pose estimation method with a short baseline two-view setting.
Experiments show that our method outperforms state-of-the-art monocular-based methods and is robust across different objects and scenes.
arXiv Detail & Related papers (2021-09-25T02:55:05Z) - Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z) - Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled
Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.