3D endoscopic depth estimation using 3D surface-aware constraints
- URL: http://arxiv.org/abs/2203.02131v1
- Date: Fri, 4 Mar 2022 04:47:20 GMT
- Title: 3D endoscopic depth estimation using 3D surface-aware constraints
- Authors: Shang Zhao, Ce Wang, Qiyuan Wang, Yanzhe Liu, S Kevin Zhou
- Abstract summary: We show that depth estimation can be reformulated from a 3D surface perspective.
We propose a loss function for depth estimation that integrates surface-aware constraints.
Camera parameters are incorporated into the training pipeline to increase the control and transparency of the depth estimation.
- Score: 16.161276518580262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robotic-assisted surgery allows surgeons to conduct precise surgical
operations with stereo vision and flexible motor control. However, the lack of
3D spatial perception limits situational awareness during procedures and
hinders mastering surgical skills in the narrow abdominal space. Depth
estimation, as a representative perception task, is typically defined as an
image reconstruction problem. In this work, we show that depth estimation can
be reformulated from a 3D surface perspective. We propose a loss function for
depth estimation that integrates surface-aware constraints, leading to faster
and better convergence by exploiting valid spatial information. In
addition, camera parameters are incorporated into the training pipeline to
increase the control and transparency of the depth estimation. We also
integrate a specularity removal module to recover more buried image
information. Quantitative experimental results on endoscopic datasets and user
studies with medical professionals demonstrate the effectiveness of our method.
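The abstract does not spell out the loss formulation. As a minimal sketch of what a "surface-aware constraint" could look like, assuming it compares surface normals derived from depth gradients (the function names, weighting, and normal computation below are illustrative assumptions, not the authors' exact method):

```python
import numpy as np

def depth_to_normals(depth, fx=1.0, fy=1.0):
    """Approximate per-pixel surface normals from a depth map.

    Back-projects depth gradients into 3D; a hypothetical stand-in
    for whichever surface representation the paper actually uses.
    """
    dz_dx = np.gradient(depth, axis=1) * fx
    dz_dy = np.gradient(depth, axis=0) * fy
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)

def surface_aware_loss(pred_depth, gt_depth, w_surface=0.5):
    """Depth error plus a normal-consistency term (illustrative weighting)."""
    depth_term = np.mean(np.abs(pred_depth - gt_depth))
    n_pred = depth_to_normals(pred_depth)
    n_gt = depth_to_normals(gt_depth)
    # 1 - cosine similarity between unit normals, averaged over pixels
    cos = np.sum(n_pred * n_gt, axis=2)
    surface_term = np.mean(1.0 - cos)
    return depth_term + w_surface * surface_term
```

The idea is that the surface term penalizes predictions whose local geometry (orientation) disagrees with the target even where raw depth values are close, which is one plausible reading of "valid spatial information" aiding convergence.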
Related papers
- Neural 3D decoding for human vision diagnosis [76.41771117405973]
We show how AI can go beyond the current state of the art by advancing from 2D visuals to visually plausible and functionally more comprehensive 3D visuals decoded from brain signals.
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject who was presented with a 2D image, and yields as output the corresponding 3D object visuals.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- High-fidelity Endoscopic Image Synthesis by Utilizing Depth-guided Neural Surfaces [18.948630080040576]
We introduce a novel method for colon section reconstruction by leveraging NeuS applied to endoscopic images, supplemented by a single frame of depth map.
Our approach demonstrates exceptional accuracy in completely rendering colon sections, even capturing unseen portions of the surface.
This breakthrough opens avenues for achieving stable and consistently scaled reconstructions, promising enhanced quality in cancer screening procedures and treatment interventions.
arXiv Detail & Related papers (2024-04-20T18:06:26Z)
- Creating a Digital Twin of Spinal Surgery: A Proof of Concept [68.37190859183663]
Surgery digitalization is the process of creating a virtual replica of real-world surgery.
We present a proof of concept (PoC) for surgery digitalization that is applied to an ex-vivo spinal surgery.
We employ five RGB-D cameras for dynamic 3D reconstruction of the surgeon, a high-end camera for 3D reconstruction of the anatomy, an infrared stereo camera for surgical instrument tracking, and a laser scanner for 3D reconstruction of the operating room and data fusion.
arXiv Detail & Related papers (2024-03-25T13:09:40Z)
- Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting [12.333523732756163]
Dynamic scene reconstruction can significantly enhance downstream tasks and improve surgical outcomes.
NeRF-based methods have recently risen to prominence for their exceptional ability to reconstruct scenes.
We present Endo-4DGS, a real-time endoscopic dynamic reconstruction approach.
arXiv Detail & Related papers (2024-01-29T18:55:29Z)
- Dense 3D Reconstruction Through Lidar: A Comparative Study on Ex-vivo Porcine Tissue [16.786601606755013]
Researchers are actively investigating depth sensing and 3D reconstruction for vision-based surgical assistance.
It remains difficult to achieve real-time, accurate, and robust 3D representations of the abdominal cavity for minimally invasive surgery.
This work uses quantitative testing on fresh ex-vivo porcine tissue to thoroughly characterize the quality with which a 3D laser-based time-of-flight sensor can perform anatomical surface reconstruction.
arXiv Detail & Related papers (2024-01-19T14:14:26Z)
- Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments [66.74633676595889]
We present a multi-camera capture setup consisting of static and head-mounted cameras.
Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre.
Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments.
arXiv Detail & Related papers (2023-05-05T13:42:19Z)
- Stereo Dense Scene Reconstruction and Accurate Laparoscope Localization for Learning-Based Navigation in Robot-Assisted Surgery [37.14020061063255]
The computation of anatomical information and laparoscope position is a fundamental building block of robot-assisted surgical navigation in Minimally Invasive Surgery (MIS).
We propose a learning-driven framework in which image-guided laparoscopic localization with 3D reconstruction of complex anatomical structures is achieved.
arXiv Detail & Related papers (2021-10-08T06:12:18Z)
- Self-Supervised Depth Completion for Active Stereo [55.79929735390945]
Active stereo systems are widely used in the robotics industry due to their low cost and high-quality depth maps.
However, these depth sensors suffer from stereo artefacts and do not provide dense depth estimates.
We present the first self-supervised depth completion method for active stereo systems that predicts accurate dense depth maps.
arXiv Detail & Related papers (2021-10-07T07:33:52Z)
- Adversarial Domain Feature Adaptation for Bronchoscopic Depth Estimation [111.89519571205778]
In this work, we propose an alternative domain-adaptive approach to depth estimation.
Our novel two-step structure first trains a depth estimation network with labeled synthetic images in a supervised manner.
The results of our experiments show that the proposed method improves the network's performance on real images by a considerable margin.
arXiv Detail & Related papers (2021-09-24T08:11:34Z)
- 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether using increased spatial context by using MRI volumes combined with spatial erasing leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of dataset size on performance.
Our best performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
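The DICE scores quoted above use the standard overlap metric for segmentation. For reference, a minimal implementation for binary masks (this is the generic metric, not the paper's evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap, 0.0 no overlap; the 31.40% vs 25.76% figures above are such scores averaged over the test set.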
arXiv Detail & Related papers (2021-09-14T09:17:27Z)
- Self-Supervised Generative Adversarial Network for Depth Estimation in Laparoscopic Images [13.996932179049978]
We propose SADepth, a new self-supervised depth estimation method based on Generative Adversarial Networks.
It consists of an encoder-decoder generator and a discriminator to incorporate geometry constraints during training.
Experiments on two public datasets show that SADepth outperforms recent state-of-the-art unsupervised methods by a large margin.
arXiv Detail & Related papers (2021-07-09T19:40:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.