Efficient Deformable Tissue Reconstruction via Orthogonal Neural Plane
- URL: http://arxiv.org/abs/2312.15253v1
- Date: Sat, 23 Dec 2023 13:27:50 GMT
- Title: Efficient Deformable Tissue Reconstruction via Orthogonal Neural Plane
- Authors: Chen Yang, Kailing Wang, Yuehao Wang, Qi Dou, Xiaokang Yang, Wei Shen
- Abstract summary: We introduce Fast Orthogonal Plane (Forplane) for the reconstruction of deformable tissues.
We conceptualize surgical procedures as 4D volumes and break them down into static and dynamic fields comprised of orthogonal neural planes.
This factorization discretizes four-dimensional space, leading to decreased memory usage and faster optimization.
- Score: 58.871015937204255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intraoperative imaging techniques for reconstructing deformable tissues in
vivo are pivotal for advanced surgical systems. Existing methods either
compromise on rendering quality or are excessively computationally intensive,
often demanding dozens of hours to perform, which significantly hinders their
practical application. In this paper, we introduce Fast Orthogonal Plane
(Forplane), a novel, efficient framework based on neural radiance fields (NeRF)
for the reconstruction of deformable tissues. We conceptualize surgical
procedures as 4D volumes, and break them down into static and dynamic fields
comprised of orthogonal neural planes. This factorization discretizes the
four-dimensional space, leading to decreased memory usage and faster
optimization. A spatiotemporal importance sampling scheme is introduced to
improve performance in regions with tool occlusion as well as large motions and
accelerate training. An efficient ray marching method is applied to skip
sampling among empty regions, significantly improving inference speed. Forplane
accommodates both binocular and monocular endoscopy videos, demonstrating its
extensive applicability and flexibility. Our experiments, carried out on two in
vivo datasets, the EndoNeRF and Hamlyn datasets, demonstrate the effectiveness
of our framework. In all cases, Forplane substantially accelerates both the
optimization process (by over 100 times) and the inference process (by over 15
times) while maintaining or even improving the quality across a variety of
non-rigid deformations. This significant performance improvement promises to be
a valuable asset for future intraoperative surgical applications. The code of
our project is now available at https://github.com/Loping151/ForPlane.
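The core idea above, factorizing a 4D (x, y, z, t) volume into orthogonal 2D neural planes, can be illustrated with a minimal sketch. This is not Forplane's actual implementation; the resolution, feature dimension, nearest-neighbor lookup, and Hadamard combination below are simplifying assumptions in the spirit of plane-based radiance field methods.

```python
import numpy as np

# Illustrative sketch: six orthogonal 2D feature planes stand in for a
# dense 4D grid. Spatial planes (xy, xz, yz) form the static field;
# space-time planes (xt, yt, zt) form the dynamic field.
R, F = 32, 8  # plane resolution and feature dimension (illustrative values)
rng = np.random.default_rng(0)
planes = {name: rng.standard_normal((R, R, F))
          for name in ("xy", "xz", "yz", "xt", "yt", "zt")}

def query(x, y, z, t):
    """Fetch a feature vector for a normalized 4D point by combining
    nearest-neighbor lookups from each orthogonal plane."""
    coords = {"x": x, "y": y, "z": z, "t": t}
    feat = np.ones(F)
    for name, plane in planes.items():
        i = int(coords[name[0]] * (R - 1))
        j = int(coords[name[1]] * (R - 1))
        feat *= plane[i, j]  # Hadamard product of per-plane features
    return feat

f = query(0.5, 0.25, 0.75, 0.1)  # feature vector of shape (F,)
```

The memory argument is visible directly in the shapes: six R x R x F planes store 6*R^2*F values, versus R^4*F for a dense 4D grid, which is what makes optimization tractable.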
Related papers
- Deform3DGS: Flexible Deformation for Fast Surgical Scene Reconstruction with Gaussian Splatting [20.147880388740287]
This work presents a novel fast reconstruction framework, termed Deform3DGS, for deformable tissues during endoscopic surgery.
We introduce 3D Gaussian Splatting, an emerging technology in real-time 3D rendering, into surgical scenes by integrating a point cloud.
We also propose a novel flexible deformation modeling scheme (FDM) to learn tissue deformation dynamics at the level of individual Gaussians.
arXiv Detail & Related papers (2024-05-28T05:14:57Z) - FLex: Joint Pose and Dynamic Radiance Fields Optimization for Stereo Endoscopic Videos [79.50191812646125]
Reconstruction of endoscopic scenes is an important asset for various medical applications, from post-surgery analysis to educational training.
We address the challenging setup of a moving endoscope within a highly dynamic environment of deforming tissue.
We propose an implicit scene separation into multiple overlapping 4D neural radiance fields (NeRFs) and a progressive optimization scheme jointly optimizing for reconstruction and camera poses from scratch.
This improves ease of use and makes it possible to scale reconstruction in time to surgical videos of 5,000 frames and more, an improvement of over ten times compared to the state of the art, while remaining agnostic to external tracking information.
arXiv Detail & Related papers (2024-03-18T19:13:02Z) - VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [51.49008959209671]
We introduce VoxNeRF, a novel approach that leverages volumetric representations to enhance the quality and efficiency of indoor view synthesis.
We employ multi-resolution hash grids to adaptively capture spatial features, effectively managing occlusions and the intricate geometry of indoor scenes.
We validate our approach against three public indoor datasets and demonstrate that VoxNeRF outperforms state-of-the-art methods.
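The multi-resolution hash grids mentioned above can be sketched briefly. This is a hypothetical illustration in the spirit of hash-grid encodings, not VoxNeRF's actual code; the table size, feature dimension, level count, and prime multipliers below are conventional assumed choices.

```python
import numpy as np

# Illustrative sketch: hash each 3D point at several grid resolutions
# into small feature tables, then concatenate the per-level features.
T, F, LEVELS = 2 ** 14, 2, 4  # table size, feature dim, levels (assumed)
rng = np.random.default_rng(1)
tables = [rng.standard_normal((T, F)) for _ in range(LEVELS)]
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_features(xyz):
    """Concatenate features looked up from each resolution level."""
    feats = []
    for level, table in enumerate(tables):
        res = 16 * 2 ** level                      # per-level grid resolution
        cell = (np.asarray(xyz) * res).astype(np.uint64)
        idx = int(np.bitwise_xor.reduce(cell * PRIMES) % T)  # spatial hash
        feats.append(table[idx])
    return np.concatenate(feats)                   # shape (LEVELS * F,)

f = hash_features((0.3, 0.6, 0.9))
```

Finer levels capture intricate indoor geometry while coarser levels cover large smooth regions, which is why such encodings adapt well to occlusion-heavy scenes.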
arXiv Detail & Related papers (2023-11-09T11:32:49Z) - Accelerating Multiframe Blind Deconvolution via Deep Learning [0.0]
Ground-based solar image restoration is a computationally expensive procedure.
We propose a new method to accelerate the restoration based on algorithm unrolling.
We show that both methods significantly reduce the restoration time compared to the standard optimization procedure.
arXiv Detail & Related papers (2023-06-21T07:53:00Z) - Neural LerPlane Representations for Fast 4D Reconstruction of Deformable Tissues [52.886545681833596]
LerPlane is a novel method for fast and accurate reconstruction of surgical scenes under a single-viewpoint setting.
LerPlane treats surgical procedures as 4D volumes and factorizes them into explicit 2D planes of static and dynamic fields.
LerPlane shares static fields, significantly reducing the workload of dynamic tissue modeling.
arXiv Detail & Related papers (2023-05-31T14:38:35Z) - Distributed Neural Representation for Reactive in situ Visualization [23.80657290203846]
Implicit neural representations (INRs) have emerged as a powerful tool for compressing large-scale volume data.
We develop a distributed neural representation and optimize it for in situ visualization.
Our technique eliminates data exchanges between processes, achieving state-of-the-art compression speed, quality and ratios.
arXiv Detail & Related papers (2023-03-28T03:55:47Z) - Fast-SNARF: A Fast Deformer for Articulated Neural Fields [92.68788512596254]
We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space.
Fast-SNARF is a drop-in replacement for our previous work, SNARF, while significantly improving its computational efficiency.
Because learning of deformation maps is a crucial component in many 3D human avatar methods, we believe that this work represents a significant step towards the practical creation of 3D virtual humans.
arXiv Detail & Related papers (2022-11-28T17:55:34Z) - E-DSSR: Efficient Dynamic Surgical Scene Reconstruction with Transformer-based Stereoscopic Depth Perception [15.927060244702686]
We present an efficient reconstruction pipeline for highly dynamic surgical scenes that runs at 28 fps.
Specifically, we design a transformer-based stereoscopic depth perception for efficient depth estimation.
We evaluate the proposed pipeline on two datasets, the public Hamlyn Centre Endoscopic Video dataset and our in-house DaVinci robotic surgery dataset.
arXiv Detail & Related papers (2021-07-01T05:57:41Z) - Trans-SVNet: Accurate Phase Recognition from Surgical Videos via Hybrid Embedding Aggregation Transformer [57.18185972461453]
We introduce, for the first time in surgical workflow analysis, a Transformer to reconsider the previously ignored complementary effects of spatial and temporal features for accurate phase recognition.
Our framework is lightweight and processes the hybrid embeddings in parallel to achieve a high inference speed.
arXiv Detail & Related papers (2021-03-17T15:12:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.