3D Brain Reconstruction by Hierarchical Shape-Perception Network from a
Single Incomplete Image
- URL: http://arxiv.org/abs/2107.11010v1
- Date: Fri, 23 Jul 2021 03:20:42 GMT
- Title: 3D Brain Reconstruction by Hierarchical Shape-Perception Network from a
Single Incomplete Image
- Authors: Bowen Hu, Baiying Lei, Yong Liu, Min Gan, Bingchuan Wang, Shuqiang
Wang
- Abstract summary: A novel hierarchical shape-perception network (HSPN) is proposed to reconstruct the 3D point clouds (PCs) of specific brains.
With the proposed HSPN, 3D shape perception and completion can be achieved spontaneously.
- Score: 20.133967825823312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D shape reconstruction is essential for navigating minimally invasive
and autonomous robot-guided surgeries, whose operating environments are indirect and
narrow, and several works have focused on reconstructing the 3D
shape of the surgical organ from the limited 2D information available. However,
the loss and incompleteness of such information caused by intraoperative
emergencies (such as bleeding) and risk-control conditions have not been
considered. In this paper, a novel hierarchical shape-perception network (HSPN)
is proposed to reconstruct the 3D point clouds (PCs) of specific brains from
one single incomplete image with low latency. A tree-structured predictor and
several hierarchical attention pipelines are constructed to generate point
clouds that accurately describe the incomplete images and then complete these
point clouds with high quality. Meanwhile, attention gate blocks (AGBs) are
designed to efficiently aggregate geometric local features of incomplete PCs
transmitted by hierarchical attention pipelines and internal features of
reconstructing point clouds. With the proposed HSPN, 3D shape perception and
completion are achieved spontaneously. Comprehensive results measured by
Chamfer distance and PC-to-PC error demonstrate that the proposed HSPN
outperforms other competitive methods in terms of qualitative
displays, quantitative experiments, and classification evaluation.
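The two metrics named in the abstract can be illustrated with a small NumPy sketch. The symmetric Chamfer distance below follows the standard definition (mean squared nearest-neighbour distance in both directions); the paper does not spell out its exact PC-to-PC error, so the one-directional mean nearest-neighbour distance used here is an assumption, not the authors' implementation.

```python
import numpy as np

def chamfer_distance(pc_a, pc_b):
    """Symmetric Chamfer distance between point clouds pc_a (N, 3) and pc_b (M, 3).

    Sums the mean squared distance from each point in A to its nearest
    neighbour in B, and vice versa.
    """
    # Pairwise squared distances, shape (N, M)
    diff = pc_a[:, None, :] - pc_b[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    a_to_b = d2.min(axis=1).mean()  # A -> nearest in B
    b_to_a = d2.min(axis=0).mean()  # B -> nearest in A
    return a_to_b + b_to_a

def pc_to_pc_error(pc_a, pc_b):
    """One-directional mean nearest-neighbour (Euclidean) distance from A to B.

    This is an assumed definition of "PC-to-PC error"; the paper's exact
    formulation may differ.
    """
    diff = pc_a[:, None, :] - pc_b[None, :, :]
    d = np.sqrt(np.sum(diff ** 2, axis=-1))
    return d.min(axis=1).mean()
```

For identical clouds both metrics are zero; shifting one cloud increases them monotonically, which is why they serve as reconstruction-quality measures.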
Related papers
- Self-supervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-SPC, a framework that effectively leverages both object-level and category-specific geometric similarities to complete missing structures.
Our MAL-SPC does not require any 3D complete supervision and only necessitates a single partial point cloud for each object.
arXiv Detail & Related papers (2024-07-13T06:53:39Z)
- SG-GAN: Fine Stereoscopic-Aware Generation for 3D Brain Point Cloud Up-sampling from a Single Image [18.30982492742905]
A novel model named stereoscopic-aware graph generative adversarial network (SG-GAN) is proposed to generate fine high-density brain point clouds.
The model shows superior performance in terms of visual quality, objective measurements, and performance in classification.
arXiv Detail & Related papers (2023-05-22T02:42:12Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Towards Confidence-guided Shape Completion for Robotic Applications [6.940242990198]
Deep learning has begun gaining traction as an effective means of inferring a complete 3D object representation from partial visual data.
We propose an object shape completion method based on an implicit 3D representation providing a confidence value for each reconstructed point.
We experimentally validate our approach by comparing reconstructed shapes with ground truths, and by deploying our shape completion algorithm in a robotic grasping pipeline.
arXiv Detail & Related papers (2022-09-09T13:48:24Z)
- NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction [64.36535692191343]
Implicit neural representations have shown compelling results in offline 3D reconstruction and also recently demonstrated the potential for online SLAM systems.
This paper addresses two key challenges: 1) seeking a criterion to measure the quality of the candidate viewpoints for the view planning based on the new representations, and 2) learning the criterion from data that can generalize to different scenes instead of hand-crafting one.
Our method demonstrates significant improvements on various metrics for the rendered image quality and the geometry quality of the reconstructed 3D models when compared with variants using TSDF or reconstruction without view planning.
arXiv Detail & Related papers (2022-07-22T10:05:36Z)
- Enforcing connectivity of 3D linear structures using their 2D projections [54.0598511446694]
We propose to improve the 3D connectivity of our results by minimizing a sum of topology-aware losses on their 2D projections.
This suffices to increase accuracy and to reduce the effort required to produce annotated training data.
arXiv Detail & Related papers (2022-07-14T11:42:18Z)
- A Point Cloud Generative Model via Tree-Structured Graph Convolutions for 3D Brain Shape Reconstruction [31.436531681473753]
It is almost impossible to obtain the intraoperative 3D shape information by using physical methods such as sensor scanning.
In this paper, a general generative adversarial network (GAN) architecture is proposed to reconstruct the 3D point clouds (PCs) of brains by using one single 2D image.
arXiv Detail & Related papers (2021-07-21T07:57:37Z)
- PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.