SG-GAN: Fine Stereoscopic-Aware Generation for 3D Brain Point Cloud
Up-sampling from a Single Image
- URL: http://arxiv.org/abs/2305.12646v1
- Date: Mon, 22 May 2023 02:42:12 GMT
- Title: SG-GAN: Fine Stereoscopic-Aware Generation for 3D Brain Point Cloud
Up-sampling from a Single Image
- Authors: Bowen Hu, Baiying Lei, Shuqiang Wang
- Abstract summary: A novel model named stereoscopic-aware graph generative adversarial network (SG-GAN) is proposed to generate fine high-density brain point clouds.
The model shows superior performance in visual quality, objective measurements, and classification.
- Score: 18.30982492742905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In minimally-invasive brain surgeries with indirect and narrow operating
environments, 3D brain reconstruction is crucial. However, as the accuracy
requirements of new minimally-invasive surgeries (such as brain-computer
interface surgery) continue to rise, the outputs of conventional 3D
reconstruction, such as point clouds (PCs), face the challenges that sample
points are too sparse and precision is insufficient. On the other hand,
there is a scarcity of high-density point cloud datasets, which makes it
challenging to train models for direct reconstruction of high-density brain
point clouds. In this work, a novel model named stereoscopic-aware graph
generative adversarial network (SG-GAN) with two stages is proposed to generate
fine high-density PC conditioned on a single image. The Stage-I GAN sketches
the primitive shape and basic structure of the organ based on the given image,
yielding Stage-I point clouds. The Stage-II GAN takes the results from Stage-I
and generates high-density point clouds with detailed features. The Stage-II
GAN is capable of correcting defects and restoring the detailed features of the
region of interest (ROI) through the up-sampling process. Furthermore, a
parameter-free-attention-based free-transforming module is developed to learn
efficient features of the input while maintaining strong performance.
Compared with existing methods, the SG-GAN model shows superior performance
in visual quality, objective measurements, and classification, as demonstrated
by comprehensive results on several evaluation metrics, including PC-to-PC
error and Chamfer distance.
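The abstract names PC-to-PC error and Chamfer distance as evaluation metrics. As a minimal illustrative sketch (not the paper's code; the exact definitions, e.g. squared vs. unsquared distances and any normalization, may differ from those the authors use), both can be computed with NumPy:

```python
import numpy as np

def chamfer_distance(pc_a, pc_b):
    """Symmetric Chamfer distance between point clouds pc_a (N,3) and pc_b (M,3).

    Sum of mean squared nearest-neighbour distances in both directions.
    """
    # Pairwise squared distances, shape (N, M)
    d2 = np.sum((pc_a[:, None, :] - pc_b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def pc_to_pc_error(pred, gt):
    """One-directional PC-to-PC error: mean nearest-neighbour distance
    from each predicted point to the ground-truth cloud."""
    d2 = np.sum((pred[:, None, :] - gt[None, :, :]) ** 2, axis=-1)
    return np.sqrt(d2.min(axis=1)).mean()
```

Note that the brute-force pairwise distance matrix is O(N*M) in memory; for dense up-sampled clouds a KD-tree nearest-neighbour query would be the practical choice.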
Related papers
- DM3D: Distortion-Minimized Weight Pruning for Lossless 3D Object Detection [42.07920565812081]
We propose a novel post-training weight pruning scheme for 3D object detection.
It determines redundant parameters in the pretrained model that lead to minimal distortion in both locality and confidence.
This framework aims to minimize detection distortion of network output to maximally maintain detection precision.
arXiv Detail & Related papers (2024-07-02T09:33:32Z)
- Zero123-6D: Zero-shot Novel View Synthesis for RGB Category-level 6D Pose Estimation [66.3814684757376]
This work presents Zero123-6D, the first work to demonstrate the utility of Diffusion Model-based novel-view-synthesizers in enhancing RGB 6D pose estimation at category-level.
The method reduces data requirements, removes the need for depth information in the zero-shot category-level 6D pose estimation task, and improves performance, as demonstrated quantitatively through experiments on the CO3D dataset.
arXiv Detail & Related papers (2024-03-21T10:38:18Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity, evenly distributed 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
- Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z)
- 3D Brain Reconstruction by Hierarchical Shape-Perception Network from a Single Incomplete Image [20.133967825823312]
A novel hierarchical shape-perception network (HSPN) is proposed to reconstruct the 3D point clouds (PCs) of specific brains.
With the proposed HSPN, 3D shape perception and completion can be achieved spontaneously.
arXiv Detail & Related papers (2021-07-23T03:20:42Z)
- A Point Cloud Generative Model via Tree-Structured Graph Convolutions for 3D Brain Shape Reconstruction [31.436531681473753]
It is almost impossible to obtain the intraoperative 3D shape information by using physical methods such as sensor scanning.
In this paper, a general generative adversarial network (GAN) architecture is proposed to reconstruct the 3D point clouds (PCs) of brains by using one single 2D image.
arXiv Detail & Related papers (2021-07-21T07:57:37Z)
- Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN [52.851990439671475]
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z)
- Attention-Guided Version of 2D UNet for Automatic Brain Tumor Segmentation [2.371982686172067]
Gliomas are the most common and aggressive brain tumors, and at their highest grade they lead to a short life expectancy.
Deep convolutional neural networks (DCNNs) have achieved a remarkable performance in brain tumor segmentation.
However, this task remains difficult owing to the highly varying intensity and appearance of gliomas.
arXiv Detail & Related papers (2020-04-04T20:09:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.