Sparse-view CT Reconstruction with 3D Gaussian Volumetric Representation
- URL: http://arxiv.org/abs/2312.15676v1
- Date: Mon, 25 Dec 2023 09:47:33 GMT
- Title: Sparse-view CT Reconstruction with 3D Gaussian Volumetric Representation
- Authors: Yingtai Li, Xueming Fu, Shang Zhao, Ruiyang Jin, S. Kevin Zhou
- Abstract summary: Sparse-view CT is a promising strategy for reducing the radiation dose of traditional CT scans.
Recently, 3D Gaussians have been applied to model complex natural scenes.
We investigate their potential for sparse-view CT reconstruction.
- Score: 13.667470059238607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse-view CT is a promising strategy for reducing the radiation dose of
traditional CT scans, but reconstructing high-quality images from incomplete
and noisy data is challenging. Recently, 3D Gaussians have been applied to model
complex natural scenes, demonstrating fast convergence and better rendering of
novel views compared to implicit neural representations (INRs). Taking
inspiration from the successful application of 3D Gaussians in natural scene
modeling and novel view synthesis, we investigate their potential for
sparse-view CT reconstruction. We leverage prior information from the
filtered-backprojection reconstructed image to initialize the Gaussians, and
update their parameters by comparing differences in the projection space.
Performance is further enhanced by adaptive density control. Compared to INRs,
3D Gaussians benefit more from prior information: they explicitly bypass learning
in void spaces and allocate capacity efficiently, accelerating convergence.
3D Gaussians also efficiently learn high-frequency details. Trained in a
self-supervised manner, 3D Gaussians avoid the need for large-scale paired
data. Our experiments on the AAPM-Mayo dataset demonstrate that 3D Gaussians
can provide superior performance compared to INR-based methods. This work is in
progress, and the code will be publicly available.
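To make the pipeline described in the abstract concrete, the sketch below shows one way its three ingredients could fit together: Gaussians seeded from an FBP reconstruction, parameters fitted by matching simulated projections against the measured sparse-view sinogram, and a crude form of density control during optimization. This is a minimal sketch, not the authors' code: the toy parallel-beam projector, the isotropic covariances, and every hyper-parameter (number of Gaussians, grid resolution, learning rate, pruning threshold) are illustrative assumptions.
```python
import math
import torch

def init_from_fbp(fbp_vol: torch.Tensor, n_gauss: int = 500):
    """Seed Gaussian centers and amplitudes at the highest-attenuation FBP voxels,
    so that void space is bypassed from the start."""
    d, h, w = fbp_vol.shape
    idx = torch.topk(fbp_vol.flatten(), n_gauss).indices
    z, y, x = idx // (h * w), (idx % (h * w)) // w, idx % w
    # Map voxel indices into the unit cube [-1, 1]^3 used by the toy projector below.
    means = torch.stack([x / (w - 1), y / (h - 1), z / (d - 1)], dim=-1) * 2 - 1
    log_amps = torch.log(fbp_vol.flatten()[idx].clamp_min(1e-4))
    log_sigmas = torch.full((n_gauss,), math.log(0.05))  # coarse initial width (assumed)
    return [p.float().clone().requires_grad_(True) for p in (means, log_amps, log_sigmas)]

def density(points, means, log_amps, log_sigmas):
    """Evaluate the isotropic Gaussian mixture at (P, 3) query points."""
    d2 = torch.cdist(points, means).pow(2)                    # (P, G) squared distances
    weights = torch.exp(log_amps) * torch.exp(-0.5 * d2 / torch.exp(2 * log_sigmas))
    return weights.sum(-1)                                    # (P,) attenuation values

def project(means, log_amps, log_sigmas, angle, res=32):
    """Toy parallel-beam projection: rotate a sampling grid about z and integrate along x."""
    lin = torch.linspace(-1, 1, res)
    zz, yy, xx = torch.meshgrid(lin, lin, lin, indexing="ij")
    pts = torch.stack([xx, yy, zz], dim=-1).reshape(-1, 3)
    c, s = math.cos(angle), math.sin(angle)
    rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    vol = density(pts @ rot.T, means, log_amps, log_sigmas).reshape(res, res, res)
    return vol.sum(dim=2)                                     # (res, res) line integrals

def reconstruct(fbp_vol, sinogram, angles, iters=200):
    """Self-supervised fit: match simulated projections to the measured sparse-view sinogram.
    `angles` is a list of view angles in radians; `sinogram[k]` is the (res, res) projection."""
    means, log_amps, log_sigmas = init_from_fbp(fbp_vol)
    opt = torch.optim.Adam([means, log_amps, log_sigmas], lr=1e-2)
    for it in range(iters):
        k = it % len(angles)
        pred = project(means, log_amps, log_sigmas, angles[k])
        loss = torch.mean((pred - sinogram[k]) ** 2)          # difference in projection space
        opt.zero_grad()
        loss.backward()
        opt.step()
        if it % 50 == 49:
            # Crude stand-in for adaptive density control: prune near-transparent Gaussians.
            # (The paper also densifies, e.g. by cloning/splitting; omitted to keep this short.)
            keep = torch.exp(log_amps.detach()) > 1e-3
            means, log_amps, log_sigmas = [p.detach()[keep].clone().requires_grad_(True)
                                           for p in (means, log_amps, log_sigmas)]
            opt = torch.optim.Adam([means, log_amps, log_sigmas], lr=1e-2)
    return means, log_amps, log_sigmas
```
The paper itself renders through a CT-specific projector and uses full 3DGS-style adaptive density control (densification as well as pruning); the sketch only prunes, to keep the loop short.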
Related papers
- 4DRGS: 4D Radiative Gaussian Splatting for Efficient 3D Vessel Reconstruction from Sparse-View Dynamic DSA Images [49.170407434313475]
Existing methods often produce suboptimal results or require excessive computation time.
We propose 4D radiative Gaussian splatting (4DRGS) to achieve high-quality reconstruction efficiently.
4DRGS achieves impressive results within 5 minutes of training, which is 32x faster than the state-of-the-art method.
arXiv Detail & Related papers (2024-12-17T13:51:56Z) - Discretized Gaussian Representation for Tomographic Reconstruction [20.390232991700977]
We propose a novel Discretized Gaussian Representation (DGR) for Computed Tomography (CT) reconstruction.
DGR directly reconstructs the 3D volume using a set of discretized Gaussian functions in an end-to-end manner.
Our experiments on both real-world and synthetic datasets demonstrate that DGR achieves superior reconstruction quality and significantly improved computational efficiency.
arXiv Detail & Related papers (2024-11-07T16:32:29Z) - 3DGR-CAR: Coronary artery reconstruction from ultra-sparse 2D X-ray views with a 3D Gaussians representation [13.829610843207746]
Reconstructing 3D coronary arteries is important for coronary artery disease diagnosis, treatment planning and operation navigation.
Traditional reconstruction techniques often require many projections, while reconstruction from sparse-view X-ray projections is a potential way of reducing radiation dose.
We propose 3DGR-CAR, a 3D Gaussian Representation for Coronary Artery Reconstruction from ultra-sparse X-ray projections.
arXiv Detail & Related papers (2024-10-01T05:00:47Z) - 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z) - Learning 3D Gaussians for Extremely Sparse-View Cone-Beam CT Reconstruction [9.848266253196307]
Cone-Beam Computed Tomography (CBCT) is an indispensable technique in medical imaging, yet the associated radiation exposure raises concerns in clinical practice.
We propose a novel reconstruction framework, namely DIF-Gaussian, which leverages 3D Gaussians to represent the feature distribution in the 3D space.
We evaluate DIF-Gaussian on two public datasets, showing significantly better reconstruction performance than previous state-of-the-art methods.
arXiv Detail & Related papers (2024-07-01T08:48:04Z) - CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
arXiv Detail & Related papers (2024-06-21T08:38:30Z) - Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting [33.01987451251659]
3D Gaussian Splatting (3DGS) has emerged as a promising technique capable of real-time rendering with high-quality 3D reconstruction.
Despite its potential, 3DGS encounters challenges, including needle-like artifacts, suboptimal geometries, and inaccurate normals.
We introduce effective rank as a regularization, which constrains the structure of the Gaussians.
arXiv Detail & Related papers (2024-06-17T15:51:59Z) - PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled spatial sensitivity pruning score that outperforms current approaches.
We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model.
Our pipeline increases the average rendering speed of 3D-GS by 2.65x while retaining more salient foreground information.
arXiv Detail & Related papers (2024-06-14T17:53:55Z) - RaDe-GS: Rasterizing Depth in Gaussian Splatting [32.38730602146176]
Gaussian Splatting (GS) has proven to be highly effective in novel view synthesis, achieving high-quality and real-time rendering.
Our work achieves a Chamfer distance error comparable to Neuralangelo on the DTU dataset and maintains computational efficiency similar to the original 3D GS methods.
arXiv Detail & Related papers (2024-06-03T15:56:58Z) - R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction.
This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z) - Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and adaptive surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z) - AbsGS: Recovering Fine Details for 3D Gaussian Splatting [10.458776364195796]
The 3D Gaussian Splatting (3D-GS) technique couples 3D Gaussian primitives with differentiable rasterization to achieve high-quality novel view synthesis results.
However, 3D-GS frequently suffers from an over-reconstruction issue in intricate scenes containing high-frequency details, leading to blurry rendered images.
We present a comprehensive analysis of the cause of the aforementioned artifacts, namely gradient collision.
Our strategy efficiently identifies large Gaussians in over-reconstructed regions and recovers fine details by splitting them.
arXiv Detail & Related papers (2024-04-16T11:44:12Z) - GaSpCT: Gaussian Splatting for Novel CT Projection View Synthesis [0.6990493129893112]
GaSpCT is a novel view synthesis and 3D scene representation method used to generate novel projection views for Computed Tomography (CT) scans.
We adapt the Gaussian Splatting framework to enable novel view synthesis in CT based on limited sets of 2D image projections.
We evaluate the performance of our model using brain CT scans from the Parkinson's Progression Markers Initiative (PPMI) dataset.
arXiv Detail & Related papers (2024-04-04T00:28:50Z) - CoherentGS: Sparse Novel View Synthesis with Coherent 3D Gaussians [18.42203035154126]
We introduce a structured Gaussian representation that can be controlled in 2D image space.
We then constrain the Gaussians, in particular their positions, and prevent them from moving independently during optimization.
We demonstrate significant improvements compared to the state-of-the-art sparse-view NeRF-based approaches on a variety of scenes.
arXiv Detail & Related papers (2024-03-28T15:27:13Z) - GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS on the dataset, exhibiting an improvement of 1.15dB in terms of PSNR.
arXiv Detail & Related papers (2024-02-22T16:00:20Z) - Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z) - GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting [51.96353586773191]
We introduce GS-SLAM, which is the first to utilize a 3D Gaussian representation in a Simultaneous Localization and Mapping (SLAM) system.
Our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedup to map optimization and RGB-D rendering.
Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica and TUM-RGBD datasets.
arXiv Detail & Related papers (2023-11-20T12:08:23Z) - NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.