DDGS-CT: Direction-Disentangled Gaussian Splatting for Realistic Volume Rendering
- URL: http://arxiv.org/abs/2406.02518v1
- Date: Tue, 4 Jun 2024 17:39:31 GMT
- Title: DDGS-CT: Direction-Disentangled Gaussian Splatting for Realistic Volume Rendering
- Authors: Zhongpai Gao, Benjamin Planche, Meng Zheng, Xiao Chen, Terrence Chen, Ziyan Wu
- Abstract summary: Digitally reconstructed radiographs (DRRs) are simulated 2D X-ray images generated from 3D CT volumes.
We present a novel approach that marries realistic physics-inspired X-ray simulation with efficient, differentiable DRR generation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digitally reconstructed radiographs (DRRs) are simulated 2D X-ray images generated from 3D CT volumes, widely used in preoperative settings but limited in intraoperative applications due to computational bottlenecks, especially for accurate but heavy physics-based Monte Carlo methods. While analytical DRR renderers offer greater efficiency, they overlook anisotropic X-ray image formation phenomena, such as Compton scattering. We present a novel approach that marries realistic physics-inspired X-ray simulation with efficient, differentiable DRR generation using 3D Gaussian splatting (3DGS). Our direction-disentangled 3DGS (DDGS) method separates the radiosity contribution into isotropic and direction-dependent components, approximating complex anisotropic interactions without intricate runtime simulations. Additionally, we adapt the 3DGS initialization to account for tomography data properties, enhancing accuracy and efficiency. Our method outperforms state-of-the-art techniques in image accuracy. Furthermore, our DDGS shows promise for intraoperative applications and inverse problems such as pose registration, delivering superior registration accuracy and runtime performance compared to analytical DRR methods.
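The core idea of the abstract — splitting each Gaussian's radiosity contribution into an isotropic part and a direction-dependent part evaluated along the viewing ray — can be illustrated with a minimal sketch. The paper's exact parameterization is not given here; the degree-1 spherical-harmonics directional term and all function names below are assumptions for illustration only.

```python
import numpy as np

C1 = 0.4886025119029199  # degree-1 real spherical-harmonics coefficient

def sh_basis_deg1(d):
    """Degree-1 real spherical-harmonics basis for a unit direction d."""
    x, y, z = d
    return np.array([-C1 * y, C1 * z, -C1 * x])

def ddgs_radiosity(iso, sh_coeffs, ray_dir):
    """Per-Gaussian contribution split into an isotropic component and a
    direction-dependent component evaluated along the viewing ray."""
    d = np.array(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    return iso + sh_basis_deg1(d) @ sh_coeffs
```

With zero directional coefficients this reduces to a plain isotropic splat; the degree-1 term is antisymmetric, so reversing the ray direction flips its sign.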
Related papers
- Gaussian Representation for Deformable Image Registration [12.226244219255197]
We introduce a novel DIR approach employing parametric 3D Gaussian control points.
It provides an explicit and flexible representation for spatial fields between 3D medical images.
We validated our approach on the 4D-CT lung DIR-Lab and cardiac ACDC datasets, achieving an average target registration error (TRE) of 1.06 mm within a much-improved processing time of 2.43 seconds.
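A displacement field driven by sparse parametric 3D Gaussian control points, as this summary describes, can be sketched as a Gaussian-weighted sum of control-point displacements. The parameterization and names below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def gaussian_cp_displacement(points, centers, sigmas, displacements):
    """Evaluate a dense deformation from sparse 3D Gaussian control points:
    each query point receives a Gaussian-weighted sum of the control-point
    displacement vectors."""
    diff = points[:, None, :] - centers[None, :, :]        # (N, K, 3)
    sq = np.sum(diff ** 2, axis=-1)                        # (N, K)
    w = np.exp(-sq / (2.0 * sigmas[None, :] ** 2))         # (N, K)
    return points + w @ displacements                      # (N, 3)
```

Points near a control point inherit its displacement almost fully, while distant points are left essentially unmoved, giving an explicit, spatially smooth field.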
arXiv Detail & Related papers (2024-06-05T15:44:54Z)
- R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction.
This paper introduces R2-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z)
- Gaussian Opacity Fields: Efficient and Compact Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and compact surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z)
- End-to-End Rate-Distortion Optimized 3D Gaussian Representation [33.20840558425759]
We formulate the compact 3D Gaussian learning as an end-to-end Rate-Distortion Optimization problem.
We introduce dynamic pruning and entropy-constrained vector quantization (ECVQ) that optimize the rate and distortion at the same time.
We verify our method on both real and synthetic scenes, showing that RDO-Gaussian reduces the size of the 3D Gaussian representation by over 40x.
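The entropy-constrained vector quantization (ECVQ) mentioned above assigns each input vector to the codeword minimizing distortion plus lambda times its code length, which is how rate and distortion are optimized jointly. A minimal sketch follows; function and variable names are illustrative, not the paper's API.

```python
import numpy as np

def ecvq_assign(x, codebook, probs, lam):
    """Entropy-constrained VQ assignment: minimize squared-error
    distortion + lambda * rate, where rate = -log2(codeword probability)."""
    dists = np.sum((codebook - x) ** 2, axis=1)  # distortion per codeword
    rates = -np.log2(probs)                      # ideal code length in bits
    return int(np.argmin(dists + lam * rates))
```

With lam = 0 this reduces to plain nearest-neighbor quantization; a larger lam biases assignments toward frequent, cheap-to-code codewords, trading distortion for rate.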
arXiv Detail & Related papers (2024-04-09T14:37:54Z)
- 2D Gaussian Splatting for Geometrically Accurate Radiance Fields [50.056790168812114]
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high quality novel view synthesis and fast rendering speed without baking.
We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images.
We demonstrate that our differentiable terms allow for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering.
arXiv Detail & Related papers (2024-03-26T17:21:24Z)
- Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting [57.80942520483354]
3D-GS frequently encounters difficulties in accurately modeling specular and anisotropic components.
We introduce Spec-Gaussian, an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
arXiv Detail & Related papers (2024-02-24T17:22:15Z)
- GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS on the evaluated dataset, exhibiting an improvement of 1.15 dB in PSNR.
arXiv Detail & Related papers (2024-02-22T16:00:20Z)
- Plug-and-Play Regularization on Magnitude with Deep Priors for 3D Near-Field MIMO Imaging [0.0]
Near-field radar imaging systems are used in a wide range of applications such as concealed weapon detection and medical diagnosis.
We consider the problem of reconstructing the three-dimensional (3D) complex-valued reflectivity by enforcing regularization on its magnitude.
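Regularizing only the magnitude of a complex-valued field typically means denoising |x| while reattaching the original phase. A minimal plug-and-play-style sketch of such a step follows; the denoiser interface and function names are assumptions, not the paper's method.

```python
import numpy as np

def magnitude_regularization_step(x, denoiser):
    """Apply a (plug-and-play) denoiser to the magnitude of a complex
    reflectivity while leaving its phase unchanged."""
    mag = np.abs(x)
    phase = np.exp(1j * np.angle(x))
    return denoiser(mag) * phase
```

In a full reconstruction loop, a step like this would alternate with a data-fidelity update tied to the radar forward model.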
arXiv Detail & Related papers (2023-12-26T12:25:09Z)
- Sparse-view CT Reconstruction with 3D Gaussian Volumetric Representation [13.667470059238607]
Sparse-view CT is a promising strategy for reducing the radiation dose of traditional CT scans.
Recently, 3D Gaussian splatting has been applied to model complex natural scenes.
We investigate their potential for sparse-view CT reconstruction.
arXiv Detail & Related papers (2023-12-25T09:47:33Z)
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [76.52007427483396]
GIR is a 3D Gaussian Inverse Rendering method for relightable scene factorization.
Our method utilizes 3D Gaussians to estimate the material properties, illumination, and geometry of an object from multi-view images.
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.