Identifying Unnecessary 3D Gaussians using Clustering for Fast Rendering
of 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2402.13827v1
- Date: Wed, 21 Feb 2024 14:16:49 GMT
- Title: Identifying Unnecessary 3D Gaussians using Clustering for Fast Rendering
of 3D Gaussian Splatting
- Authors: Joongho Jo, Hyeongwon Kim, and Jongsun Park
- Abstract summary: 3D-GS is a new rendering approach that outperforms the neural radiance field (NeRF) in terms of both speed and image quality.
We propose a computational reduction technique that quickly identifies unnecessary 3D Gaussians in real-time for rendering the current view.
For the Mip-NeRF360 dataset, the proposed technique excludes 63% of 3D Gaussians on average before the 2D image projection, which reduces the overall rendering computation by almost 38.3% without sacrificing peak signal-to-noise ratio (PSNR).
The proposed accelerator also achieves a speedup of 10.7x compared to a GPU.
- Score: 2.878831747437321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian splatting (3D-GS) is a new rendering approach that outperforms
the neural radiance field (NeRF) in terms of both speed and image quality.
3D-GS represents 3D scenes by utilizing millions of 3D Gaussians and projects
these Gaussians onto the 2D image plane for rendering. However, during the
rendering process, a substantial number of unnecessary 3D Gaussians exist for
the current view direction, resulting in significant computation costs
associated with their identification. In this paper, we propose a computational
reduction technique that quickly identifies unnecessary 3D Gaussians in
real-time for rendering the current view without compromising image quality.
This is accomplished through the offline clustering of 3D Gaussians that are
close in distance, followed by the projection of these clusters onto a 2D image
plane during runtime. Additionally, we analyze the bottleneck associated with
the proposed technique when executed on GPUs and propose an efficient hardware
architecture that seamlessly supports the proposed scheme. For the Mip-NeRF360
dataset, the proposed technique excludes 63% of 3D Gaussians on average before
the 2D image projection, which reduces the overall rendering computation by
almost 38.3% without sacrificing peak signal-to-noise ratio (PSNR). The
proposed accelerator also achieves a speedup of 10.7x compared to a GPU.
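The core idea — clustering nearby 3D Gaussians offline, then testing each cluster against the current view at render time so that whole clusters can be skipped before 2D projection — can be illustrated with a minimal sketch. The voxel-grid clustering, the bounding-sphere visibility test, and the camera conventions (4x4 view/projection matrices, camera looking down +z) below are simplifying assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): cluster Gaussians by position offline,
# then cull whole clusters outside the current view before 2D projection.
import numpy as np

def cluster_gaussians(means, cell_size=0.5):
    """Offline step: group Gaussians whose centers fall into the same voxel cell."""
    cells = np.floor(means / cell_size).astype(np.int64)       # (N, 3) voxel indices
    buckets = {}
    for i, key in enumerate(map(tuple, cells)):
        buckets.setdefault(key, []).append(i)
    clusters = []
    for idx in buckets.values():
        idx = np.asarray(idx)
        center = means[idx].mean(axis=0)
        # Bounding-sphere radius over member centers (ignores each Gaussian's own
        # extent for brevity; a tighter test would pad by the Gaussian scales).
        radius = np.linalg.norm(means[idx] - center, axis=1).max()
        clusters.append({"idx": idx, "center": center, "radius": radius})
    return clusters

def visible_gaussian_indices(clusters, view, proj, near=0.01):
    """Runtime step: keep only Gaussians whose cluster bounding sphere may
    land inside the image after projection."""
    keep = []
    for c in clusters:
        p = view @ np.append(c["center"], 1.0)                 # cluster center in camera space
        if p[2] + c["radius"] < near:                          # sphere entirely behind the near plane
            continue
        q = proj @ p                                           # clip-space position
        if q[3] <= 1e-6:                                       # degenerate depth: keep conservatively
            keep.append(c["idx"])
            continue
        ndc = q[:2] / q[3]                                     # normalized device coordinates
        r_ndc = c["radius"] / max(p[2], 1e-6)                  # rough projected radius
        if np.all(np.abs(ndc) - r_ndc <= 1.0):                 # sphere may overlap the image
            keep.append(c["idx"])
    return np.concatenate(keep) if keep else np.empty(0, dtype=np.int64)
```

Usage would look like `indices = visible_gaussian_indices(cluster_gaussians(means), view, proj)`; only the selected Gaussians (and their attributes) would then proceed to the usual 3D-GS projection and rasterization stages.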
Related papers
- PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled spatial sensitivity pruning score that outperforms current approaches.
We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model.
Our pipeline increases the average rendering speed of 3D-GS by 2.65$\times$ while retaining more salient foreground information.
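A hedged sketch of the multi-round prune-refine idea described above follows; the `toy_score` below is an assumed stand-in for the paper's principled spatial-sensitivity score, and `finetune` stands for re-optimizing the surviving Gaussians on the training views.

```python
# Illustrative prune-refine loop (assumptions throughout; not the paper's code).
import numpy as np

def toy_score(opacity, scales):
    # Crude importance proxy (opacity times a volume term); the paper instead
    # uses a principled spatial-sensitivity score.
    return opacity * np.prod(scales, axis=1)

def prune_refine(opacity, scales, rounds=3, keep_ratio=0.8, finetune=None):
    alive = np.arange(opacity.shape[0])
    for _ in range(rounds):
        scores = toy_score(opacity[alive], scales[alive])
        order = np.argsort(scores)[::-1]                         # most important first
        alive = alive[order[: max(1, int(len(alive) * keep_ratio))]]
        if finetune is not None:                                 # re-optimize survivors
            opacity, scales = finetune(alive, opacity, scales)
    return alive                                                 # indices of retained Gaussians
```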
arXiv Detail & Related papers (2024-06-14T17:53:55Z)
- Adversarial Generation of Hierarchical Gaussians for 3D Generative Model [20.833116566243408]
In this paper, we exploit Gaussian as a 3D representation for 3D GANs by leveraging its efficient and explicit characteristics.
We introduce a generator architecture with a hierarchical multi-scale Gaussian representation that effectively regularizes the position and scale of generated Gaussians.
Experimental results demonstrate that our method achieves a significantly faster rendering speed (100$\times$) compared to state-of-the-art 3D consistent GANs.
arXiv Detail & Related papers (2024-06-05T05:52:20Z)
- R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction.
This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z)
- F-3DGS: Factorized Coordinates and Representations for 3D Gaussian Splatting [13.653629893660218]
We propose Factorized 3D Gaussian Splatting (F-3DGS) as an alternative to neural radiance field (NeRF) rendering methods.
F-3DGS achieves a significant reduction in storage costs while maintaining comparable quality in rendered images.
arXiv Detail & Related papers (2024-05-27T11:55:49Z)
- Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting [55.71424195454963]
Spec-Gaussian is an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
This improvement extends the applicability of 3D GS to handle intricate scenarios with specular and anisotropic surfaces.
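For context, a standard anisotropic spherical Gaussian lobe — the general form such an appearance field builds on; whether Spec-Gaussian uses exactly this parameterization is an assumption here — can be written as:

```latex
% Standard anisotropic spherical Gaussian (ASG) lobe; an assumed reference form,
% not necessarily Spec-Gaussian's exact parameterization.
% v: view direction; (x, y, z): orthonormal lobe frame; lambda, mu > 0: bandwidths; a: amplitude.
\mathrm{ASG}(v) = a \,\max(v \cdot z,\, 0)\,
                  \exp\!\left(-\lambda (v \cdot x)^2 - \mu (v \cdot y)^2\right)
```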
arXiv Detail & Related papers (2024-02-24T17:22:15Z)
- GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
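The generalized exponential function referenced above has the standard form below, where the shape parameter β controls the falloff and β = 2 recovers an ordinary Gaussian:

```latex
% Generalized exponential (generalized Gaussian) kernel used as the splatting primitive.
% mu: center; alpha > 0: scale; beta > 0: shape (beta = 2 gives a Gaussian); A: amplitude.
f(x \mid \mu, \alpha, \beta) = A \exp\!\left(-\left(\frac{\lvert x - \mu \rvert}{\alpha}\right)^{\beta}\right)
```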
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
- AGG: Amortized Generative 3D Gaussians for Single Image to 3D [108.38567665695027]
We introduce an Amortized Generative 3D Gaussian framework (AGG) that instantly produces 3D Gaussians from a single image.
AGG decomposes the generation of 3D Gaussian locations and other appearance attributes for joint optimization.
We propose a cascaded pipeline that first generates a coarse representation of the 3D data and later upsamples it with a 3D Gaussian super-resolution module.
arXiv Detail & Related papers (2024-01-08T18:56:33Z)
- Multi-Scale 3D Gaussian Splatting for Anti-Aliased Rendering [48.41629250718956]
3D Gaussians have recently emerged as a highly efficient representation for 3D reconstruction and rendering.
Despite their high rendering quality and speed at high resolutions, both deteriorate drastically when rendering at lower resolutions or from faraway camera positions.
We propose a multi-scale 3D Gaussian splatting algorithm, which maintains Gaussians at different scales to represent the same scene.
Our algorithm can achieve 13%-66% PSNR and 160%-2400% rendering speed improvements at 4$\times$-128$\times$ scale rendering on the Mip-NeRF360 dataset.
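A minimal sketch of the scale-selection idea described above, assuming each Gaussian stores a world-space size and that pre-aggregated coarser Gaussians stand in for the ones filtered out at low resolutions (both assumptions; the paper's actual selection rule may differ):

```python
# Keep Gaussians whose projected footprint at the target resolution is at least
# about one pixel; smaller ones would be represented by coarser-scale Gaussians.
import numpy as np

def select_by_footprint(world_sizes, depths, focal_px, min_px=1.0):
    footprint_px = world_sizes * focal_px / np.maximum(depths, 1e-6)
    return footprint_px >= min_px          # boolean mask over Gaussians at this scale
```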
arXiv Detail & Related papers (2023-11-28T03:31:35Z)
- Compact 3D Gaussian Representation for Radiance Field [14.729871192785696]
We propose a learnable mask strategy to reduce the number of 3D Gaussian points without sacrificing performance.
We also propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
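One common way to realize the learnable mask described above is a straight-through estimator over per-Gaussian logits, sketched below; the thresholding and the attributes the mask is applied to are assumptions rather than the paper's exact formulation.

```python
# Learnable binary mask over Gaussians with a straight-through estimator (a sketch).
import torch

class GaussianMask(torch.nn.Module):
    def __init__(self, num_gaussians, threshold=0.01):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(num_gaussians))  # one logit per Gaussian
        self.threshold = threshold

    def forward(self, opacity, scales):
        soft = torch.sigmoid(self.logits)
        hard = (soft > self.threshold).float()
        mask = hard.detach() - soft.detach() + soft      # binary forward pass, soft gradient
        # Masked-out Gaussians contribute nothing and can be deleted after training.
        return opacity * mask, scales * mask.unsqueeze(-1)
```

During training, a small penalty on `soft.mean()` would typically be added to the loss to push the mask toward sparsity.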
arXiv Detail & Related papers (2023-11-22T20:31:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.