Identifying Unnecessary 3D Gaussians using Clustering for Fast Rendering
of 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2402.13827v1
- Date: Wed, 21 Feb 2024 14:16:49 GMT
- Authors: Joongho Jo, Hyeongwon Kim, and Jongsun Park
- Abstract summary: 3D-GS is a new rendering approach that outperforms the neural radiance field (NeRF) in terms of both speed and image quality.
We propose a computational reduction technique that quickly identifies unnecessary 3D Gaussians in real-time for rendering the current view.
For the Mip-NeRF360 dataset, the proposed technique excludes 63% of 3D Gaussians on average before the 2D image projection, which reduces the overall rendering computation by almost 38.3% without sacrificing peak signal-to-noise ratio (PSNR).
The proposed accelerator also achieves a speedup of 10.7x compared to a GPU.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian splatting (3D-GS) is a new rendering approach that outperforms
the neural radiance field (NeRF) in terms of both speed and image quality.
3D-GS represents 3D scenes by utilizing millions of 3D Gaussians and projects
these Gaussians onto the 2D image plane for rendering. However, during the
rendering process, a substantial number of unnecessary 3D Gaussians exist for
the current view direction, resulting in significant computation costs
associated with their identification. In this paper, we propose a computational
reduction technique that quickly identifies unnecessary 3D Gaussians in
real-time for rendering the current view without compromising image quality.
This is accomplished through the offline clustering of 3D Gaussians that are
close in distance, followed by the projection of these clusters onto a 2D image
plane during runtime. Additionally, we analyze the bottleneck associated with
the proposed technique when executed on GPUs and propose an efficient hardware
architecture that seamlessly supports the proposed scheme. For the Mip-NeRF360
dataset, the proposed technique excludes 63% of 3D Gaussians on average before
the 2D image projection, which reduces the overall rendering computation by
almost 38.3% without sacrificing peak-signal-to-noise-ratio (PSNR). The
proposed accelerator also achieves a speedup of 10.7x compared to a GPU.
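The cluster-then-cull idea in the abstract can be sketched in code. The following is a generic illustration, not the authors' implementation: it uses plain k-means for the offline grouping of nearby Gaussians and a simple bounding-sphere-vs-view-cone test as a stand-in for the paper's 2D projection test; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def cluster_gaussians(centers, n_clusters, n_iters=10, seed=0):
    """Offline step: group nearby Gaussian centers (plain k-means sketch)."""
    rng = np.random.default_rng(seed)
    centroids = centers[rng.choice(len(centers), n_clusters, replace=False)].copy()
    for _ in range(n_iters):
        # Assign each Gaussian to its nearest cluster centroid.
        d = np.linalg.norm(centers[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            pts = centers[labels == k]
            if len(pts):
                centroids[k] = pts.mean(axis=0)
    # Bounding-sphere radius per cluster, for a conservative visibility test.
    radii = np.array([
        np.linalg.norm(centers[labels == k] - centroids[k], axis=-1).max()
        if (labels == k).any() else 0.0
        for k in range(n_clusters)
    ])
    return labels, centroids, radii

def visible_mask(labels, centroids, radii, cam_pos, cam_dir, fov_cos):
    """Runtime step: keep only Gaussians whose cluster's bounding sphere
    intersects an approximate view cone; culled clusters never reach the
    per-Gaussian 2D projection."""
    v = centroids - cam_pos
    dist = np.maximum(np.linalg.norm(v, axis=-1), 1e-9)
    cos_angle = (v @ cam_dir) / dist
    # Pad the cone test by each cluster's radius so no visible Gaussian
    # is wrongly discarded (approximate, conservative).
    keep = cos_angle >= fov_cos - radii / dist
    return keep[labels]
```

Because the test is per cluster rather than per Gaussian, its cost is amortized over all members of a cluster, which is the source of the computation savings the paper reports.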
Related papers
- Speedy-Splat: Fast 3D Gaussian Splatting with Sparse Pixels and Sparse Primitives
3D Gaussian Splatting (3D-GS) is a recent 3D scene reconstruction technique that enables real-time rendering of novel views by modeling scenes as parametric point clouds of differentiable 3D Gaussians.
We identify and address two key inefficiencies in 3D-GS, achieving substantial improvements in rendering speed, model size, and training time.
Our Speedy-Splat approach combines these techniques to accelerate average rendering speed by a drastic $6.71\times$ across scenes from the Mip-NeRF 360, Tanks & Temples, and Deep Blending datasets with $10.6\times$ fewer primitives than 3D-GS.
arXiv Detail & Related papers (2024-11-30T20:25:56Z) - 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically meaningful radiance fields from multiview images.
3DCS achieves superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and Temples, and Deep Blending.
Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z) - PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios.
We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
arXiv Detail & Related papers (2024-06-14T17:53:55Z) - GSGAN: Adversarial Learning for Hierarchical Generation of 3D Gaussian Splats
In this paper, we exploit Gaussian as a 3D representation for 3D GANs by leveraging its efficient and explicit characteristics.
We introduce a generator architecture with a hierarchical multi-scale Gaussian representation that effectively regularizes the position and scale of generated Gaussians.
Experimental results demonstrate that our method achieves a significantly faster rendering speed (about $100\times$) compared to state-of-the-art 3D-consistent GANs.
arXiv Detail & Related papers (2024-06-05T05:52:20Z) - 3D-HGS: 3D Half-Gaussian Splatting
Photo-realistic 3D Reconstruction is a fundamental problem in 3D computer vision.
We propose to employ 3D Half-Gaussian (3D-HGS) kernels, which can be used as a plug-and-play kernel.
arXiv Detail & Related papers (2024-06-04T19:04:29Z) - F-3DGS: Factorized Coordinates and Representations for 3D Gaussian Splatting
We propose Factorized 3D Gaussian Splatting (F-3DGS) as an alternative to neural radiance field (NeRF) rendering methods.
F-3DGS achieves a significant reduction in storage costs while maintaining comparable quality in rendered images.
arXiv Detail & Related papers (2024-05-27T11:55:49Z) - Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting
Spec-Gaussian is an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
This improvement extends the applicability of 3D GS to handle intricate scenarios with specular and anisotropic surfaces.
arXiv Detail & Related papers (2024-02-24T17:22:15Z) - AGG: Amortized Generative 3D Gaussians for Single Image to 3D
We introduce an Amortized Generative 3D Gaussian framework (AGG) that instantly produces 3D Gaussians from a single image.
AGG decomposes the generation of 3D Gaussian locations and other appearance attributes for joint optimization.
We propose a cascaded pipeline that first generates a coarse representation of the 3D data and later upsamples it with a 3D Gaussian super-resolution module.
arXiv Detail & Related papers (2024-01-08T18:56:33Z) - Multi-Scale 3D Gaussian Splatting for Anti-Aliased Rendering
3D Gaussians have recently emerged as a highly efficient representation for 3D reconstruction and rendering.
Despite their high rendering quality and speed at high resolutions, both deteriorate drastically when rendered at lower resolutions or from far-away camera positions.
We propose a multi-scale 3D Gaussian splatting algorithm, which maintains Gaussians at different scales to represent the same scene.
Our algorithm can achieve a 13%-66% PSNR improvement and a 160%-2400% rendering-speed improvement at $4\times$-$128\times$ scale rendering on the Mip-NeRF360 dataset.
arXiv Detail & Related papers (2023-11-28T03:31:35Z) - Compact 3D Gaussian Representation for Radiance Field
We propose a learnable mask strategy to reduce the number of 3D Gaussian points without sacrificing performance.
We also propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
arXiv Detail & Related papers (2023-11-22T20:31:16Z)
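The learnable-mask pruning idea in the Compact 3D Gaussian Representation entry above can be sketched as follows. This is only an inference-time illustration under assumed names and a hypothetical threshold; in the actual method the mask would be optimized jointly with the scene during training.

```python
import numpy as np

def prune_with_learnable_mask(opacities, mask_logits, threshold=0.5):
    """Drop Gaussians whose learned gate falls below `threshold`.

    `mask_logits` stands in for a per-Gaussian learnable parameter;
    a sigmoid turns it into a soft keep/drop gate in [0, 1].
    The 0.5 threshold is an illustrative assumption.
    """
    gate = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid gate per Gaussian
    keep = gate >= threshold
    return opacities[keep], keep
```

The same boolean mask would be applied to every per-Gaussian attribute (position, covariance, color), so the pruned model shrinks without retraining the survivors from scratch.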
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.