Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis
- URL: http://arxiv.org/abs/2401.02436v2
- Date: Mon, 22 Jan 2024 10:08:28 GMT
- Title: Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis
- Authors: Simon Niedermayr, Josef Stumpfegger, Rüdiger Westermann
- Abstract summary: High-fidelity scene reconstruction with an optimized 3D Gaussian splat representation has been introduced for novel view synthesis from sparse image sets.
We propose a compressed 3D Gaussian splat representation that utilizes sensitivity-aware vector clustering with quantization-aware training to compress directional colors and Gaussian parameters.
- Score: 0.552480439325792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, high-fidelity scene reconstruction with an optimized 3D Gaussian
splat representation has been introduced for novel view synthesis from sparse
image sets. Making such representations suitable for applications like network
streaming and rendering on low-power devices requires significantly reduced
memory consumption as well as improved rendering efficiency. We propose a
compressed 3D Gaussian splat representation that utilizes sensitivity-aware
vector clustering with quantization-aware training to compress directional
colors and Gaussian parameters. The learned codebooks have low bitrates and
achieve a compression rate of up to $31\times$ on real-world scenes with only
minimal degradation of visual quality. We demonstrate that the compressed splat
representation can be efficiently rendered with hardware rasterization on
lightweight GPUs at up to $4\times$ higher framerates than reported via an
optimized GPU compute pipeline. Extensive experiments across multiple datasets
demonstrate the robustness and rendering speed of the proposed approach.
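The compression scheme in the abstract boils down to replacing per-Gaussian parameters with indices into small learned codebooks. The snippet below is a minimal sketch of that idea as a sensitivity-weighted k-means: parameter vectors that matter more for image quality pull the codewords harder. The sensitivity values, vector dimensionality, and codebook size are illustrative assumptions; the paper's actual sensitivity measure and quantization-aware fine-tuning are not reproduced here.

```python
import numpy as np

def sensitivity_weighted_codebook(params, sensitivity, k=256, iters=20, seed=0):
    """Cluster parameter vectors into a small codebook via weighted k-means.
    Each vector's influence on the centroids is scaled by its sensitivity,
    so Gaussians that matter most for image quality are approximated best.
    Returns the codebook and one index per input vector."""
    rng = np.random.default_rng(seed)
    n = params.shape[0]
    codebook = params[rng.choice(n, size=k, replace=False)].copy()
    for _ in range(iters):
        # squared Euclidean distances (n, k) without forming an (n, k, d) tensor
        d = ((params ** 2).sum(1, keepdims=True)
             - 2.0 * params @ codebook.T
             + (codebook ** 2).sum(1))
        idx = d.argmin(axis=1)
        for c in range(k):
            members = idx == c
            if members.any():
                w = sensitivity[members][:, None]
                codebook[c] = (w * params[members]).sum(0) / w.sum()
    return codebook, idx

# toy usage: compress 10k hypothetical SH-colour vectors to 256 codewords plus indices
params = np.random.randn(10_000, 48).astype(np.float32)
sens = np.abs(np.random.randn(10_000)).astype(np.float32)   # stand-in sensitivity values
codebook, idx = sensitivity_weighted_codebook(params, sens)
print(codebook.shape, idx.shape)                             # (256, 48) (10000,)
```

Storing one small index per Gaussian plus a few-hundred-entry codebook, instead of dozens of floats per Gaussian, is what makes large compression factors possible.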
Related papers
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multiview images.
3DCS achieves superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and Temples, and Deep Blending.
Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z)
- Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields [13.729716867839509]
We propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance.
In addition, we propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
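A minimal sketch of how such a learnable mask can be realized is shown below, using a straight-through estimator so the binarized mask stays differentiable; the class name, threshold, and sparsity penalty are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class MaskedGaussians(nn.Module):
    """Toy learnable pruning mask over N Gaussians: a soft mask is binarized
    with a straight-through estimator, so pruned Gaussians contribute nothing
    while gradients still reach the mask parameters."""
    def __init__(self, n_gaussians, threshold=0.01):
        super().__init__()
        self.opacity = nn.Parameter(torch.rand(n_gaussians))
        self.mask_logit = nn.Parameter(torch.zeros(n_gaussians))
        self.threshold = threshold

    def forward(self):
        soft = torch.sigmoid(self.mask_logit)
        hard = (soft > self.threshold).float()
        mask = hard + soft - soft.detach()        # straight-through estimator
        return self.opacity * mask, soft

model = MaskedGaussians(100_000)
opacity, soft = model()
# stand-in for a rendering loss, plus a sparsity term that pushes masks toward zero
loss = opacity.mean() + 0.01 * soft.mean()
loss.backward()
print(int((soft > model.threshold).sum()))        # number of Gaussians currently kept
```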
arXiv Detail & Related papers (2024-08-07T14:56:34Z)
- 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [50.36933474990516]
This work considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance ray tracing hardware.
To efficiently handle large numbers of semi-transparent particles, we describe a specialized algorithm which encapsulates particles with bounding meshes.
Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision.
arXiv Detail & Related papers (2024-07-09T17:59:30Z)
- Feature Splatting for Better Novel View Synthesis with Low Overlap [9.192660643226372]
3D Gaussian Splatting has emerged as a very promising scene representation, achieving state-of-the-art quality in novel view synthesis.
We propose to encode the color information of 3D Gaussians into per-Gaussian feature vectors, which we denote as Feature Splatting.
arXiv Detail & Related papers (2024-05-24T13:02:29Z)
- CompGS: Efficient 3D Scene Representation via Compressed Gaussian Splatting [68.94594215660473]
We propose an efficient 3D scene representation, named Compressed Gaussian Splatting (CompGS).
We exploit a small set of anchor primitives for prediction, allowing the majority of primitives to be encapsulated into highly compact residual forms.
Experimental results show that the proposed CompGS significantly outperforms existing methods, achieving superior compactness in 3D scene representation without compromising model accuracy and rendering quality.
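A toy sketch of the anchor-plus-residual idea described above: each primitive stores only a quantized difference to attributes predicted from its anchor, and small integer residuals compress far better than raw floats. The naive prediction rule, attribute dimensionality, and quantization step below are assumptions for illustration, not the CompGS codec.

```python
import numpy as np

def encode_residuals(attrs, anchor_idx, anchors, step=1.0 / 64):
    """Store each primitive as a quantized residual against its anchor's
    attributes (a deliberately naive prediction), instead of raw values."""
    residual = attrs - anchors[anchor_idx]
    return np.round(residual / step).astype(np.int16)        # small integers compress well

def decode_residuals(q, anchor_idx, anchors, step=1.0 / 64):
    return anchors[anchor_idx] + q.astype(np.float32) * step

# toy usage: 50k primitives, 512 anchors, 8-D attribute vectors
attrs = 0.1 * np.random.randn(50_000, 8).astype(np.float32)
anchors = 0.1 * np.random.randn(512, 8).astype(np.float32)
idx = np.random.randint(0, 512, size=50_000)
rec = decode_residuals(encode_residuals(attrs, idx, anchors), idx, anchors)
print(float(np.abs(rec - attrs).max()))                      # bounded by half the step size
```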
arXiv Detail & Related papers (2024-04-15T04:50:39Z)
- GaussianImage: 1000 FPS Image Representation and Compression by 2D Gaussian Splatting [27.33121386538575]
Implicit neural representations (INRs) recently achieved great success in image representation and compression, assuming sufficient GPU resources are available.
However, this requirement often hinders their use on low-end devices with limited memory.
We propose a groundbreaking paradigm of image representation and compression by 2D Gaussian Splatting, named GaussianImage.
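To make the 2D-Gaussian image representation concrete, the sketch below fits a handful of isotropic 2D Gaussians to a target image by gradient descent. It is a toy illustration under simplifying assumptions (isotropic splats, plain MSE loss, tiny resolution), not GaussianImage's accumulation and compression pipeline.

```python
import torch

def render_2d_gaussians(mu, scale, color, H=64, W=64):
    """Toy 2D Gaussian splatting: accumulate N isotropic Gaussians onto an
    H x W canvas. Real methods use anisotropic covariances and accumulation
    schemes tuned for speed; this only shows the basic representation."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                       # (H, W, 2)
    diff = grid[None] - mu[:, None, None, :]                   # (N, H, W, 2)
    w = torch.exp(-(diff ** 2).sum(-1) / (2 * scale[:, None, None] ** 2 + 1e-6))
    return torch.einsum("nhw,nc->hwc", w, color)               # (H, W, 3)

# fit a small set of Gaussians to a random target image by gradient descent
target = torch.rand(64, 64, 3)
mu = torch.rand(100, 2, requires_grad=True)
scale = torch.full((100,), 0.05, requires_grad=True)
color = torch.rand(100, 3, requires_grad=True)
opt = torch.optim.Adam([mu, scale, color], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((render_2d_gaussians(mu, scale, color) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(float(loss))   # reconstruction error after fitting
```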
arXiv Detail & Related papers (2024-03-13T14:02:54Z)
- GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
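For reference, the Generalized Exponential Function kernel that GES substitutes for the Gaussian can be written as exp(-(|x - mu| / alpha)^beta); the snippet below evaluates it and checks that beta = 2 recovers the Gaussian shape. The exact parameterization used in the paper may differ.

```python
import numpy as np

def gef(x, mu=0.0, alpha=1.0, beta=2.0):
    """Generalized Exponential Function kernel exp(-(|x - mu| / alpha)^beta).
    beta = 2 gives the familiar Gaussian falloff; smaller beta yields heavier
    tails, larger beta yields flatter tops with sharper edges."""
    return np.exp(-(np.abs(x - mu) / alpha) ** beta)

x = np.linspace(-3.0, 3.0, 7)
print(np.allclose(gef(x, beta=2.0), np.exp(-x ** 2)))   # True: Gaussian special case
```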
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
- Splatter Image: Ultra-Fast Single-View 3D Reconstruction [67.96212093828179]
Splatter Image is based on Gaussian Splatting, which allows fast and high-quality reconstruction of 3D scenes from multiple images.
We learn a neural network that, at test time, performs reconstruction in a feed-forward manner, at 38 FPS.
On several synthetic, real, multi-category and large-scale benchmark datasets, we achieve better results in terms of PSNR, LPIPS, and other metrics while training and evaluating much faster than prior works.
arXiv Detail & Related papers (2023-12-20T16:14:58Z)
- Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
- Compact 3D Gaussian Representation for Radiance Field [14.729871192785696]
We propose a learnable mask strategy to reduce the number of 3D Gaussian points without sacrificing performance.
We also propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
arXiv Detail & Related papers (2023-11-22T20:31:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.