L3GS: Layered 3D Gaussian Splats for Efficient 3D Scene Delivery
- URL: http://arxiv.org/abs/2504.05517v1
- Date: Mon, 07 Apr 2025 21:23:32 GMT
- Title: L3GS: Layered 3D Gaussian Splats for Efficient 3D Scene Delivery
- Authors: Yi-Zhen Tsai, Xuechen Zhang, Zheng Li, Jiasi Chen
- Abstract summary: 3D Gaussian splats (3DGS) offer the best of both worlds: high visual quality and rendering efficient enough for real-time frame rates. The goal of this work is to create an efficient 3D content delivery framework that allows users to view high-quality 3D scenes with 3DGS as the underlying data representation.
- Score: 10.900680596540377
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional 3D content representations include dense point clouds, which consume large amounts of data and hence network bandwidth, while newer representations such as neural radiance fields suffer from poor frame rates due to their non-standard volumetric rendering pipeline. 3D Gaussian splats (3DGS) can be seen as a generalization of point clouds that offers the best of both worlds: high visual quality and efficient rendering at real-time frame rates. However, delivering 3DGS scenes from a hosting server to client devices is still challenging due to high network data consumption (e.g., 1.5 GB for a single scene). The goal of this work is to create an efficient 3D content delivery framework that allows users to view high-quality 3D scenes with 3DGS as the underlying data representation. The main contributions of the paper are: (1) creating new layered 3DGS scenes for efficient delivery, (2) scheduling algorithms to choose which splats to download at what time, and (3) trace-driven experiments with users wearing virtual reality headsets to evaluate visual quality and latency. Our layered 3DGS delivery system, L3GS, demonstrates high visual quality, achieving 16.9% higher average SSIM than baselines, and also works with other compressed 3DGS representations.
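The abstract names two system pieces: layered 3DGS scenes and a scheduler that decides which splats to download when. The paper's actual algorithms are not reproduced on this page, so the following is a minimal, hypothetical sketch of one plausible policy: greedy selection by estimated quality gain per byte under a bandwidth budget. The names (`SplatLayer`, `schedule_downloads`) and all numbers are illustrative assumptions, not L3GS's published method.

```python
# Hypothetical greedy scheduler for layered 3DGS delivery.
# A sketch of the general idea only; L3GS's real scheduling
# algorithms are described in the paper, not reproduced here.
from dataclasses import dataclass

@dataclass
class SplatLayer:
    scene_object: str    # segmented object this layer refines (assumed)
    size_bytes: int      # network cost of downloading the layer
    quality_gain: float  # estimated SSIM improvement if rendered

def schedule_downloads(layers: list[SplatLayer], budget_bytes: int) -> list[SplatLayer]:
    """Greedily pick layers with the best quality gain per byte
    until the per-interval bandwidth budget is exhausted."""
    chosen, budget = [], budget_bytes
    for layer in sorted(layers, key=lambda l: l.quality_gain / l.size_bytes,
                        reverse=True):
        if layer.size_bytes <= budget:
            chosen.append(layer)
            budget -= layer.size_bytes
    return chosen

# Toy example: the cheap, high-value base layer is fetched first.
layers = [
    SplatLayer("statue", 40_000_000, 0.12),   # base layer
    SplatLayer("statue", 120_000_000, 0.04),  # enhancement layer
    SplatLayer("wall",   90_000_000, 0.02),
]
print([l.scene_object for l in schedule_downloads(layers, 150_000_000)])
```

In a real client, the quality-gain estimates would also depend on the user's viewpoint over time, which is what the paper's trace-driven evaluation with VR headset users measures.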
Related papers
- EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis [61.1662426227688]
Existing NeRF and 3DGS-based methods show promising results in achieving photorealistic renderings but require slow, per-scene optimization. We introduce EVolSplat, an efficient 3D Gaussian Splatting model for urban scenes that works in a feed-forward manner.
arXiv Detail & Related papers (2025-03-26T02:47:27Z)
- S3R-GS: Streamlining the Pipeline for Large-Scale Street Scene Reconstruction [58.37746062258149]
3D Gaussian Splatting (3DGS) has reshaped the field of 3D reconstruction, achieving impressive rendering quality and speed.
Existing methods suffer from rapidly escalating per-viewpoint reconstruction costs as scene size increases.
We propose S3R-GS, a 3DGS framework that Streamlines the pipeline for large-scale Street Scene Reconstruction.
arXiv Detail & Related papers (2025-03-11T09:37:13Z)
- UniGS: Unified Language-Image-3D Pretraining with Gaussian Splatting [68.37013525040891]
We propose UniGS, integrating 3D Gaussian Splatting (3DGS) into multi-modal pre-training to enhance the 3D representation. We demonstrate the effectiveness of UniGS in learning a more general and better-aligned multi-modal representation.
arXiv Detail & Related papers (2025-02-25T05:10:22Z)
- EGSRAL: An Enhanced 3D Gaussian Splatting based Renderer with Automated Labeling for Large-Scale Driving Scene [19.20846992699852]
We propose EGSRAL, a 3D GS-based method that relies solely on training images without extra annotations. EGSRAL enhances 3D GS's capability to model both dynamic objects and static backgrounds. We also propose a grouping strategy for vanilla 3D GS to address perspective issues in rendering large-scale, complex scenes.
arXiv Detail & Related papers (2024-12-20T04:21:54Z)
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multiview images.
3DCS achieves superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and Temples, and Deep Blending.
Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z)
- 3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors [13.191199172286508]
Novel-view synthesis aims to generate novel views of a scene from multiple input images or videos.
3DGS-Enhancer is a novel pipeline for enhancing the quality of 3DGS representations.
arXiv Detail & Related papers (2024-10-21T17:59:09Z)
- Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields [13.729716867839509]
We propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance.
In addition, we propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
arXiv Detail & Related papers (2024-08-07T14:56:34Z)
- WildGaussians: 3D Gaussian Splatting in the Wild [80.5209105383932]
We introduce WildGaussians, a novel approach to handle occlusions and appearance changes with 3DGS.
We demonstrate that WildGaussians matches the real-time rendering speed of 3DGS while surpassing both 3DGS and NeRF baselines in handling in-the-wild data.
arXiv Detail & Related papers (2024-07-11T12:41:32Z)
- LP-3DGS: Learning to Prune 3D Gaussian Splatting [71.97762528812187]
We propose learning-to-prune 3DGS, where a trainable binary mask applied to the importance score finds the optimal pruning ratio automatically (a minimal sketch of this masking idea appears after this list).
Experiments have shown that LP-3DGS consistently achieves a good balance between efficiency and quality.
arXiv Detail & Related papers (2024-05-29T05:58:34Z)
- Gaussian Splatting Decoder for 3D-aware Generative Adversarial Networks [10.207899254360374]
NeRF-based 3D-aware Generative Adversarial Networks (GANs) have shown very high rendering quality under large representational variety.
However, rendering with Neural Radiance Fields poses challenges for 3D applications.
We present a novel approach that combines the high rendering quality of NeRF-based 3D-aware GANs with the flexibility and computational advantages of 3DGS.
arXiv Detail & Related papers (2024-04-16T14:48:40Z)
- Compact 3D Gaussian Representation for Radiance Field [14.729871192785696]
We propose a learnable mask strategy to reduce the number of 3D Gaussian points without sacrificing performance.
We also propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
arXiv Detail & Related papers (2023-11-22T20:31:16Z)
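Several entries above (Compact 3D Gaussian Splatting, LP-3DGS, Compact 3D Gaussian Representation) share one mechanism: a trainable binary mask over the Gaussians that prunes splats while keeping training differentiable. Below is a minimal sketch of that idea using a straight-through estimator; the tensor shapes, the loss weight, and the use of opacity as the masked attribute are assumptions for illustration, not the exact formulation of any of these papers.

```python
# Sketch of a learnable binary pruning mask for 3D Gaussians.
# A straight-through estimator lets gradients flow through the
# hard (0/1) masking decision.
import torch

n_gaussians = 10_000
mask_logits = torch.zeros(n_gaussians, requires_grad=True)

def binary_mask(logits: torch.Tensor) -> torch.Tensor:
    soft = torch.sigmoid(logits)        # soft mask in (0, 1)
    hard = (soft > 0.5).float()         # hard binary decision
    return hard + soft - soft.detach()  # straight-through gradient trick

opacities = torch.rand(n_gaussians)     # stand-in per-splat attribute
masked_opacity = opacities * binary_mask(mask_logits)  # pruned splats -> 0

# A sparsity penalty pushes the mask toward dropping Gaussians; the
# rendering loss (omitted here) pushes back to preserve visual quality.
loss = 0.01 * torch.sigmoid(mask_logits).mean()
loss.backward()
```

At inference time, splats whose mask is 0 are removed outright, which is where the storage and rendering savings come from.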
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.