LapisGS: Layered Progressive 3D Gaussian Splatting for Adaptive Streaming
- URL: http://arxiv.org/abs/2408.14823v2
- Date: Mon, 10 Feb 2025 11:59:52 GMT
- Title: LapisGS: Layered Progressive 3D Gaussian Splatting for Adaptive Streaming
- Authors: Yuang Shi, Géraldine Morin, Simone Gasparini, Wei Tsang Ooi
- Abstract summary: XR requires efficient streaming of 3D online worlds, challenging current 3DGS representations to adapt to bandwidth-constrained environments. This paper proposes LapisGS, a layered 3DGS that supports adaptive streaming and progressive rendering.
- Score: 4.209963145038135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of Extended Reality (XR) requires efficient streaming of 3D online worlds, challenging current 3DGS representations to adapt to bandwidth-constrained environments. This paper proposes LapisGS, a layered 3DGS that supports adaptive streaming and progressive rendering. Our method constructs a layered structure for cumulative representation, incorporates dynamic opacity optimization to maintain visual fidelity, and utilizes occupancy maps to efficiently manage Gaussian splats. The proposed model offers a progressive representation supporting continuous rendering quality adapted for bandwidth-aware streaming. Extensive experiments validate the effectiveness of our approach in balancing visual fidelity with model compactness, achieving up to 50.71% improvement in SSIM and 286.53% improvement in LPIPS at 23% of the original model size, and demonstrate its potential for bandwidth-adapted 3D streaming and rendering applications.
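The layered design implies a simple client-side adaptation rule: because layers are cumulative, a client streams the base layer plus as many enhancement layers as the link budget allows. Below is a minimal Python sketch of that selection logic; the layer sizes, budget model, and function name are hypothetical illustrations, not LapisGS's actual format or algorithm.

```python
def select_layers(layer_sizes_bytes, bandwidth_bps, deadline_s):
    """Return how many cumulative layers fit within the bandwidth budget.

    Layers are cumulative: layer k only helps once layers 0..k-1 have
    arrived, so the client greedily streams a prefix of the layer list.
    """
    budget = bandwidth_bps / 8 * deadline_s   # bytes deliverable before the deadline
    sent, count = 0, 0
    for size in layer_sizes_bytes:
        if sent + size > budget:
            break
        sent += size
        count += 1
    return max(count, 1)  # always ship at least the base layer

# Example: a base layer plus three enhancement layers over a 20 Mbps link.
# Budget = 20e6 / 8 * 1.0 = 2.5 MB, so the base and first enhancement fit.
sizes = [1_500_000, 900_000, 700_000, 600_000]
print(select_layers(sizes, bandwidth_bps=20e6, deadline_s=1.0))  # -> 2
```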
Related papers
- 3DGabSplat: 3D Gabor Splatting for Frequency-adaptive Radiance Field Rendering [50.04967868036964]
3D Gaussian Splatting (3DGS) has enabled real-time rendering while maintaining high-fidelity novel view synthesis. We propose 3D Gabor Splatting (3DGabSplat), which incorporates a novel 3D Gabor-based primitive with multiple directional 3D frequency responses. We achieve a 1.35 dB PSNR gain over 3DGS with a simultaneously reduced number of primitives and memory consumption.
arXiv Detail & Related papers (2025-08-07T12:49:44Z)
- Duplex-GS: Proxy-Guided Weighted Blending for Real-Time Order-Independent Gaussian Splatting [37.17972426764452]
We propose a dual-hierarchy framework that integrates proxy Gaussian representations with order-independent rendering techniques. By seamlessly combining our framework with Order-Independent Transparency (OIT), we develop a physically inspired weighted sum rendering technique that simultaneously eliminates "popping" and "transparency" artifacts. Our results validate the advantages of the OIT rendering paradigm in Gaussian Splatting, achieving high-quality rendering with an impressive 1.5× to 4× speedup over existing OIT-based Gaussian Splatting approaches.
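For context on the weighted sum idea above: order-independent blending replaces sorted back-to-front alpha compositing with commutative accumulations, which is what removes sorting-induced popping. The sketch below follows the generic weighted blended OIT composite (in the spirit of McGuire and Bavoil), not Duplex-GS's exact formulation; the depth-based weights are left as an input.

```python
import numpy as np

def weighted_blend(colors, alphas, weights, background):
    """Composite one pixel's fragments with order-independent weighted sums.

    colors:     (N, 3) RGB of the fragments covering the pixel
    alphas:     (N,)   fragment opacities
    weights:    (N,)   per-fragment weights (e.g., decreasing with depth)
    background: (3,)   background color

    Both accumulations below are commutative, so the result is identical
    for any fragment order -- no sorting, hence no popping artifacts.
    """
    colors, alphas = np.asarray(colors, float), np.asarray(alphas, float)
    w = np.asarray(weights, float) * alphas

    accum = (colors * w[:, None]).sum(axis=0)   # weighted premultiplied color
    denom = max(w.sum(), 1e-6)                  # guard against fully transparent pixels
    transmittance = np.prod(1.0 - alphas)       # order-free total background visibility
    return accum / denom * (1.0 - transmittance) + np.asarray(background) * transmittance
```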
arXiv Detail & Related papers (2025-08-05T07:44:30Z)
- Enhanced Velocity Field Modeling for Gaussian Video Reconstruction [21.54297055995746]
High-fidelity 3D video reconstruction is essential for enabling real-time rendering of dynamic scenes with realistic motion in virtual and augmented reality (VR/AR). We propose a flow-empowered velocity field modeling scheme tailored for Gaussian video reconstruction, dubbed FlowGaussian-VR. It consists of two core components: a velocity field rendering (VFR) pipeline which enables optical flow-based optimization, and a flow-assisted adaptive densification (FAD) strategy that adjusts the number and size of Gaussians in dynamic regions.
arXiv Detail & Related papers (2025-07-31T16:26:22Z)
- Adaptive 3D Gaussian Splatting Video Streaming: Visual Saliency-Aware Tiling and Meta-Learning-Based Bitrate Adaptation [9.779419462403144]
3D Gaussian splatting (3DGS) video streaming has emerged as a research hotspot in both academia and industry. We propose an adaptive 3DGS tiling technique guided by saliency analysis, which integrates both spatial and temporal features. We also introduce a novel quality assessment framework for 3DGS video that jointly evaluates spatial-domain degradation in 3DGS representations during streaming and the quality of the resulting 2D rendered images.
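A common way to act on such per-tile saliency scores is to split the available bitrate across tiles in proportion to saliency, subject to a per-tile floor. The snippet below sketches that proportional allocation as a generic illustration; the paper's actual meta-learning-based bitrate adaptation is more sophisticated, and all names and numbers here are assumptions.

```python
def allocate_bitrate(saliency, total_kbps, floor_kbps=100.0):
    """Split a bitrate budget across tiles in proportion to saliency scores.

    saliency:   non-negative per-tile saliency scores
    total_kbps: overall budget from the bandwidth estimator
    floor_kbps: minimum per-tile rate so no tile is starved
    """
    n = len(saliency)
    spare = total_kbps - n * floor_kbps
    if spare <= 0:                # budget too small for floors: fall back to an even split
        return [total_kbps / n] * n
    total = sum(saliency) or 1.0  # avoid division by zero when all scores are 0
    return [floor_kbps + spare * s / total for s in saliency]

# Example: four tiles, the second one most salient, 4000 kbps budget.
print(allocate_bitrate([0.1, 0.6, 0.2, 0.1], total_kbps=4000))
# -> [460.0, 2260.0, 820.0, 460.0]
```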
arXiv Detail & Related papers (2025-07-19T03:00:36Z)
- Adaptive 3D Gaussian Splatting Video Streaming [28.283254336752602]
We introduce an innovative framework for 3DGS volumetric video streaming. By employing hybrid saliency tiling and differentiated quality modeling, we achieve efficient data compression and adaptation to bandwidth fluctuations. Our method demonstrated superiority over existing approaches in various aspects, including video quality, compression effectiveness, and transmission rate.
arXiv Detail & Related papers (2025-07-19T01:45:24Z)
- D-FCGS: Feedforward Compression of Dynamic Gaussian Splatting for Free-Viewpoint Videos [12.24209693552492]
Free-viewpoint video (FVV) enables immersive 3D experiences, but efficient compression of dynamic 3D representations remains a major challenge. This paper presents Feedforward Compression of Dynamic Gaussian Splatting (D-FCGS), a novel feedforward framework for compressing temporally correlated Gaussian point cloud sequences. Experiments show that it matches the rate-distortion performance of optimization-based methods, achieving over 40× compression in under 2 seconds.
arXiv Detail & Related papers (2025-07-08T10:39:32Z)
- EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis [61.1662426227688]
Existing NeRF and 3DGS-based methods show promising results in achieving photorealistic renderings but require slow, per-scene optimization.
We introduce EVolSplat, an efficient 3D Gaussian Splatting model for urban scenes that works in a feed-forward manner.
arXiv Detail & Related papers (2025-03-26T02:47:27Z)
- ALOcc: Adaptive Lifting-based 3D Semantic Occupancy and Cost Volume-based Flow Prediction [89.89610257714006]
Existing methods prioritize higher accuracy to cater to the demands of these tasks.
We introduce a series of targeted improvements for 3D semantic occupancy prediction and flow estimation.
Our architecture framework, named ALOcc, achieves an optimal tradeoff between speed and accuracy.
arXiv Detail & Related papers (2024-11-12T11:32:56Z)
- HiCoM: Hierarchical Coherent Motion for Streamable Dynamic Scene with 3D Gaussian Splatting [7.507657419706855]
This paper proposes an efficient framework, dubbed HiCoM, with three key components.
First, we construct a compact and robust initial 3DGS representation using a perturbation smoothing strategy.
Next, we introduce a Hierarchical Coherent Motion mechanism that leverages the inherent non-uniform distribution and local consistency of 3D Gaussians.
Experiments conducted on two widely used datasets show that our framework improves the learning efficiency of state-of-the-art methods by about 20%.
arXiv Detail & Related papers (2024-11-12T04:40:27Z)
- L3DG: Latent 3D Gaussian Diffusion [74.36431175937285]
L3DG is the first approach for generative 3D modeling of 3D Gaussians through a latent 3D Gaussian diffusion formulation.
We employ a sparse convolutional architecture to efficiently operate on room-scale scenes.
By leveraging the 3D Gaussian representation, the generated scenes can be rendered from arbitrary viewpoints in real-time.
arXiv Detail & Related papers (2024-10-17T13:19:32Z)
- Implicit Gaussian Splatting with Efficient Multi-Level Tri-Plane Representation [45.582869951581785]
Implicit Gaussian Splatting (IGS) is an innovative hybrid model that integrates explicit point clouds with implicit feature embeddings.
We introduce a level-based progressive training scheme, which incorporates explicit spatial regularization.
Our algorithm can deliver high-quality rendering using only a few MBs, effectively balancing storage efficiency and rendering fidelity.
arXiv Detail & Related papers (2024-08-19T14:34:17Z)
- 3DGStream: On-the-Fly Training of 3D Gaussians for Efficient Streaming of Photo-Realistic Free-Viewpoint Videos [10.323643152957114]
3DGStream is a method designed for efficient FVV streaming of real-world dynamic scenes.
Our method achieves fast on-the-fly per-frame reconstruction within 12 seconds and real-time rendering at 200 FPS.
arXiv Detail & Related papers (2024-03-03T08:42:40Z)
- RAVEN: Rethinking Adversarial Video Generation with Efficient Tri-plane Networks [93.18404922542702]
We present a novel video generative model designed to address long-term spatial and temporal dependencies.
Our approach incorporates a hybrid explicit-implicit tri-plane representation inspired by 3D-aware generative frameworks.
Our model synthesizes high-fidelity video clips at a resolution of 256×256 pixels, with durations extending to more than 5 seconds at a frame rate of 30 fps.
arXiv Detail & Related papers (2024-01-11T16:48:44Z)
- HiFi4G: High-Fidelity Human Performance Rendering via Compact Gaussian Splatting [48.59338619051709]
HiFi4G is an explicit and compact Gaussian-based approach for high-fidelity human performance rendering from dense footage.
It achieves a substantial compression rate of approximately 25×, with less than 2 MB of storage per frame.
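(For scale: at the quoted ~25× rate, under 2 MB per compressed frame corresponds to an uncompressed footprint of roughly 50 MB per frame.)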
arXiv Detail & Related papers (2023-12-06T12:36:53Z)
- DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation [55.661467968178066]
We propose DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality simultaneously.
Our key insight is to design a generative 3D Gaussian Splatting model with companioned mesh extraction and texture refinement in UV space.
In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks.
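Progressive densification here refers to the adaptive density control popularized by the original 3DGS optimizer: Gaussians whose accumulated view-space positional gradients are large get cloned (if small) or split (if large). The toy sketch below illustrates that control step; the thresholds, the 1.6 shrink factor, and the array layouts are illustrative assumptions, not DreamGaussian's actual values.

```python
import numpy as np

def densify(positions, scales, grad_accum, grad_thresh=2e-4, scale_thresh=0.01):
    """One adaptive-density step in the style of 3DGS optimizers.

    positions:  (N, 3) Gaussian centers
    scales:     (N, 3) per-axis extents
    grad_accum: (N,)   accumulated view-space positional gradient magnitudes
    Gaussians with large gradients are under-fitting their region: small
    ones are duplicated (the optimizer later pulls the copies apart), and
    large ones are replaced by two jittered, shrunken copies.
    """
    hot = grad_accum > grad_thresh
    small = hot & (scales.max(axis=1) <= scale_thresh)
    large = hot & (scales.max(axis=1) > scale_thresh)

    jitter = np.random.normal(scale=scales[large])  # offset proportional to size
    new_positions = np.concatenate([
        positions[~large],          # originals, minus the ones being split
        positions[small],           # clones of small high-gradient Gaussians
        positions[large] + jitter,  # two halves of each split Gaussian
        positions[large] - jitter,
    ])
    new_scales = np.concatenate([
        scales[~large], scales[small],
        scales[large] / 1.6, scales[large] / 1.6,
    ])
    return new_positions, new_scales
```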
arXiv Detail & Related papers (2023-09-28T17:55:05Z)
- StraIT: Non-autoregressive Generation with Stratified Image Transformer [63.158996766036736]
Stratified Image Transformer (StraIT) is a pure non-autoregressive (NAR) generative model.
Our experiments demonstrate that StraIT significantly improves NAR generation and outperforms existing DMs and AR methods.
arXiv Detail & Related papers (2023-03-01T18:59:33Z)