Efficient Gaussian Splatting for Monocular Dynamic Scene Rendering via Sparse Time-Variant Attribute Modeling
- URL: http://arxiv.org/abs/2502.20378v1
- Date: Thu, 27 Feb 2025 18:53:06 GMT
- Title: Efficient Gaussian Splatting for Monocular Dynamic Scene Rendering via Sparse Time-Variant Attribute Modeling
- Authors: Hanyang Kong, Xingyi Yang, Xinchao Wang
- Abstract summary: Deformable Gaussian Splatting has emerged as a robust solution to represent real-world dynamic scenes. Our approach formulates dynamic scenes using a sparse anchor-grid representation, with the motion flow of dense Gaussians calculated via a classical kernel representation. Experiments on two real-world datasets demonstrate that our EDGS significantly improves the rendering speed with superior rendering quality.
- Score: 64.84686527988809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rendering dynamic scenes from monocular videos is a crucial yet challenging task. Deformable Gaussian Splatting has recently emerged as a robust solution for representing real-world dynamic scenes. However, it often produces heavily redundant Gaussians in an attempt to fit every training view at every time step, which slows rendering. Additionally, the attributes of Gaussians in static areas are time-invariant, so deforming every Gaussian is unnecessary and can cause jittering in static regions. In practice, the primary bottleneck on rendering speed for dynamic scenes is the number of Gaussians. In response, we introduce Efficient Dynamic Gaussian Splatting (EDGS), which represents dynamic scenes via sparse time-variant attribute modeling. Our approach formulates dynamic scenes using a sparse anchor-grid representation, with the motion flow of dense Gaussians computed via a classical kernel representation. Furthermore, we propose an unsupervised strategy that efficiently filters out anchors corresponding to static areas; only anchors associated with deformable objects are fed into MLPs to query time-variant attributes. Experiments on two real-world datasets demonstrate that EDGS significantly improves rendering speed while achieving superior rendering quality compared to previous state-of-the-art methods.
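Read literally, the abstract suggests a concrete pipeline: keep a sparse grid of anchors, let an MLP predict time-variant offsets only for anchors judged dynamic, and spread that sparse motion onto the dense Gaussians with a classical kernel. The sketch below is one plausible reading of that recipe, not the authors' released code; the Gaussian RBF kernel choice, the zero-motion test in `filter_static`, and every name and hyperparameter (`SparseDeformField`, `sigma`, `tau`) are assumptions.

```python
# Minimal sketch of the EDGS recipe as described in the abstract. All names,
# the RBF kernel choice, and the static-anchor test are assumptions, not the
# authors' implementation.
import torch


def rbf_weights(points, anchors, sigma=0.1):
    """Classical kernel: normalized Gaussian RBF between dense Gaussian
    centers (N, 3) and sparse anchors (M, 3); returns (N, M) weights."""
    d2 = torch.cdist(points, anchors).pow(2)
    w = torch.exp(-d2 / (2.0 * sigma**2))
    return w / w.sum(dim=1, keepdim=True)


class SparseDeformField(torch.nn.Module):
    """Time-variant attributes are queried only for anchors flagged dynamic;
    static anchors contribute zero motion, which keeps static regions stable."""

    def __init__(self, num_anchors, hidden=64):
        super().__init__()
        self.anchors = torch.nn.Parameter(torch.rand(num_anchors, 3))
        self.register_buffer("dynamic_mask",
                             torch.ones(num_anchors, dtype=torch.bool))
        self.mlp = torch.nn.Sequential(       # (anchor_xyz, t) -> xyz offset
            torch.nn.Linear(3 + 1, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),
        )

    def _query(self, xyz, t):
        t_col = torch.full((xyz.shape[0], 1), float(t))
        return self.mlp(torch.cat([xyz, t_col], dim=1))

    @torch.no_grad()
    def filter_static(self, times, tau=1e-3):
        """Unsupervised filter: anchors whose predicted motion stays ~zero
        across sampled timestamps are marked static and skipped afterwards."""
        peak = torch.stack([self._query(self.anchors, t).norm(dim=1)
                            for t in times]).max(dim=0).values
        self.dynamic_mask = peak > tau

    def forward(self, gauss_xyz, t):
        offsets = torch.zeros_like(self.anchors)
        if self.dynamic_mask.any():           # query MLP for dynamic anchors only
            offsets[self.dynamic_mask] = self._query(
                self.anchors[self.dynamic_mask], t)
        w = rbf_weights(gauss_xyz, self.anchors)
        return gauss_xyz + w @ offsets        # dense motion flow via the kernel
```

In a real system the anchors would presumably be initialized from SfM points and the mask re-estimated periodically during training; the point of the sketch is only the control flow: MLP queries scale with the number of dynamic anchors, not with the number of dense Gaussians.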
Related papers
- Divide-and-Conquer: Dual-Hierarchical Optimization for Semantic 4D Gaussian Spatting [16.15871890842964]
We propose Dual-Hierarchical Optimization (DHO), which consists of Hierarchical Gaussian Flow and Hierarchical Gaussian Guidance.
Our method consistently outperforms the baselines on both synthetic and real-world datasets.
arXiv Detail & Related papers (2025-03-25T03:46:13Z) - 4D Gaussian Splatting with Scale-aware Residual Field and Adaptive Optimization for Real-time Rendering of Temporally Complex Dynamic Scenes [19.24815625343669]
SaRO-GS is a novel dynamic scene representation capable of achieving real-time rendering. To handle temporally complex dynamic scenes, we introduce a Scale-aware Residual Field. Our method has demonstrated state-of-the-art performance.
arXiv Detail & Related papers (2024-12-09T08:44:19Z) - UrbanGS: Semantic-Guided Gaussian Splatting for Urban Scene Reconstruction [86.4386398262018]
UrbanGS uses 2D semantic maps and an existing dynamic Gaussian approach to distinguish static objects from the scene.
For potentially dynamic objects, we aggregate temporal information using learnable time embeddings.
Our approach outperforms state-of-the-art methods in reconstruction quality and efficiency.
arXiv Detail & Related papers (2024-12-04T16:59:49Z) - DeSiRe-GS: 4D Street Gaussians for Static-Dynamic Decomposition and Surface Reconstruction for Urban Driving Scenes [71.61083731844282]
We present DeSiRe-GS, a self-supervised gaussian splatting representation.
It enables effective static-dynamic decomposition and high-fidelity surface reconstruction in complex driving scenarios.
arXiv Detail & Related papers (2024-11-18T05:49:16Z) - Fully Explicit Dynamic Gaussian Splatting [22.889981393105554]
3D Gaussian Splatting has shown fast and high-quality rendering results in static scenes by leveraging dense 3D priors and explicit representations.
We introduce a progressive training scheme and a point-backtracking technique that improves Ex4DGS's convergence.
Comprehensive experiments on various scenes demonstrate the state-of-the-art rendering quality from our method, achieving fast rendering of 62 fps on a single 2080Ti GPU.
arXiv Detail & Related papers (2024-10-21T04:25:43Z) - Dynamic Gaussian Marbles for Novel View Synthesis of Casual Monocular Videos [58.22272760132996]
We show that existing 4D Gaussian methods dramatically fail in this setup because the monocular setting is underconstrained.
We propose Dynamic Gaussian Marbles, which consist of three core modifications that target the difficulties of the monocular setting.
We evaluate on the Nvidia Dynamic Scenes dataset and the DyCheck iPhone dataset, and show that Gaussian Marbles significantly outperforms other Gaussian baselines in quality.
arXiv Detail & Related papers (2024-06-26T19:37:07Z) - Superpoint Gaussian Splatting for Real-Time High-Fidelity Dynamic Scene Reconstruction [10.208558194785017]
We propose a novel framework named Superpoint Gaussian Splatting (SP-GS).
Our framework first reconstructs the scene and then clusters Gaussians with similar properties into superpoints.
Empowered by these superpoints, our method manages to extend 3D Gaussian splatting to dynamic scenes with only a slight increase in computational expense.
arXiv Detail & Related papers (2024-06-06T02:32:41Z) - Dynamic 3D Gaussian Fields for Urban Areas [60.64840836584623]
We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas.
We propose 4DGF, a neural scene representation that scales to large-scale dynamic urban areas.
arXiv Detail & Related papers (2024-06-05T12:07:39Z) - MVSGaussian: Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo [54.00987996368157]
We present MVSGaussian, a new generalizable 3D Gaussian representation approach derived from Multi-View Stereo (MVS).
MVSGaussian achieves real-time rendering with better synthesis quality for each scene.
arXiv Detail & Related papers (2024-05-20T17:59:30Z) - GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis [16.733855781461802]
Implicit deformable representations commonly model motion with a canonical space and a time-dependent deformation field. GauFRe uses a forward-warping deformation to explicitly model non-rigid transformations of scene geometry. Experiments show our method achieves competitive results and higher efficiency than previous state-of-the-art NeRF- and Gaussian-based methods.
arXiv Detail & Related papers (2023-12-18T18:59:03Z) - Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene
Reconstruction [29.83056271799794]
Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction and rendering.
We propose a deformable 3D Gaussian Splatting method that reconstructs scenes using 3D Gaussians and learns them in canonical space.
Through a differential Gaussian rasterizer, the deformable 3D Gaussians achieve not only higher rendering quality but also real-time rendering speed.
arXiv Detail & Related papers (2023-09-22T16:04:02Z)