SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes
- URL: http://arxiv.org/abs/2312.14937v2
- Date: Fri, 29 Dec 2023 12:43:49 GMT
- Title: SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes
- Authors: Yi-Hua Huang and Yang-Tian Sun and Ziyi Yang and Xiaoyang Lyu and
Yan-Pei Cao and Xiaojuan Qi
- Abstract summary: Novel view synthesis for dynamic scenes is still a challenging problem in computer vision and graphics.
We propose a new representation that explicitly decomposes the motion and appearance of dynamic scenes into sparse control points and dense Gaussians.
Our method can enable user-controlled motion editing while retaining high-fidelity appearances.
- Score: 59.23385953161328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novel view synthesis for dynamic scenes is still a challenging problem in
computer vision and graphics. Recently, Gaussian splatting has emerged as a
robust technique to represent static scenes and enable high-quality and
real-time novel view synthesis. Building upon this technique, we propose a new
representation that explicitly decomposes the motion and appearance of dynamic
scenes into sparse control points and dense Gaussians, respectively. Our key
idea is to use sparse control points, significantly fewer in number than the
Gaussians, to learn compact 6 DoF transformation bases, which can be locally
interpolated through learned interpolation weights to yield the motion field of
3D Gaussians. We employ a deformation MLP to predict time-varying 6 DoF
transformations for each control point, which reduces learning complexity,
enhances learning ability, and facilitates obtaining temporally and spatially
coherent motion patterns. Then, we jointly learn the 3D Gaussians, the
canonical space locations of control points, and the deformation MLP to
reconstruct the appearance, geometry, and dynamics of 3D scenes. During
learning, the location and number of control points are adaptively adjusted to
accommodate varying motion complexities in different regions, and an
as-rigid-as-possible (ARAP) loss is developed to enforce the spatial
continuity and local rigidity of the learned motions. Finally, thanks to the
explicit sparse motion representation and its decomposition from appearance,
our method can enable user-controlled motion editing while retaining
high-fidelity appearances. Extensive experiments demonstrate that our approach
outperforms existing approaches on novel view synthesis with a high rendering
speed and enables novel appearance-preserving motion editing applications.
Project page: https://yihua7.github.io/SC-GS-web/
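To make the method description above concrete, here is a minimal PyTorch sketch of the two ingredients the abstract highlights: blending sparse control-point 6 DoF transforms into a dense motion field for the Gaussians, and an ARAP-style rigidity penalty on the control points. This is not the authors' code; the rotation and translation inputs stand in for the output of the time-conditioned deformation MLP, and the function names, the k-nearest-neighbour lookup, and the fixed RBF bandwidth `sigma` are illustrative assumptions (the paper learns its interpolation weights).

```python
# Illustrative sketch only -- not the SC-GS implementation.
import torch
import torch.nn.functional as F


def interpolate_gaussian_motion(gauss_xyz, ctrl_xyz, ctrl_rot, ctrl_trans,
                                k=4, sigma=0.1):
    """Blend the k nearest control-point transforms into per-Gaussian motion.

    gauss_xyz:  (N, 3) canonical Gaussian centers
    ctrl_xyz:   (K, 3) canonical control-point locations
    ctrl_rot:   (K, 3, 3) rotation of each time-varying 6 DoF transform
    ctrl_trans: (K, 3) translation of each transform
    Returns (N, 3) deformed Gaussian centers.
    """
    # k nearest control points for every Gaussian.
    dist = torch.cdist(gauss_xyz, ctrl_xyz)                      # (N, K)
    d_k, idx = dist.topk(k, dim=1, largest=False)                # (N, k)

    # RBF-style interpolation weights, normalized over the k neighbours
    # (stand-in for the learned weights in the paper).
    w = torch.softmax(-(d_k ** 2) / (2.0 * sigma ** 2), dim=1)   # (N, k)

    # Apply each neighbour's transform about its control point (LBS-style blend).
    p_k = ctrl_xyz[idx]                                          # (N, k, 3)
    R_k = ctrl_rot[idx]                                          # (N, k, 3, 3)
    t_k = ctrl_trans[idx]                                        # (N, k, 3)
    local = gauss_xyz.unsqueeze(1) - p_k                         # (N, k, 3)
    moved = torch.einsum('nkij,nkj->nki', R_k, local) + p_k + t_k
    return (w.unsqueeze(-1) * moved).sum(dim=1)                  # (N, 3)


def arap_loss(ctrl_xyz, ctrl_rot, ctrl_trans, k=6):
    """As-rigid-as-possible penalty: deformed edges should match rotated rest edges."""
    dist = torch.cdist(ctrl_xyz, ctrl_xyz)
    _, idx = dist.topk(k + 1, dim=1, largest=False)
    idx = idx[:, 1:]                                             # drop self-match
    rest_edge = ctrl_xyz[idx] - ctrl_xyz.unsqueeze(1)            # (K, k, 3)
    deformed = ctrl_xyz + ctrl_trans
    def_edge = deformed[idx] - deformed.unsqueeze(1)             # (K, k, 3)
    rotated = torch.einsum('kij,knj->kni', ctrl_rot, rest_edge)  # (K, k, 3)
    return F.mse_loss(def_edge, rotated)
```

In a training loop along the lines the abstract describes, the deformed centers would feed the Gaussian splatting renderer for a photometric loss, with `arap_loss` added as an auxiliary regularizer; because the control points are far fewer than the Gaussians, editing their transforms directly yields the appearance-preserving motion editing discussed above.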
Related papers
- Gaussian Splatting LK [0.11249583407496218]
This paper investigates the potential of regularizing the native warp field within the dynamic Gaussian Splatting framework.
We show that we can exploit knowledge innate to the forward warp field network to derive an analytical velocity field.
This derived Lucas-Kanade style analytical regularization enables our method to achieve superior performance in reconstructing highly dynamic scenes.
arXiv Detail & Related papers (2024-07-16T01:50:43Z)
- InfoGaussian: Structure-Aware Dynamic Gaussians through Lightweight Information Shaping [9.703830846219441]
We develop a technique that enforces movement resonance between correlated Gaussians in a motion network.
We develop an efficient contrastive training pipeline with lightweight optimization to shape the motion network.
The proposed technique is evaluated on challenging scenes and demonstrates significant performance improvement.
arXiv Detail & Related papers (2024-06-09T19:33:32Z)
- Dynamic 3D Gaussian Fields for Urban Areas [60.64840836584623]
We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas.
We propose 4DGF, a neural scene representation that scales to large-scale dynamic urban areas.
arXiv Detail & Related papers (2024-06-05T12:07:39Z)
- HUGS: Holistic Urban 3D Scene Understanding via Gaussian Splatting [53.6394928681237]
Holistic understanding of urban scenes based on RGB images is a challenging yet important problem.
Our main idea involves the joint optimization of geometry, appearance, semantics, and motion using a combination of static and dynamic 3D Gaussians.
Our approach offers the ability to render new viewpoints in real-time, yielding 2D and 3D semantic information with high accuracy.
arXiv Detail & Related papers (2024-03-19T13:39:05Z)
- Motion-aware 3D Gaussian Splatting for Efficient Dynamic Scene Reconstruction [89.53963284958037]
We propose a novel motion-aware enhancement framework for dynamic scene reconstruction.
Specifically, we first establish a correspondence between 3D Gaussian movements and pixel-level flow.
For the prevalent deformation-based paradigm that presents a harder optimization problem, a transient-aware deformation auxiliary module is proposed.
arXiv Detail & Related papers (2024-03-18T03:46:26Z)
- SWinGS: Sliding Windows for Dynamic 3D Gaussian Splatting [7.553079256251747]
We extend 3D Gaussian Splatting to reconstruct dynamic scenes.
We produce high-quality renderings of general dynamic scenes with competitive quantitative performance.
Our method can be viewed in real-time in our dynamic interactive viewer.
arXiv Detail & Related papers (2023-12-20T03:54:03Z)
- Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
- Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction [29.83056271799794]
Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction and rendering.
We propose a deformable 3D Gaussian Splatting method that reconstructs scenes using 3D Gaussians and learns them in canonical space.
Through a differential Gaussian rasterizer, the deformable 3D Gaussians not only achieve higher rendering quality but also real-time rendering speed.
arXiv Detail & Related papers (2023-09-22T16:04:02Z)
- Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis [58.5779956899918]
We present a method that simultaneously addresses the tasks of dynamic scene novel-view synthesis and six degree-of-freedom (6-DOF) tracking of all dense scene elements.
We follow an analysis-by-synthesis framework, inspired by recent work that models scenes as a collection of 3D Gaussians.
We demonstrate a large number of downstream applications enabled by our representation, including first-person view synthesis, dynamic compositional scene synthesis, and 4D video editing.
arXiv Detail & Related papers (2023-08-18T17:59:21Z)