D-MiSo: Editing Dynamic 3D Scenes using Multi-Gaussians Soup
- URL: http://arxiv.org/abs/2405.14276v2
- Date: Fri, 24 May 2024 12:46:19 GMT
- Title: D-MiSo: Editing Dynamic 3D Scenes using Multi-Gaussians Soup
- Authors: Joanna Waczyńska, Piotr Borycki, Joanna Kaleta, Sławomir Tadeja, Przemysław Spurek
- Abstract summary: We propose Dynamic Multi-Gaussian Soup (D-MiSo), which models a mesh-inspired representation of dynamic GS.
We also propose a strategy for linking parameterized Gaussian splats, forming a Triangle Soup with the estimated mesh.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the past years, we have observed an abundance of approaches for modeling dynamic 3D scenes using Gaussian Splatting (GS). Such solutions use GS to represent the scene's structure and a neural network to model its dynamics. These approaches allow fast rendering and the extraction of individual elements of a dynamic scene. However, modifying such objects over time is challenging. SC-GS (Sparse-Controlled Gaussian Splatting) enhanced with Deformed Control Points partially solves this issue. However, this approach requires selecting the elements that must be kept fixed, as well as the centroids that should be adjusted throughout editing, and it makes such edits hard to reproduce. To address this, we propose Dynamic Multi-Gaussian Soup (D-MiSo), which models a mesh-inspired representation of dynamic GS. Additionally, we propose a strategy for linking parameterized Gaussian splats, forming a Triangle Soup with the estimated mesh. Consequently, we can separately construct new trajectories for the 3D objects composing the scene. Thus, the scene's dynamics can be edited over time, or edited while partial dynamics are maintained.
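To make the Triangle Soup idea concrete, below is a minimal sketch of how a single flat Gaussian can be parameterized by a triangle face, assuming a GaMeS-style parameterization of the kind this line of work builds on; the function name and the epsilon flattening constant are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_from_triangle(v1, v2, v3, eps=1e-8):
    # Mean: the face centroid, so the splat sits on the triangle.
    mean = (v1 + v2 + v3) / 3.0

    # Orthonormal frame: face normal, centroid->v1 direction, and their
    # cross product (r1 is perpendicular to r2, so r3 is unit length).
    n = np.cross(v2 - v1, v3 - v1)
    r1 = n / np.linalg.norm(n)
    r2 = (v1 - mean) / np.linalg.norm(v1 - mean)
    r3 = np.cross(r1, r2)
    R = np.stack([r1, r2, r3], axis=1)  # columns are the Gaussian's axes

    # Scales: near-zero along the normal keeps the Gaussian flat on the
    # face; the in-plane scales follow the triangle's extent.
    s = np.array([eps,
                  np.linalg.norm(v1 - mean),
                  abs(np.dot(r3, v2 - mean))])

    cov = R @ np.diag(s ** 2) @ R.T     # Sigma = R S S^T R^T
    return mean, cov
```

Moving a triangle's vertices moves its Gaussian with it, which is what lets mesh-style edits drive the splats.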
Related papers
- RigGS: Rigging of 3D Gaussians for Modeling Articulated Objects in Videos
RigGS is a new paradigm that leverages 3D Gaussian representation and skeleton-based motion representation to model dynamic objects.
Our method can easily generate realistic new actions for objects and achieves high-quality rendering.
arXiv Detail & Related papers (2025-03-21T03:27:07Z)
- REdiSplats: Ray Tracing for Editable Gaussian Splatting
We introduce REdiSplats, which employs ray tracing and a mesh-based representation of flat 3D Gaussians.
In practice, we model the scene using flat Gaussian distributions parameterized by the mesh.
We can render our models using 3D tools such as Blender or Nvdiffrast, which opens the possibility of integrating them with all existing 3D graphics techniques.
arXiv Detail & Related papers (2025-03-15T22:42:21Z)
- Efficient Gaussian Splatting for Monocular Dynamic Scene Rendering via Sparse Time-Variant Attribute Modeling
Deformable Gaussian Splatting has emerged as a robust solution to represent real-world dynamic scenes.
Our approach formulates dynamic scenes using a sparse anchor-grid representation, with the motion flow of dense Gaussians calculated via a classical kernel representation.
Experiments on two real-world datasets demonstrate that our EDGS significantly improves the rendering speed with superior rendering quality.
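As a rough illustration of the anchor-grid idea, the sketch below interpolates each dense Gaussian's motion from the motion vectors stored on sparse anchors using a classical Gaussian RBF kernel; the kernel choice, bandwidth, and all names are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def dense_motion_from_anchors(gauss_xyz, anchor_xyz, anchor_flow, sigma=0.1):
    # Pairwise squared distances, shape (num_gaussians, num_anchors).
    d2 = ((gauss_xyz[:, None, :] - anchor_xyz[None, :, :]) ** 2).sum(-1)
    # RBF weights, normalized per Gaussian.
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # Each Gaussian's flow is a weighted average of the anchor flows.
    return w @ anchor_flow  # (num_gaussians, 3)
```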
arXiv Detail & Related papers (2025-02-27T18:53:06Z)
- UrbanGS: Semantic-Guided Gaussian Splatting for Urban Scene Reconstruction
UrbanGS uses 2D semantic maps and an existing dynamic Gaussian approach to separate static objects from the rest of the scene.
For potentially dynamic objects, we aggregate temporal information using learnable time embeddings.
Our approach outperforms state-of-the-art methods in reconstruction quality and efficiency.
arXiv Detail & Related papers (2024-12-04T16:59:49Z)
- Per-Gaussian Embedding-Based Deformation for Deformable 3D Gaussian Splatting
3D Gaussian Splatting (3DGS) provides fast and high-quality novel view synthesis.
It is a natural extension to deform a canonical 3DGS to multiple frames for representing a dynamic scene.
Previous works fail to accurately reconstruct complex dynamic scenes.
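A minimal sketch of the per-Gaussian-embedding idea named in the title: each Gaussian owns a learned code, and an MLP decodes (code, time) into offsets for the canonical position, rotation, and scale. Layer sizes and the output heads are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PerGaussianDeform(nn.Module):
    def __init__(self, num_gaussians, embed_dim=32, hidden=128):
        super().__init__()
        # One learnable embedding per Gaussian.
        self.embed = nn.Embedding(num_gaussians, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 4 + 3),  # d_xyz, d_quaternion, d_scale
        )

    def forward(self, ids, t):
        z = self.embed(ids)                            # (N, embed_dim)
        h = torch.cat([z, t.expand(len(ids), 1)], -1)  # append the timestamp
        return self.mlp(h).split([3, 4, 3], dim=-1)
```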
arXiv Detail & Related papers (2024-04-04T17:34:41Z)
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering
GauMesh bridges the 3D Gaussian and Mesh for modeling and rendering the dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- GaussianStyle: Gaussian Head Avatar via StyleGAN
We propose a novel framework that integrates the volumetric strengths of 3DGS with the powerful implicit representation of StyleGAN.
We show that our method achieves state-of-the-art performance in reenactment, novel view synthesis, and animation.
arXiv Detail & Related papers (2024-02-01T18:14:42Z)
- SWinGS: Sliding Windows for Dynamic 3D Gaussian Splatting
We extend 3D Gaussian Splatting to reconstruct dynamic scenes.
We produce high-quality renderings of general dynamic scenes with competitive quantitative performance.
Our method can be viewed in real-time in our dynamic interactive viewer.
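The sliding-window idea in the title can be sketched as a simple partition of the frame sequence into overlapping chunks, each of which would get its own set of Gaussians; the fixed window and overlap sizes below are placeholder assumptions.

```python
def sliding_windows(num_frames, window=16, overlap=4):
    # Overlapping (start, end) frame ranges covering the whole sequence.
    step = window - overlap
    return [(s, min(s + window, num_frames))
            for s in range(0, max(num_frames - overlap, 1), step)]

# e.g. 32 frames -> [(0, 16), (12, 28), (24, 32)]
print(sliding_windows(32))
```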
arXiv Detail & Related papers (2023-12-20T03:54:03Z)
- DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes
We present DrivingGaussian, an efficient and effective framework for surrounding dynamic autonomous driving scenes.
For complex scenes with moving objects, we first sequentially and progressively model the static background of the entire scene.
We then leverage a composite dynamic Gaussian graph to handle multiple moving objects.
We further use a LiDAR prior for Gaussian Splatting to reconstruct scenes with greater details and maintain panoramic consistency.
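A hypothetical sketch of how a composite dynamic Gaussian graph could be laid out as a data structure: a static background set plus per-object nodes whose local Gaussians are posed into world space at each timestamp. All names and the parameter layout are assumptions for illustration.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class DynamicNode:
    gaussians: np.ndarray                       # (N, D), xyz in columns 0..2
    poses: dict = field(default_factory=dict)   # timestamp -> 4x4 object pose

@dataclass
class CompositeScene:
    static_gaussians: np.ndarray                # background, modeled first
    nodes: list = field(default_factory=list)   # one node per moving object

    def centers_at(self, t):
        # Gather all Gaussian centers in world space at time t.
        parts = [self.static_gaussians[:, :3]]
        for node in self.nodes:
            T = node.poses[t]
            xyz = node.gaussians[:, :3]
            parts.append(xyz @ T[:3, :3].T + T[:3, 3])
        return np.concatenate(parts, axis=0)
```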
arXiv Detail & Related papers (2023-12-13T06:30:51Z)
- CoGS: Controllable Gaussian Splatting
Controllable Gaussian Splatting (CoGS) is a new method for capturing and re-animating 3D structures.
CoGS offers real-time control of dynamic scenes without the prerequisite of pre-computing control signals.
In our evaluations, CoGS consistently outperformed existing dynamic and controllable neural representations in terms of visual fidelity.
arXiv Detail & Related papers (2023-12-09T20:06:29Z)
- SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes
Novel view synthesis for dynamic scenes is still a challenging problem in computer vision and graphics.
We propose a new representation that explicitly decomposes the motion and appearance of dynamic scenes into sparse control points and dense Gaussians.
Our method can enable user-controlled motion editing while retaining high-fidelity appearances.
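A simplified sketch of the control-point mechanism: each dense Gaussian's new position is a kernel-weighted blend of the rigid transforms carried by its nearest control points. The k-nearest-neighbor RBF weighting below is an illustrative stand-in; SC-GS's exact blending may differ.

```python
import numpy as np

def blend_positions(xyz, ctrl_xyz, ctrl_R, ctrl_t, radius=0.05, k=4):
    d2 = ((xyz[:, None, :] - ctrl_xyz[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]        # k nearest control points
    out = np.zeros_like(xyz)
    for i, (p, idx) in enumerate(zip(xyz, knn)):
        w = np.exp(-d2[i, idx] / (2 * radius ** 2))
        w /= w.sum()                           # normalized blend weights
        # Position of p under each control point's rigid transform.
        moved = (np.einsum('kab,kb->ka', ctrl_R[idx], p - ctrl_xyz[idx])
                 + ctrl_xyz[idx] + ctrl_t[idx])
        out[i] = (w[:, None] * moved).sum(0)
    return out
```

Editing then amounts to moving a few control points; the dense Gaussians, and hence the appearance, follow through the blended transforms.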
arXiv Detail & Related papers (2023-12-04T11:57:14Z)
- Gaussian Grouping: Segment and Edit Anything in 3D Scenes
We propose Gaussian Grouping, which extends Gaussian Splatting to jointly reconstruct and segment anything in open-world 3D scenes.
Compared to the implicit NeRF representation, we show that the grouped 3D Gaussians can reconstruct, segment and edit anything in 3D with high visual quality, fine granularity and efficiency.
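As a small illustration of what grouping buys: if each Gaussian carries a learned identity feature, selecting everything belonging to one object for editing reduces to a per-Gaussian argmax. The feature layout is an assumption, not the paper's exact design.

```python
import numpy as np

def select_group(identity_feats, group_id):
    # identity_feats: (N, num_groups) learned per-Gaussian identity logits.
    labels = identity_feats.argmax(axis=1)     # hard group assignment
    return np.where(labels == group_id)[0]     # indices to edit/segment
```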
arXiv Detail & Related papers (2023-12-01T17:09:31Z)
- Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis
We present a method that simultaneously addresses the tasks of dynamic scene novel-view synthesis and six degree-of-freedom (6-DOF) tracking of all dense scene elements.
We follow an analysis-by-synthesis framework, inspired by recent work that models scenes as a collection of 3D Gaussians.
We demonstrate a large number of downstream applications enabled by our representation, including first-person view synthesis, dynamic compositional scene synthesis, and 4D video editing.
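A minimal sketch of a persistent-Gaussian layout consistent with this summary: appearance attributes are shared across time, while centers and rotations get one slot per timestep, so a dense 6-DOF track falls directly out of the representation. Field names and shapes are illustrative.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PersistentGaussians:
    colors:    np.ndarray  # (N, 3)    fixed over time
    opacities: np.ndarray  # (N,)      fixed over time
    scales:    np.ndarray  # (N, 3)    fixed over time
    means:     np.ndarray  # (T, N, 3) one center per timestep
    quats:     np.ndarray  # (T, N, 4) one rotation per timestep

    def track(self, i):
        # 6-DOF trajectory of Gaussian i across all timesteps.
        return self.means[:, i], self.quats[:, i]
```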
arXiv Detail & Related papers (2023-08-18T17:59:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site; its accuracy is not guaranteed.