Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction
- URL: http://arxiv.org/abs/2309.13101v2
- Date: Sun, 19 Nov 2023 14:50:08 GMT
- Title: Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction
- Authors: Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, Xiaogang Jin
- Abstract summary: Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction and rendering.
We propose a deformable 3D Gaussian Splatting method that reconstructs scenes using 3D Gaussians learned in canonical space.
Through a differentiable Gaussian rasterizer, the deformable 3D Gaussians achieve not only higher rendering quality but also real-time rendering speed.
- Score: 29.83056271799794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representation has paved the way for new approaches to
dynamic scene reconstruction and rendering. Nonetheless, cutting-edge dynamic
neural rendering methods rely heavily on these implicit representations, which
frequently struggle to capture the intricate details of objects in the scene.
Furthermore, implicit methods have difficulty achieving real-time rendering in
general dynamic scenes, limiting their use in a variety of tasks. To address
these issues, we propose a deformable 3D Gaussian Splatting method that
reconstructs scenes using 3D Gaussians learned in canonical space with
a deformation field to model monocular dynamic scenes. We also introduce an
annealing smoothing training mechanism with no extra overhead, which can
mitigate the impact of inaccurate poses on the smoothness of time interpolation
tasks on real-world datasets. Through a differentiable Gaussian rasterizer, the
deformable 3D Gaussians achieve not only higher rendering quality but also
real-time rendering speed. Experiments show that our method significantly
outperforms existing methods in both rendering quality and speed,
making it well-suited for tasks such as novel-view synthesis, time
interpolation, and real-time rendering.
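As a rough illustration of the approach described above, here is a minimal PyTorch sketch of a time-conditioned deformation field over canonical Gaussians, together with one plausible reading of the annealing smoothing mechanism (noise on the time input that decays over training). The network sizes, encodings, and noise schedule are assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs):
    """NeRF-style sinusoidal encoding of positions or timestamps."""
    freqs = (2.0 ** torch.arange(num_freqs, device=x.device)) * torch.pi
    angles = x[..., None] * freqs                        # (..., D, F)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

class DeformationField(nn.Module):
    """MLP mapping (canonical center, time) to offsets for each
    Gaussian's position, rotation (quaternion), and scale."""
    def __init__(self, pos_freqs=10, time_freqs=6, width=256):
        super().__init__()
        self.pos_freqs, self.time_freqs = pos_freqs, time_freqs
        in_dim = 3 * 2 * pos_freqs + 2 * time_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 3 + 4 + 3),                 # d_xyz | d_rot | d_scale
        )

    def forward(self, xyz, t):
        h = torch.cat([positional_encoding(xyz, self.pos_freqs),
                       positional_encoding(t, self.time_freqs)], dim=-1)
        return self.mlp(h).split([3, 4, 3], dim=-1)

def smoothed_time(t, step, total_steps, beta=0.1):
    """Annealing smoothing (one plausible reading): jitter the time input
    with noise that decays linearly to zero over training, so that early
    iterations share gradients across nearby frames and the effect of
    inaccurate poses is smoothed out."""
    decay = max(0.0, 1.0 - step / total_steps)
    return t + torch.randn_like(t) * beta * decay

field = DeformationField()
xyz = torch.rand(1000, 3)                  # canonical Gaussian centers
t = torch.full((1000, 1), 0.5)             # normalized timestamp
d_xyz, d_rot, d_scale = field(xyz, smoothed_time(t, step=2000, total_steps=10000))
```

At render time the predicted offsets would be added to the canonical Gaussian parameters before splatting through the rasterizer.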
Related papers
- SpectroMotion: Dynamic 3D Reconstruction of Specular Scenes [7.590932716513324]
We present SpectroMotion, a novel approach that combines 3D Gaussian Splatting (3DGS) with physically-based rendering (PBR) and deformation fields to reconstruct dynamic specular scenes.
arXiv Detail & Related papers (2024-10-22T17:59:56Z)
- Dynamic 3D Gaussian Fields for Urban Areas [60.64840836584623]
We present 4DGF, an efficient neural 3D scene representation for novel-view synthesis (NVS) that scales to large-scale, dynamic urban areas.
arXiv Detail & Related papers (2024-06-05T12:07:39Z)
- Gaussian Time Machine: A Real-Time Rendering Methodology for Time-Variant Appearances [10.614750331310804]
We present Gaussian Time Machine (GTM), which models the time-dependent attributes of Gaussian primitives with discrete time embedding vectors decoded by a lightweight Multi-Layer Perceptron (MLP).
GTM achieves state-of-the-art rendering fidelity on three datasets and renders 100 times faster than NeRF-based counterparts.
arXiv Detail & Related papers (2024-05-22T14:40:42Z)
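A minimal sketch of the time-embedding idea GTM describes: a learned embedding per discrete timestep, decoded by a lightweight MLP into per-Gaussian attribute changes. The table size, MLP shape, and the choice of which attributes are modulated are assumptions here.

```python
import torch
import torch.nn as nn

class TimeVariantAppearance(nn.Module):
    """Learned embedding per discrete timestep, decoded together with a
    per-Gaussian feature into time-varying attribute offsets."""
    def __init__(self, num_timesteps, num_gaussians, embed_dim=32, feat_dim=32):
        super().__init__()
        self.time_embed = nn.Embedding(num_timesteps, embed_dim)
        self.gauss_feat = nn.Parameter(torch.zeros(num_gaussians, feat_dim))
        self.decoder = nn.Sequential(                # the "lightweight MLP"
            nn.Linear(embed_dim + feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 3 + 1),                    # e.g. color shift + opacity shift
        )

    def forward(self, t_idx):
        emb = self.time_embed(t_idx).expand(self.gauss_feat.shape[0], -1)
        out = self.decoder(torch.cat([self.gauss_feat, emb], dim=-1))
        d_rgb, d_opacity = out.split([3, 1], dim=-1)
        return d_rgb, d_opacity

model = TimeVariantAppearance(num_timesteps=300, num_gaussians=100_000)
d_rgb, d_opacity = model(torch.tensor([42]))   # appearance offsets at frame 42
```

Because the embedding table is indexed per frame, no per-frame network evaluation beyond the small decoder is needed at render time, which is consistent with the claimed speedup.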
- BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting [8.380954205255104]
BAD-Gaussians is a novel approach for handling severely motion-blurred images with inaccurate camera poses.
Our method achieves superior rendering quality compared to previous state-of-the-art deblurring neural rendering methods.
arXiv Detail & Related papers (2024-03-18T14:43:04Z)
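The summary above does not spell out the mechanism, but bundle-adjusted deblurring is commonly modeled by averaging sharp renderings along a learnable camera trajectory within the exposure window and comparing the average to the blurry input. A hedged sketch of that idea follows; the renderer is a placeholder, and the linear pose interpolation is a simplification (real implementations interpolate on SE(3)).

```python
import torch

def render(gaussians, pose):
    """Placeholder for a differentiable Gaussian-splatting renderer."""
    raise NotImplementedError

def synthesize_blur(gaussians, pose_start, pose_end, num_samples=8):
    """Average renderings along an interpolated camera trajectory inside
    the exposure window. Because pose_start/pose_end are learnable, the
    photometric loss against the blurry image also refines the poses
    (the 'bundle adjustment' part)."""
    frames = []
    for s in torch.linspace(0.0, 1.0, num_samples):
        pose = (1.0 - s) * pose_start + s * pose_end   # naive interpolation
        frames.append(render(gaussians, pose))
    return torch.stack(frames).mean(dim=0)             # compare to blurry input
```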
- VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering of large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z)
- SWinGS: Sliding Windows for Dynamic 3D Gaussian Splatting [7.553079256251747]
We extend 3D Gaussian Splatting to reconstruct dynamic scenes.
We produce high-quality renderings of general dynamic scenes with competitive quantitative performance.
Our results can be viewed in real time in our dynamic interactive viewer.
arXiv Detail & Related papers (2023-12-20T03:54:03Z)
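The abstract is terse, but the title points at the mechanism: partition a long sequence into overlapping temporal windows, each fitted with its own set of dynamic Gaussians. A toy sketch of that windowing logic, with window length and stride as assumptions:

```python
def sliding_windows(num_frames, window=24, stride=12):
    """Split a frame sequence into overlapping windows; each window would
    be reconstructed with its own dynamic 3D Gaussian set."""
    starts = range(0, max(num_frames - window, 0) + 1, stride)
    return [(s, min(s + window, num_frames)) for s in starts]

print(sliding_windows(60))   # [(0, 24), (12, 36), (24, 48), (36, 60)]
```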
- SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes [59.23385953161328]
Novel view synthesis for dynamic scenes is still a challenging problem in computer vision and graphics.
We propose a new representation that explicitly decomposes the motion and appearance of dynamic scenes into sparse control points and dense Gaussians.
Our method can enable user-controlled motion editing while retaining high-fidelity appearances.
arXiv Detail & Related papers (2023-12-04T11:57:14Z)
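A minimal sketch of the decomposition SC-GS describes: motion lives on sparse control points, whose displacements are interpolated to the dense Gaussians. The k-nearest-neighbor blending with Gaussian RBF weights below is an assumption (the paper uses a richer LBS-style transform), as are all dimensions.

```python
import torch

def blend_motion(gauss_xyz, ctrl_xyz, ctrl_disp, k=4, sigma=0.1):
    """Drive dense Gaussians from sparse control-point displacements:
    each Gaussian blends the displacements of its k nearest control
    points with Gaussian RBF weights (an LBS-style interpolation)."""
    d2 = torch.cdist(gauss_xyz, ctrl_xyz) ** 2         # (N, M) squared dists
    d2_k, idx = d2.topk(k, dim=-1, largest=False)      # k nearest controls
    w = torch.exp(-d2_k / (2 * sigma ** 2))
    w = w / w.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    return (w[..., None] * ctrl_disp[idx]).sum(dim=-2)  # (N, 3) offsets

gauss_xyz = torch.rand(10_000, 3)        # dense Gaussian centers
ctrl_xyz = torch.rand(512, 3)            # sparse control points
ctrl_disp = torch.randn(512, 3) * 0.01   # their displacements at time t
offsets = blend_motion(gauss_xyz, ctrl_xyz, ctrl_disp)
```

Editing the handful of control points (rather than every Gaussian) is what makes user-controlled motion editing tractable while appearance stays on the dense Gaussians.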
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
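A toy sketch of the anchor idea in Scaffold-GS: each anchor point carries a feature vector and spawns k local Gaussians whose attributes are decoded from that feature (in the paper the decoders are also view-dependent). Dimensions and the decoder shape are assumptions; only position offsets are decoded here.

```python
import torch
import torch.nn as nn

class AnchorGaussians(nn.Module):
    """Each anchor point stores a feature vector and spawns k nearby
    Gaussians; a small MLP decodes the feature into k position offsets."""
    def __init__(self, num_anchors, k=10, feat_dim=32):
        super().__init__()
        self.anchor_xyz = nn.Parameter(torch.rand(num_anchors, 3))
        self.anchor_feat = nn.Parameter(torch.zeros(num_anchors, feat_dim))
        self.offset_mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, k * 3),
        )
        self.k = k

    def forward(self):
        offsets = self.offset_mlp(self.anchor_feat)      # (A, k*3)
        offsets = offsets.view(-1, self.k, 3)            # (A, k, 3)
        return self.anchor_xyz[:, None, :] + offsets     # spawned Gaussian centers

centers = AnchorGaussians(num_anchors=5000)()            # (5000, 10, 3)
```

Storing one feature per anchor instead of full parameters per Gaussian is what lets the method prune redundant Gaussians while keeping rendering quality.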
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder that jointly generates realistic images from synthetic 3D models and decomposes real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
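A schematic of the joint setup this abstract describes: an encoder factors an image into separate intrinsic shape and appearance codes, and a decoder renders an image back from them, so rendering and decomposition are trained together. Layer sizes, input resolution, and the plain reconstruction objective are assumptions.

```python
import torch
import torch.nn as nn

class IntrinsicAutoencoder(nn.Module):
    """Encode an image into separate shape and appearance codes, then
    decode (render) an image back from the two codes jointly."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 2 * code_dim),       # assumes 64x64 input
        )
        self.decoder = nn.Sequential(
            nn.Linear(2 * code_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):
        shape_code, appearance_code = self.encoder(img).chunk(2, dim=-1)
        recon = self.decoder(torch.cat([shape_code, appearance_code], dim=-1))
        return recon, shape_code, appearance_code

recon, s, a = IntrinsicAutoencoder()(torch.rand(1, 3, 64, 64))
```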
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.