DynMF: Neural Motion Factorization for Real-time Dynamic View Synthesis
with 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2312.00112v1
- Date: Thu, 30 Nov 2023 18:59:11 GMT
- Title: DynMF: Neural Motion Factorization for Real-time Dynamic View Synthesis
with 3D Gaussian Splatting
- Authors: Agelos Kratimenos and Jiahui Lei and Kostas Daniilidis
- Abstract summary: We argue that the per-point motions of a dynamic scene can be decomposed into a small set of explicit or learned trajectories.
Our representation is interpretable, efficient, and expressive enough to offer real-time view synthesis of complex dynamic scene motions.
- Score: 35.69069478773709
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately and efficiently modeling dynamic scenes and motions is a
challenging task due to temporal dynamics and motion complexity. To
address these challenges, we propose DynMF, a compact and efficient
representation that decomposes a dynamic scene into a few neural trajectories.
We argue that the per-point motions of a dynamic scene can be decomposed into a
small set of explicit or learned trajectories. Our carefully designed neural
framework, consisting of a tiny set of learned basis trajectories queried only
in time, allows for rendering speeds similar to 3D Gaussian Splatting,
surpassing 120 FPS, while requiring only double the storage of a static scene.
Our neural representation adequately constrains the inherently underconstrained
motion field of a dynamic scene, leading to effective and fast optimization.
This is done by binding each point to motion coefficients that enforce the
per-point sharing of basis trajectories. By carefully applying a sparsity loss
to the motion coefficients, we are able to disentangle the motions that
comprise the scene, independently control them, and generate novel motion
combinations that have never been seen before. We reach state-of-the-art
rendering quality within just 5 minutes of training, and in less than half an
hour we can synthesize novel views of dynamic scenes with superior photorealistic
quality. Our representation is interpretable, efficient, and expressive enough
to offer real-time view synthesis of complex dynamic scene motions, in
monocular and multi-view scenarios.
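To make the factorization described above concrete, the sketch below shows one way it can be realized: a tiny MLP queried only in time produces K shared basis trajectories, each point mixes them through a learnable coefficient vector, and an L1 penalty on those coefficients encourages sparsity. The names, network sizes, and loss weighting below are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the motion-factorization idea described in the abstract:
# each point's displacement is a weighted sum of K shared basis trajectories
# that are queried only in time. Architecture sizes, the L1 term, and all
# names below are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn

class MotionBasis(nn.Module):
    """Tiny MLP mapping a scalar time t to K basis displacements in R^3."""
    def __init__(self, num_basis: int = 10, hidden: int = 64):
        super().__init__()
        self.num_basis = num_basis
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_basis * 3),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (T, 1) -> (T, K, 3) basis trajectories
        return self.net(t).view(-1, self.num_basis, 3)

num_points, num_basis = 100_000, 10
basis = MotionBasis(num_basis)
# Per-point motion coefficients, shared across all timesteps: (N, K).
coeffs = nn.Parameter(torch.zeros(num_points, num_basis))
canonical_xyz = torch.randn(num_points, 3)          # e.g. Gaussian centers

t = torch.tensor([[0.25]])                          # query a single timestep
trajectories = basis(t)[0]                          # (K, 3)
displacement = coeffs @ trajectories                # (N, 3)
deformed_xyz = canonical_xyz + displacement         # points at time t

# Sparsity on the coefficients encourages each point to follow few bases,
# which is what allows motions to be disentangled and recombined.
sparsity_loss = coeffs.abs().mean()
```

In this reading, producing a frame only requires one query of the tiny basis network plus a single matrix product, which is consistent with the near-static storage footprint and the 120+ FPS rendering figure quoted in the abstract.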
Related papers
- Shape of Motion: 4D Reconstruction from a Single Video [51.04575075620677]
We introduce a method capable of reconstructing generic dynamic scenes, featuring explicit, full-sequence-long 3D motion.
We exploit the low-dimensional structure of 3D motion by representing scene motion with a compact set of SE3 motion bases (see the illustrative sketch after this list).
Our method achieves state-of-the-art performance for both long-range 3D/2D motion estimation and novel view synthesis on dynamic scenes.
arXiv Detail & Related papers (2024-07-18T17:59:08Z) - DEMOS: Dynamic Environment Motion Synthesis in 3D Scenes via Local
Spherical-BEV Perception [54.02566476357383]
We propose the first Dynamic Environment MOtion Synthesis framework (DEMOS) to predict future motion instantly according to the current scene.
We then use it to dynamically update the latent motion for final motion synthesis.
The results show that our method significantly outperforms previous works and handles dynamic environments well.
arXiv Detail & Related papers (2024-03-04T05:38:16Z) - Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis [58.5779956899918]
We present a method that simultaneously addresses the tasks of dynamic scene novel-view synthesis and six-degree-of-freedom (6-DOF) tracking of all dense scene elements.
We follow an analysis-by-synthesis framework, inspired by recent work that models scenes as a collection of 3D Gaussians.
We demonstrate a large number of downstream applications enabled by our representation, including first-person view synthesis, dynamic compositional scene synthesis, and 4D video editing.
arXiv Detail & Related papers (2023-08-18T17:59:21Z) - OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF, which efficiently trains and renders dynamic NeRFs on the fly and is capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up compared to the state of the art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z) - MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary
Monocular Cameras [98.40768911788854]
We introduce MoCo-Flow, a representation that models the dynamic scene using a 4D continuous time-variant function.
At the heart of our work lies a novel optimization formulation, which is constrained by a motion consensus regularization on the motion flow.
We extensively evaluate MoCo-Flow on several datasets that contain human motions of varying complexity.
arXiv Detail & Related papers (2021-06-08T16:03:50Z) - Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes [70.76742458931935]
We introduce a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion.
Our representation is optimized through a neural network to fit the observed input views.
We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion.
arXiv Detail & Related papers (2020-11-26T01:23:44Z)