DRSM: efficient neural 4d decomposition for dynamic reconstruction in
stationary monocular cameras
- URL: http://arxiv.org/abs/2402.00740v1
- Date: Thu, 1 Feb 2024 16:38:51 GMT
- Title: DRSM: efficient neural 4d decomposition for dynamic reconstruction in
stationary monocular cameras
- Authors: Weixing Xie, Xiao Dong, Yong Yang, Qiqin Lin, Jingze Chen, Junfeng
Yao, Xiaohu Guo
- Abstract summary: We present a novel framework to tackle the 4D decomposition problem for dynamic scenes captured by monocular cameras.
Our framework utilizes decomposed static and dynamic feature planes to represent 4D scenes and emphasizes the learning of dynamic regions through dense ray casting.
- Score: 21.07910546072467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the popularity of monocular videos generated by video sharing and live
broadcasting applications, reconstructing and editing dynamic scenes captured by
stationary monocular cameras has become a specialized but highly anticipated technology.
In contrast to scene reconstructions that exploit multi-view observations, the
problem of modeling a dynamic scene from a single view is significantly more
under-constrained and ill-posed. Inspired by recent progress in neural
rendering, we present a novel framework to tackle the 4D decomposition problem
for dynamic scenes captured by monocular cameras. Our framework utilizes decomposed static
and dynamic feature planes to represent 4D scenes and emphasizes the learning
of dynamic regions through dense ray casting. Inadequate 3D cues from a
single view and occlusions pose further challenges in scene
reconstruction. To overcome these difficulties, we propose deep supervised
optimization and ray casting strategies. With experiments on various videos,
our method generates higher-fidelity results than existing methods for
single-view dynamic scene representation.
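To make the core idea of the abstract concrete, here is a minimal sketch, assuming a K-Planes/HexPlane-style factorization, of how a 4D scene can be decomposed into static spatial feature planes and dynamic space-time feature planes and queried at ray samples. The plane layout (xy/xz/yz static, xt/yt/zt dynamic), the product-based fusion, and all sizes are illustrative assumptions, not DRSM's exact architecture.
```python
# Minimal sketch of a decomposed static/dynamic feature-plane representation
# for a 4D (x, y, z, t) scene. Plane layout, fusion rule, and sizes are
# illustrative assumptions, not the DRSM architecture.
import torch
import torch.nn.functional as F

C, R = 16, 64  # feature channels and plane resolution (assumed)

# Learnable 2D feature planes, each of shape [1, C, R, R].
static_planes  = {k: torch.randn(1, C, R, R, requires_grad=True) for k in ("xy", "xz", "yz")}
dynamic_planes = {k: torch.randn(1, C, R, R, requires_grad=True) for k in ("xt", "yt", "zt")}

def sample_plane(plane, u, v):
    """Bilinearly sample a [1, C, R, R] plane at P normalized coords u, v in [-1, 1]."""
    grid = torch.stack([u, v], dim=-1).view(1, 1, -1, 2)    # [1, 1, P, 2]
    feat = F.grid_sample(plane, grid, align_corners=True)   # [1, C, 1, P]
    return feat.squeeze(0).squeeze(1).t()                   # [P, C]

def query_features(x, y, z, t):
    """Fuse static (space-only) and dynamic (space-time) features for P ray samples."""
    f_static = (sample_plane(static_planes["xy"], x, y)
                * sample_plane(static_planes["xz"], x, z)
                * sample_plane(static_planes["yz"], y, z))
    f_dynamic = (sample_plane(dynamic_planes["xt"], x, t)
                 * sample_plane(dynamic_planes["yt"], y, t)
                 * sample_plane(dynamic_planes["zt"], z, t))
    return torch.cat([f_static, f_dynamic], dim=-1)         # [P, 2C]

# Example: features for 1024 ray samples at random 4D coordinates in [-1, 1].
x, y, z, t = (torch.rand(1024) * 2 - 1 for _ in range(4))
print(query_features(x, y, z, t).shape)                     # torch.Size([1024, 32])
```
In a full pipeline, a small MLP would decode these features into density and color, and, as the abstract suggests, rays intersecting dynamic regions would be sampled more densely than those covering the static background.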
Related papers
- 4D Gaussian Splatting: Modeling Dynamic Scenes with Native 4D Primitives [116.2042238179433]
In this paper, we frame dynamic scenes as unconstrained 4D volume learning problems.
We represent a target dynamic scene using a collection of 4D Gaussian primitives with explicit geometry and appearance features.
This approach can capture relevant information in space and time by fitting the underlying photorealistic spatio-temporal volume.
Notably, our 4DGS model is the first solution that supports real-time rendering of high-resolution, novel views for complex dynamic scenes.
arXiv Detail & Related papers (2024-12-30T05:30:26Z)
- Dyn-HaMR: Recovering 4D Interacting Hand Motion from a Dynamic Camera [49.82535393220003]
Dyn-HaMR is the first approach to reconstruct 4D global hand motion from monocular videos recorded by dynamic cameras in the wild.
We show that our approach significantly outperforms state-of-the-art methods in terms of 4D global mesh recovery.
This establishes a new benchmark for hand motion reconstruction from monocular video with moving cameras.
arXiv Detail & Related papers (2024-12-17T12:43:10Z)
- 4D Gaussian Splatting in the Wild with Uncertainty-Aware Regularization [43.81271239333774]
We propose a novel 4D Gaussian Splatting (4DGS) algorithm for dynamic scenes from casually recorded monocular videos.
Our experiments show that the proposed method improves the performance of 4DGS reconstruction from a video captured by a handheld monocular camera.
arXiv Detail & Related papers (2024-11-13T18:56:39Z)
- Shape of Motion: 4D Reconstruction from a Single Video [51.04575075620677]
We introduce a method capable of reconstructing generic dynamic scenes, featuring explicit, full-sequence-long 3D motion.
We exploit the low-dimensional structure of 3D motion by representing scene motion with a compact set of SE3 motion bases (see the sketch after this list).
Our method achieves state-of-the-art performance for both long-range 3D/2D motion estimation and novel view synthesis on dynamic scenes.
arXiv Detail & Related papers (2024-07-18T17:59:08Z)
- Modeling Ambient Scene Dynamics for Free-view Synthesis [31.233859111566613]
We introduce a novel method for dynamic free-view synthesis of ambient scenes from a monocular capture.
Our method builds upon the recent advancements in 3D Gaussian Splatting (3DGS) that can faithfully reconstruct complex static scenes.
arXiv Detail & Related papers (2024-06-13T17:59:11Z)
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model into a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z)
- Decoupling Dynamic Monocular Videos for Dynamic View Synthesis [50.93409250217699]
We tackle the challenge of dynamic view synthesis from dynamic monocular videos in an unsupervised fashion.
Specifically, we decouple the motion of the dynamic objects into object motion and camera motion, respectively regularized by the proposed unsupervised surface consistency and patch-based multi-view constraints.
arXiv Detail & Related papers (2023-04-04T11:25:44Z)
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z)
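For contrast with DRSM's plane-based decomposition, here is a minimal sketch of the SE3 motion-basis idea summarized in the Shape of Motion entry above: each dynamic point follows a per-point weighted blend of a small set of shared rigid-body (SE(3)) trajectories. The sizes, random initialization, and the simple blending rule (weighted rotation mean plus weighted translation mean via SciPy) are illustrative assumptions, not that paper's exact formulation.
```python
# Minimal sketch: per-point motion as a weighted blend of K shared SE(3) bases.
import numpy as np
from scipy.spatial.transform import Rotation as R

K, T, P = 8, 30, 100                       # motion bases, time steps, dynamic points (assumed)
rng = np.random.default_rng(0)

# Shared bases: one rotation (axis-angle) and one translation per basis and time step.
basis_rotvec = rng.normal(size=(K, T, 3)) * 0.1
basis_trans  = rng.normal(size=(K, T, 3)) * 0.1

# Per-point soft weights over the K bases (learned jointly in the real method).
logits  = rng.normal(size=(P, K))
weights = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

points0 = rng.normal(size=(P, 3))          # canonical (t = 0) positions of dynamic points

def deform(points0, weights, t):
    """Move canonical points to time step t by blending the K basis transforms per point."""
    rots_t = R.from_rotvec(basis_rotvec[:, t])           # the K basis rotations at time t
    moved = np.empty_like(points0)
    for p in range(len(points0)):
        rot_p   = rots_t.mean(weights=weights[p])        # weighted (chordal) rotation mean
        trans_p = weights[p] @ basis_trans[:, t]         # weighted translation, shape (3,)
        moved[p] = rot_p.apply(points0[p]) + trans_p
    return moved

print(deform(points0, weights, t=5).shape)               # (100, 3)
```
Keeping K small is what makes the motion representation compact: only the K basis trajectories and the per-point weights need to be optimized, rather than a free trajectory for every point.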
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.