Robust Dynamic Radiance Fields
- URL: http://arxiv.org/abs/2301.02239v2
- Date: Tue, 21 Mar 2023 07:57:46 GMT
- Title: Robust Dynamic Radiance Fields
- Authors: Yu-Lun Liu, Chen Gao, Andreas Meuleman, Hung-Yu Tseng, Ayush Saraf,
Changil Kim, Yung-Yu Chuang, Johannes Kopf, Jia-Bin Huang
- Abstract summary: Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene.
Existing methods, however, assume that accurate camera poses can be reliably estimated by Structure from Motion (SfM) algorithms.
We address this robustness issue by jointly estimating the static and dynamic radiance fields along with the camera parameters.
- Score: 79.43526586134163
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic radiance field reconstruction methods aim to model the time-varying
structure and appearance of a dynamic scene. Existing methods, however, assume
that accurate camera poses can be reliably estimated by Structure from Motion
(SfM) algorithms. These methods, thus, are unreliable as SfM algorithms often
fail or produce erroneous poses on challenging videos with highly dynamic
objects, poorly textured surfaces, and rotating camera motion. We address this
robustness issue by jointly estimating the static and dynamic radiance fields
along with the camera parameters (poses and focal length). We demonstrate the
robustness of our approach via extensive quantitative and qualitative
experiments. Our results show favorable performance over the state-of-the-art
dynamic view synthesis methods.
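Below is a minimal sketch of the core idea in the abstract, assuming a PyTorch setup: per-frame poses and a shared focal length are registered as learnable parameters next to the field weights, so the photometric loss backpropagates through ray generation into the camera parameters. All names are illustrative stand-ins, and the paper's separate static and dynamic fields (and proper volume rendering) are collapsed here into a single toy MLP.
```python
import torch
import torch.nn as nn

def axis_angle_to_matrix(r):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = r.norm() + 1e-8
    k = r / theta
    zero = torch.zeros((), device=r.device)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    return torch.eye(3, device=r.device) + theta.sin() * K + (1 - theta.cos()) * (K @ K)

class JointModel(nn.Module):
    """Camera parameters and a toy radiance field updated by one optimizer."""
    def __init__(self, num_frames, H, W):
        super().__init__()
        self.rot = nn.Parameter(torch.zeros(num_frames, 3))    # per-frame axis-angle
        self.trans = nn.Parameter(torch.zeros(num_frames, 3))  # per-frame translation
        self.log_focal = nn.Parameter(torch.zeros(()))         # shared focal length
        self.field = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                                   nn.Linear(64, 3))           # (x, y, z, t) -> RGB
        self.H, self.W = H, W

    def render(self, frame, px, t):
        """px: (N, 2) pixel coords in frame `frame`; returns (N, 3) colors."""
        f = self.log_focal.exp() * self.W
        dirs = torch.stack([(px[:, 0] - self.W / 2) / f,
                            (px[:, 1] - self.H / 2) / f,
                            -torch.ones(px.shape[0])], -1)
        R = axis_angle_to_matrix(self.rot[frame])
        o, d = self.trans[frame].expand_as(dirs), dirs @ R.T
        s = torch.linspace(2.0, 6.0, 32)                       # depth samples per ray
        pts = o[:, None] + d[:, None] * s[None, :, None]       # (N, 32, 3)
        rgb = self.field(torch.cat([pts, torch.full_like(pts[..., :1], t)], -1))
        return rgb.mean(1)                 # crude stand-in for volume rendering

model = JointModel(num_frames=30, H=240, W=320)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)            # one optimizer for all
px = torch.rand(1024, 2) * torch.tensor([320.0, 240.0])        # random training pixels
target = torch.rand(1024, 3)                                   # placeholder pixel colors
loss = ((model.render(0, px, t=0.0) - target) ** 2).mean()
loss.backward()                      # gradients reach poses, focal length, and field
opt.step()
```
In the actual method the static and dynamic fields are separate models trained with full volume rendering and auxiliary losses; the sketch only shows how the camera parameters can live in the same computation graph as the radiance field.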
Related papers
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) semantic embedding and introduce the concept of semantic gears to allow for stratified modeling of the dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video [18.424138608823267]
We propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur.
To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene.
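The DCT parameterization can be sketched in a few lines; this assumes a standard DCT-II basis and is an illustration of the idea, not DyBluRF's code. Compressing a trajectory over T frames into K learnable coefficients per axis keeps the recovered motion smooth and low-frequency.
```python
import torch

def dct_trajectory(coeffs, T):
    """coeffs: (K, 3) learnable DCT coefficients -> (T, 3) positions over T frames."""
    K = coeffs.shape[0]
    t = torch.arange(T, dtype=torch.float32)   # frame indices
    k = torch.arange(K, dtype=torch.float32)   # basis-function indices
    # DCT-II basis evaluated at every (frame, coefficient) pair: (T, K)
    basis = torch.cos(torch.pi * (2 * t[:, None] + 1) * k[None, :] / (2 * T))
    return basis @ coeffs

coeffs = torch.randn(6, 3, requires_grad=True)  # 6 coefficients per axis
traj = dct_trajectory(coeffs, T=30)             # smooth 30-frame 3D trajectory
```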
arXiv Detail & Related papers (2024-03-15T08:48:37Z)
- SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields [14.681688453270523]
We propose sequential motion understanding radiance fields (SMURF), a novel approach that employs a neural ordinary differential equation (Neural-ODE) to model continuous camera motion.
Our model, rigorously evaluated against benchmark datasets, demonstrates state-of-the-art performance both quantitatively and qualitatively.
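A minimal sketch of the Neural-ODE idea, using a 6-DoF pose state and a fixed-step Euler integrator as a stand-in for an adaptive solver; names and shapes here are assumptions, not SMURF's implementation.
```python
import torch
import torch.nn as nn

class PoseDynamics(nn.Module):
    """Learns dx/dt = f(t, x) for a 6-DoF pose state x = (axis-angle, translation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(7, 64), nn.Tanh(), nn.Linear(64, 6))

    def forward(self, t, x):
        return self.net(torch.cat([x, t.expand(1)], -1))

def integrate(f, x0, t0, t1, steps=16):
    """Explicit Euler from t0 to t1; differentiable w.r.t. f's parameters."""
    x, dt = x0, (t1 - t0) / steps
    for i in range(steps):
        x = x + dt * f(t0 + i * dt, x)
    return x

f = PoseDynamics()
x0 = torch.zeros(6)                              # pose at shutter open
t0, t1 = torch.tensor(0.0), torch.tensor(1.0)    # exposure window
# Poses at several instants inside the exposure; rendering each and averaging
# the results gives a differentiable model of the blurred observation.
poses = [integrate(f, x0, t0, t0 + a * (t1 - t0)) for a in (0.25, 0.5, 0.75, 1.0)]
```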
arXiv Detail & Related papers (2024-03-12T11:32:57Z)
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We then distill the knowledge from the finetuned model into a 4D representation encompassing both dynamic and static Neural Radiance Fields.
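Distillation of this kind is often implemented as a score-distillation-style loss; the sketch below is a generic illustration of that pattern, with a toy noise schedule and a hypothetical `diffusion_eps` callable standing in for the finetuned model, and should not be read as the paper's actual objective.
```python
import torch

def sds_loss(render, diffusion_eps, num_steps=1000):
    """render: (B, C, H, W) image from the 4D representation (requires grad)."""
    t = torch.randint(1, num_steps, (1,))
    alpha = 1.0 - t.float() / num_steps                    # toy noise schedule
    noise = torch.randn_like(render)
    noisy = alpha.sqrt() * render + (1 - alpha).sqrt() * noise
    with torch.no_grad():
        eps_hat = diffusion_eps(noisy, t)                  # frozen prior's prediction
    # The gradient w.r.t. `render` is (eps_hat - noise): the renderer is pushed
    # toward images whose noise the prior can predict well.
    return ((eps_hat - noise).detach() * render).sum()

render = torch.rand(1, 3, 64, 64, requires_grad=True)      # stand-in rendering
loss = sds_loss(render, lambda x, t: torch.randn_like(x))  # dummy prior, shapes only
loss.backward()
```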
arXiv Detail & Related papers (2024-01-10T23:26:41Z)
- Single-shot Tomography of Discrete Dynamic Objects [1.1407697960152927]
We present a novel method for the reconstruction of high-resolution temporal images in dynamic tomographic imaging.
The implications of this research extend to improved visualization and analysis of dynamic processes in tomographic imaging.
arXiv Detail & Related papers (2023-11-09T10:52:02Z)
- DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields [71.94156412354054]
We propose Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields (DynaMoN).
DynaMoN handles dynamic content during initial camera pose estimation and uses statics-focused ray sampling for fast and accurate novel-view synthesis (see the sketch below).
We extensively evaluate our approach on two real-world dynamic datasets, the TUM RGB-D dataset and the BONN RGB-D Dynamic dataset.
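One plausible reading of statics-focused ray sampling, sketched under the assumption that a per-pixel motion mask is available (the mask source and all names here are hypothetical): training rays are drawn only from pixels marked static, so moving objects cannot corrupt the photometric alignment.
```python
import torch

def sample_static_rays(motion_mask, num_rays):
    """motion_mask: (H, W) bool, True where a pixel is dynamic."""
    static = (~motion_mask).flatten().nonzero().squeeze(1)  # static pixel indices
    pick = static[torch.randint(len(static), (num_rays,))]  # sample with replacement
    H, W = motion_mask.shape
    return torch.stack([pick // W, pick % W], -1)           # (num_rays, 2) row/col

mask = torch.zeros(240, 320, dtype=torch.bool)
mask[100:150, 120:200] = True                               # a moving object
rays = sample_static_rays(mask, num_rays=1024)              # rays avoid the object
```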
arXiv Detail & Related papers (2023-09-16T08:46:59Z)
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories compared to existing state-of-the-art methods.
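The dense-trajectory front end can be illustrated by chaining pairwise flow fields into pixel tracks; this simplified sketch uses nearest-neighbour flow lookup and no occlusion handling, so it is an assumption-laden illustration rather than ParticleSfM's actual pipeline.
```python
import torch

def chain_flows(flows):
    """flows: list of (H, W, 2) forward optical-flow fields (frame i -> i+1).
    Returns (H*W, T+1, 2) trajectories seeded at every pixel of frame 0."""
    H, W, _ = flows[0].shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pts = torch.stack([xs, ys], -1).reshape(-1, 2).float()
    traj = [pts]
    for flow in flows:
        x = pts[:, 0].round().long().clamp(0, W - 1)   # nearest-neighbour lookup
        y = pts[:, 1].round().long().clamp(0, H - 1)
        pts = pts + flow[y, x]                         # advance every track one frame
        traj.append(pts)
    return torch.stack(traj, 1)

flows = [torch.zeros(240, 320, 2) for _ in range(9)]   # placeholder flow fields
tracks = chain_flows(flows)                            # (240*320, 10, 2) pixel tracks
```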
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Image Reconstruction of Static and Dynamic Scenes through Anisoplanatic Turbulence [1.6114012813668934]
We present a unified method for atmospheric turbulence mitigation in both static and dynamic sequences.
We are able to achieve better results compared to existing methods by utilizing a novel space-time non-local averaging method.
arXiv Detail & Related papers (2020-08-31T19:20:46Z)