DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video
- URL: http://arxiv.org/abs/2403.10103v2
- Date: Tue, 19 Mar 2024 08:56:44 GMT
- Title: DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video
- Authors: Huiqiang Sun, Xingyi Li, Liao Shen, Xinyi Ye, Ke Xian, Zhiguo Cao
- Abstract summary: We propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur.
To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene.
- Score: 18.424138608823267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in dynamic neural radiance field methods have yielded remarkable outcomes. However, these approaches rely on the assumption of sharp input images. When faced with motion blur, existing dynamic NeRF methods often struggle to generate high-quality novel views. In this paper, we propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur. To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene. Additionally, we employ a global cross-time rendering approach to ensure consistent temporal coherence across the entire scene. We curate a dataset comprising diverse dynamic scenes that are specifically tailored for our task. Experimental results on our dataset demonstrate that our method outperforms existing approaches in generating sharp novel views from motion-blurred inputs while maintaining spatial-temporal consistency of the scene.
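To make the trajectory idea concrete, here is a minimal sketch, assuming a cosine (DCT-style) basis, hypothetical coefficient shapes, and a stand-in render_fn; it illustrates the blur-formation idea of averaging sharp renders along the captured trajectories, not the authors' implementation.

```python
import torch

def dct_trajectory(coeffs, t):
    # coeffs: (K, 3) per-axis cosine-series coefficients (hypothetical shapes)
    # t: (T,) normalized times in [0, 1] -> returns (T, 3) positions
    k = torch.arange(coeffs.shape[0], dtype=t.dtype)
    basis = torch.cos(torch.pi * k[None, :] * t[:, None])   # (T, K) DCT-style basis
    return basis @ coeffs                                   # (T, 3)

def blurry_pixel(render_fn, cam_coeffs, obj_coeffs, t0, exposure, n_samples=8):
    # Average sharp renders at sub-frame times across the exposure window,
    # moving both the camera and the object along their DCT trajectories.
    ts = t0 + exposure * torch.linspace(0.0, 1.0, n_samples)
    cams = dct_trajectory(cam_coeffs, ts)      # (n, 3) camera positions
    objs = dct_trajectory(obj_coeffs, ts)      # (n, 3) object positions
    colors = torch.stack([render_fn(c, o) for c, o in zip(cams, objs)])
    return colors.mean(dim=0)                  # simulated motion-blurred color
```

Supervising such a simulated blurry color against the observed blurry pixel is what lets the underlying sharp radiance field be recovered.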
Related papers
- Dynamic Neural Radiance Field From Defocused Monocular Video [15.789775912053507]
We propose D2RF, the first dynamic NeRF method designed to restore sharp novel views from defocused monocular videos.
We introduce layered Depth-of-Field (DoF) volume rendering to model the defocus blur and reconstruct a sharp NeRF supervised by defocused views.
Our method outperforms existing approaches in synthesizing all-in-focus novel views from defocus blur while maintaining spatial-temporal consistency in the scene.
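As a toy illustration of layered DoF rendering (not the D2RF implementation), the sketch below blurs per-depth layers by a thin-lens circle of confusion and composites them back-to-front; the box blur, layer layout, and names are all assumptions.

```python
import torch
import torch.nn.functional as F

def coc_radius(depth, focus_depth, aperture):
    # Thin-lens circle of confusion (in pixels), up to a scale factor.
    return aperture * (depth - focus_depth).abs() / depth.clamp(min=1e-6)

def layered_dof(rgb, depth, alphas, focus_depth, aperture, max_kernel=9):
    # rgb: (L, 3, H, W) per-layer color, depth: (L,) layer depths,
    # alphas: (L, 1, H, W) per-layer opacity. Composite back-to-front,
    # blurring each layer by its depth-dependent CoC (box blur for simplicity).
    out = torch.zeros_like(rgb[0])
    for l in range(rgb.shape[0]):             # assume layer 0 is farthest
        r = coc_radius(depth[l], focus_depth, aperture)
        k = min(max_kernel, 2 * int(r.item()) + 1)
        blurred = F.avg_pool2d(rgb[l][None], k, stride=1, padding=k // 2)[0]
        a = F.avg_pool2d(alphas[l][None], k, stride=1, padding=k // 2)[0]
        out = blurred * a + out * (1 - a)     # over-composite nearer layers
    return out
```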
arXiv Detail & Related papers (2024-07-08T03:46:56Z)
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) embedding, based on semantic "gears" that allow for stratified modeling of the dynamic regions of the scene.
At the same time, almost for free, our tracking approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
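The "gear" mechanism is only summarized above; a loose sketch of the spirit of motion-aware sampling, with the sample-budget rule and all names assumed, might look like:

```python
import torch

def samples_per_ray(gear, base=32, boost=32):
    # gear: (N,) integer motion level per ray (0 = static ... G = fast-moving),
    # inferred from semantics in the paper; here it is just an input.
    return base + boost * gear

def stratified_ray_samples(near, far, n):
    # Standard stratified sampling of n depths along one ray: jitter one
    # sample uniformly inside each of n equal-width strata.
    edges = torch.linspace(0.0, 1.0, n + 1)
    u = edges[:-1] + torch.rand(n) * (1.0 / n)
    return near + (far - near) * u
```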
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- OmniLocalRF: Omnidirectional Local Radiance Fields from Dynamic Videos [14.965321452764355]
We introduce a new approach called Omnidirectional Local Radiance Fields (OmniLocalRF) that can render static-only scene views, removing and inpainting dynamic objects simultaneously.
Our approach combines the principles of local radiance fields with the bidirectional optimization of omnidirectional rays.
Our experiments validate that OmniLocalRF outperforms existing methods in both qualitative and quantitative metrics.
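The bidirectional optimization itself is not spelled out here, but the omnidirectional-ray setup it operates on is standard; below is a small sketch of mapping equirectangular pixels to unit ray directions, with the axis and angle conventions assumed.

```python
import numpy as np

def equirect_ray_dirs(width, height):
    # Map each pixel of an equirectangular (360-degree) frame to a unit ray
    # direction: u -> longitude in [-pi, pi), v -> latitude in [-pi/2, pi/2].
    u = (np.arange(width) + 0.5) / width
    v = (np.arange(height) + 0.5) / height
    lon = (u - 0.5) * 2.0 * np.pi
    lat = (0.5 - v) * np.pi
    lon, lat = np.meshgrid(lon, lat)             # (H, W)
    dirs = np.stack([np.cos(lat) * np.sin(lon),  # x
                     np.sin(lat),                # y (up)
                     np.cos(lat) * np.cos(lon)], # z
                    axis=-1)
    return dirs                                  # (H, W, 3), unit length
```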
arXiv Detail & Related papers (2024-03-31T12:55:05Z)
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model into a 4D representation encompassing both dynamic and static Neural Radiance Fields.
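The summary does not say how the distillation is performed; one common pattern for distilling a diffusion prior into a differentiable representation is a score-distillation-style gradient, sketched below with a toy noise schedule and a stand-in noise-prediction network diffusion_eps.

```python
import torch

def sds_grad(diffusion_eps, render, t, noise=None, guidance=100.0):
    # Score-distillation-style gradient: noise the rendered image, ask the
    # (finetuned) diffusion model to predict that noise, and push the render
    # toward what the prior expects. `diffusion_eps(x_t, t)` and the cosine
    # schedule are stand-ins, not any specific library's API.
    noise = torch.randn_like(render) if noise is None else noise
    alpha_bar = torch.cos(t * torch.pi / 2) ** 2        # toy noise schedule
    x_t = alpha_bar.sqrt() * render + (1 - alpha_bar).sqrt() * noise
    eps_pred = diffusion_eps(x_t, t)
    return guidance * (eps_pred - noise)                # gradient w.r.t. render
```

In use, such a gradient is injected via render.backward(gradient=...) rather than computed from an explicit scalar loss.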
arXiv Detail & Related papers (2024-01-10T23:26:41Z)
- CTNeRF: Cross-Time Transformer for Dynamic Neural Radiance Field from Monocular Video [25.551944406980297]
We propose a novel approach to generate high-quality novel views from monocular videos of complex and dynamic scenes.
We introduce a module that operates in both the time and frequency domains to aggregate the features of object motion.
Our experiments demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
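CTNeRF's actual module is transformer-based; the sketch below only illustrates the general idea of aggregating per-time features in both the time and frequency domains, with the fixed low-pass cutoff and shapes assumed.

```python
import torch

def time_freq_aggregate(feats):
    # feats: (T, N, C) per-time features for N ray samples. Aggregate over
    # time in both domains: a plain mean in the time domain, plus a fixed
    # low-pass in the frequency domain via an rFFT along the time axis.
    time_agg = feats.mean(dim=0)                        # (N, C)
    spec = torch.fft.rfft(feats, dim=0)                 # (T//2+1, N, C)
    k = max(1, spec.shape[0] // 2)
    spec[k:] = 0                                        # keep low frequencies
    freq_agg = torch.fft.irfft(spec, n=feats.shape[0], dim=0).mean(dim=0)
    return torch.cat([time_agg, freq_agg], dim=-1)      # (N, 2C)
```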
arXiv Detail & Related papers (2024-01-10T00:40:05Z)
- Forward Flow for Novel View Synthesis of Dynamic Scenes [97.97012116793964]
We propose a neural radiance field (NeRF) approach for novel view synthesis of dynamic scenes using forward warping.
Our method outperforms existing methods in both novel view rendering and motion modeling.
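A minimal sketch of forward warping, assuming a canonical space and a stand-in flow_mlp that predicts per-point offsets; the paper's actual parameterization may differ.

```python
import torch

def forward_warp(canonical_xyz, flow_mlp, t):
    # Forward warping: move canonical-space points to their positions at
    # time t using a predicted forward flow field, instead of backward
    # warping observation-space points into the canonical frame.
    t_col = t.expand(canonical_xyz.shape[0], 1)
    offset = flow_mlp(torch.cat([canonical_xyz, t_col], dim=-1))  # (N, 3)
    return canonical_xyz + offset
```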
arXiv Detail & Related papers (2023-09-29T16:51:06Z)
- Alignment-free HDR Deghosting with Semantics Consistent Transformer [76.91669741684173]
High dynamic range imaging aims to retrieve information from multiple low-dynamic range inputs to generate realistic output.
Existing methods often focus on the spatial misalignment across input frames caused by the foreground and/or camera motion.
We propose a novel alignment-free network, the Semantics Consistent Transformer (SCTNet), with both spatial and channel attention modules.
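SCTNet's exact blocks are not reproduced here; below is a generic channel-attention plus spatial-attention module (squeeze-and-excitation/CBAM style) that illustrates the two attention types the summary mentions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    # Generic channel attention followed by spatial attention -- illustrative
    # of the two module types, not the paper's exact architecture.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        x = x * self.channel(x)                # reweight channels
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(stats)         # reweight spatial locations
```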
arXiv Detail & Related papers (2023-05-29T15:03:23Z)
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
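DynIBaR's aggregation is motion-aware and learned; the sketch below shows only the basic image-based-rendering step it builds on: project sample points into nearby source views, bilinearly sample their features, and average. The shapes and the plain mean are assumptions.

```python
import torch
import torch.nn.functional as F

def aggregate_source_features(pts, feats, proj_mats):
    # pts: (N, 3) sample points, feats: (V, C, H, W) source-view feature
    # maps, proj_mats: (V, 3, 4) world-to-pixel projection matrices.
    N = pts.shape[0]
    homo = torch.cat([pts, torch.ones(N, 1)], dim=-1)         # (N, 4)
    sampled = []
    for P, fmap in zip(proj_mats, feats):
        uvw = homo @ P.T                                      # (N, 3)
        uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)         # pixel coords
        H, W = fmap.shape[-2:]
        grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,       # to [-1, 1]
                            2 * uv[:, 1] / (H - 1) - 1], -1)
        f = F.grid_sample(fmap[None], grid[None, :, None, :],
                          align_corners=True)                 # (1, C, N, 1)
        sampled.append(f[0, :, :, 0].T)                       # (N, C)
    return torch.stack(sampled).mean(dim=0)                   # (N, C)
```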
arXiv Detail & Related papers (2022-11-20T20:57:02Z)
- Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes [70.76742458931935]
We introduce a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion.
Our representation is optimized through a neural network to fit the observed input views.
We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion.
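As a minimal sketch of such a time-variant representation (positional encoding and the paper's real head layout omitted), an MLP mapping (x, t) to appearance, density, and forward/backward scene flow might look like:

```python
import torch
import torch.nn as nn

class SceneFlowField(nn.Module):
    # (x, t) -> color, density, and 3D scene flow to the adjacent times.
    # Hidden size and head layout are illustrative assumptions.
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rgb = nn.Linear(hidden, 3)        # appearance
        self.sigma = nn.Linear(hidden, 1)      # geometry (density)
        self.flow = nn.Linear(hidden, 6)       # motion to t-1 and t+1

    def forward(self, x, t):                   # x: (N, 3), t: (N, 1)
        h = self.trunk(torch.cat([x, t], dim=-1))
        return (torch.sigmoid(self.rgb(h)),
                torch.relu(self.sigma(h)),
                self.flow(h))
```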
arXiv Detail & Related papers (2020-11-26T01:23:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.