SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields
- URL: http://arxiv.org/abs/2403.07547v2
- Date: Thu, 15 May 2025 08:57:01 GMT
- Title: SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields
- Authors: Jungho Lee, Dogyoon Lee, Minhyeok Lee, Donghyung Kim, Sangyoun Lee
- Abstract summary: The presence of motion blur, resulting from slight camera movements during extended shutter exposures, poses a significant challenge. We propose sequential motion understanding radiance fields (SMURF), a novel approach that models continuous camera motion. Our model is evaluated against benchmark datasets and demonstrates state-of-the-art performance both quantitatively and qualitatively.
- Score: 13.684805723485157
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRF) have attracted considerable attention for their exceptional ability to synthesize novel views with high fidelity. However, the presence of motion blur, resulting from slight camera movements during extended shutter exposures, poses a significant challenge, potentially compromising the quality of the reconstructed 3D scenes. To effectively handle this issue, we propose sequential motion understanding radiance fields (SMURF), a novel approach that models continuous camera motion and leverages an explicit volumetric representation for robustness to motion-blurred input images. The core idea of SMURF is the continuous motion blurring kernel (CMBK), a module designed to model continuous camera movement for processing blurry inputs. Our model is evaluated against benchmark datasets and demonstrates state-of-the-art performance both quantitatively and qualitatively.
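The compositing step the abstract alludes to - rendering latent sharp views along a continuous camera trajectory within the exposure and averaging them to match the blurry observation - is the common blur formation model in deblurring radiance fields. Below is a minimal sketch of that step, not the authors' exact CMBK design; `render`, `trajectory`, and the loss name are hypothetical placeholders.

```python
import torch

def composite_blurry(render, trajectory, n_samples=8):
    """Approximate a blurry image as the average of latent sharp renderings
    taken at poses sampled along a continuous camera trajectory.

    render(pose)   -> (H, W, 3) image tensor   (hypothetical renderer)
    trajectory(t)  -> camera pose at normalized exposure time t in [0, 1]
    """
    ts = torch.linspace(0.0, 1.0, n_samples)                   # sample times within the exposure
    sharp = torch.stack([render(trajectory(t)) for t in ts])   # latent sharp images
    return sharp.mean(dim=0)                                    # blur = temporal average

def photometric_loss(render, trajectory, blurry_gt, n_samples=8):
    """Supervise the trajectory and the radiance field using only the blurry observation."""
    pred = composite_blurry(render, trajectory, n_samples)
    return torch.mean((pred - blurry_gt) ** 2)
```

In practice the trajectory would be a small learned module conditioned on the view, and the temporal average may be weighted; the sketch only shows the compositing and supervision structure under those assumptions.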
Related papers
- FreeDriveRF: Monocular RGB Dynamic NeRF without Poses for Autonomous Driving via Point-Level Dynamic-Static Decoupling [13.495102292705253]
FreeDriveRF reconstructs dynamic driving scenes using only sequential RGB images, without requiring pose inputs.
We introduce a warped ray-guided dynamic object rendering consistency loss, utilizing optical flow to better constrain the dynamic modeling process.
arXiv Detail & Related papers (2025-05-14T14:02:49Z) - MoBGS: Motion Deblurring Dynamic 3D Gaussian Splatting for Blurry Monocular Video [26.468480933928458]
MoBGS reconstructs sharp and high-quality views from blurry monocular videos in an end-to-end manner.
We introduce a Blur-adaptive Latent Camera Estimation (BLCE) method for effective latent camera trajectory estimation.
We also propose a physically inspired Latent Camera Exposure Estimation (LCEE) method to ensure consistent deblurring of both global camera and local object motion.
arXiv Detail & Related papers (2025-04-21T14:19:19Z) - LightMotion: A Light and Tuning-free Method for Simulating Camera Motion in Video Generation [56.64004196498026]
LightMotion is a light and tuning-free method for simulating camera motion in video generation.
Operating in the latent space, it eliminates additional fine-tuning, inpainting, and depth estimation.
arXiv Detail & Related papers (2025-03-09T08:28:40Z) - CoMoGaussian: Continuous Motion-Aware Gaussian Splatting from Motion-Blurred Images [19.08403715388913]
A critical issue is the camera motion blur caused by movement during exposure, which hinders accurate 3D scene reconstruction.
We propose CoMoGaussian, a Continuous Motion-Aware Gaussian Splatting framework that reconstructs precise 3D scenes from motion-blurred images.
arXiv Detail & Related papers (2025-03-07T11:18:43Z) - Dyn-HaMR: Recovering 4D Interacting Hand Motion from a Dynamic Camera [49.82535393220003]
Dyn-HaMR is the first approach to reconstruct 4D global hand motion from monocular videos recorded by dynamic cameras in the wild.
We show that our approach significantly outperforms state-of-the-art methods in terms of 4D global mesh recovery.
This establishes a new benchmark for hand motion reconstruction from monocular video with moving cameras.
arXiv Detail & Related papers (2024-12-17T12:43:10Z) - CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images [12.603775893040972]
We propose continuous rigid motion-aware Gaussian splatting (CRiM-GS) to reconstruct accurate 3D scenes from blurry images at real-time rendering speed.
We leverage rigid body transformations to model the camera motion with proper regularization, preserving the shape and size of the object.
Furthermore, we introduce a continuous deformable 3D transformation in the SE(3) field to adapt the rigid body transformation to real-world problems; a minimal sketch of such a continuous SE(3) camera trajectory appears after this list.
arXiv Detail & Related papers (2024-07-04T13:37:04Z) - Deblurring Neural Radiance Fields with Event-driven Bundle Adjustment [23.15130387716121]
We propose Bundle Adjustment for Deblurring Neural Radiance Fields (EBAD-NeRF) to jointly optimize the learnable poses and NeRF parameters.
EBAD-NeRF can obtain an accurate camera trajectory during the exposure time and learn sharper 3D representations compared to prior works.
arXiv Detail & Related papers (2024-06-20T14:33:51Z) - Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) embedding, based on semantic gears, to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z) - DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video [18.424138608823267]
We propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur.
To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene.
arXiv Detail & Related papers (2024-03-15T08:48:37Z) - Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model to a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z) - MoBluRF: Motion Deblurring Neural Radiance Fields for Blurry Monocular Video [25.964642223641057]
MoBluRF is a framework for synthesizing sharp spatio-temporal novel views from blurry monocular video.
In the BRI stage, we reconstruct dynamic 3D scenes and jointly initialize the base rays, which are used to predict latent sharp rays.
In the MDD stage, we introduce a novel Incremental Latent Sharp-rays Prediction (ILSP) approach for the blurry monocular video frames.
arXiv Detail & Related papers (2023-12-21T02:01:19Z) - DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields [71.94156412354054]
We propose Dynamic Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields (DynaMoN).
DynaMoN handles dynamic content for initial camera pose estimation and statics-focused ray sampling for fast and accurate novel-view synthesis.
We extensively evaluate our approach on two real-world dynamic datasets, the TUM RGB-D dataset and the BONN RGB-D Dynamic dataset.
arXiv Detail & Related papers (2023-09-16T08:46:59Z) - Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z) - Robust Dynamic Radiance Fields [79.43526586134163]
Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene.
Existing methods, however, assume that accurate camera poses can be reliably estimated by Structure from Motion (SfM) algorithms.
We address this robustness issue by jointly estimating the static and dynamic radiance fields along with the camera parameters.
arXiv Detail & Related papers (2023-01-05T18:59:51Z) - TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
arXiv Detail & Related papers (2021-09-30T17:12:59Z) - Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z)
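As referenced in the CRiM-GS entry above, continuous rigid camera motion within the exposure is commonly parameterized in SE(3). The sketch below interpolates a single twist (screw motion) over normalized exposure time via the se(3) exponential map; function names such as `se3_exp` and `pose_at` are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def hat(phi):
    """Skew-symmetric matrix of a 3-vector."""
    x, y, z = phi
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def se3_exp(xi):
    """Exponential map from se(3) (twist xi = [rho, phi]) to a 4x4 rigid transform."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    K = hat(phi)
    if theta < 1e-8:
        R = np.eye(3) + K               # first-order approximation for tiny rotations
        V = np.eye(3) + 0.5 * K
    else:
        R = (np.eye(3)
             + np.sin(theta) / theta * K
             + (1 - np.cos(theta)) / theta**2 * K @ K)
        V = (np.eye(3)
             + (1 - np.cos(theta)) / theta**2 * K
             + (theta - np.sin(theta)) / theta**3 * K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ rho
    return T

def pose_at(T0, xi, t):
    """Camera pose at normalized exposure time t in [0, 1]: start pose T0
    composed with a linearly scaled twist (constant-velocity screw motion)."""
    return T0 @ se3_exp(t * xi)
```

Sampling `pose_at(T0, xi, t)` at several values of t in [0, 1] yields the latent sharp camera poses whose renderings are averaged into the blurry observation, as in the compositing sketch after the abstract above.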