Image as an IMU: Estimating Camera Motion from a Single Motion-Blurred Image
- URL: http://arxiv.org/abs/2503.17358v3
- Date: Tue, 01 Apr 2025 09:58:06 GMT
- Title: Image as an IMU: Estimating Camera Motion from a Single Motion-Blurred Image
- Authors: Jerred Chen, Ronald Clark
- Abstract summary: We propose a novel framework that leverages motion blur as a rich cue for motion estimation. Our approach works by predicting a dense motion flow field and a monocular depth map directly from a single motion-blurred image. Our method produces an IMU-like measurement that robustly captures fast and aggressive camera movements.
- Score: 14.485182089870928
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In many robotics and VR/AR applications, fast camera motions produce severe motion blur, causing existing camera pose estimation methods to fail. In this work, we propose a novel framework that leverages motion blur as a rich cue for motion estimation rather than treating it as an unwanted artifact. Our approach works by predicting a dense motion flow field and a monocular depth map directly from a single motion-blurred image. We then recover the instantaneous camera velocity by solving a linear least squares problem under the small motion assumption. In essence, our method produces an IMU-like measurement that robustly captures fast and aggressive camera movements. To train our model, we construct a large-scale dataset with realistic synthetic motion blur derived from ScanNet++v2 and further refine our model by training end-to-end on real data using our fully differentiable pipeline. Extensive evaluations on real-world benchmarks demonstrate that our method achieves state-of-the-art angular and translational velocity estimates, outperforming current methods like MASt3R and COLMAP.
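The abstract outlines the core estimation step: given the predicted dense flow field and monocular depth map, the instantaneous camera velocity is recovered by linear least squares under the small-motion assumption. The sketch below illustrates that step using the classical instantaneous motion-field model (Longuet-Higgins and Prazdny); the function name `estimate_velocity`, the sign conventions, and the pixel stacking are illustrative assumptions, not the paper's actual (fully differentiable) implementation.

```python
import numpy as np

def estimate_velocity(flow, depth, K):
    """Recover instantaneous camera velocity from dense flow and depth by
    linear least squares under the small-motion assumption.

    flow  : (H, W, 2) per-pixel displacement over the exposure, in pixels
    depth : (H, W)    per-pixel depth
    K     : (3, 3)    camera intrinsics
    Returns (v, omega): translational and angular velocity, expressed per
    exposure interval (divide by the exposure time for physical rates).
    """
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    # Normalized image coordinates and normalized flow.
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    x = ((us - cx) / fx).ravel()
    y = ((vs - cy) / fy).ravel()
    u_dot = flow[..., 0].ravel() / fx
    v_dot = flow[..., 1].ravel() / fy
    Z = depth.ravel()

    # Keep only pixels with valid depth.
    ok = Z > 1e-6
    x, y, Z, u_dot, v_dot = x[ok], y[ok], Z[ok], u_dot[ok], v_dot[ok]
    invZ = 1.0 / Z
    n = x.size

    # Instantaneous motion-field model:
    #   u_dot = (-vx + x*vz)/Z + wx*x*y - wy*(1 + x^2) + wz*y
    #   v_dot = (-vy + y*vz)/Z + wx*(1 + y^2) - wy*x*y - wz*x
    A = np.zeros((2 * n, 6))
    b = np.empty(2 * n)
    A[0::2, 0] = -invZ
    A[0::2, 2] = x * invZ
    A[0::2, 3] = x * y
    A[0::2, 4] = -(1.0 + x * x)
    A[0::2, 5] = y
    A[1::2, 1] = -invZ
    A[1::2, 2] = y * invZ
    A[1::2, 3] = 1.0 + y * y
    A[1::2, 4] = -x * y
    A[1::2, 5] = -x
    b[0::2] = u_dot
    b[1::2] = v_dot

    # Stacked least squares over all valid pixels.
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]  # v (translational), omega (angular)
```

Dividing both outputs by the exposure time yields IMU-like angular and translational rates; note that with a monocular depth map the translational component is only recovered up to the depth scale unless the depth is metric.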
Related papers
- FRAME: Floor-aligned Representation for Avatar Motion from Egocentric Video [52.33896173943054]
Egocentric motion capture with a head-mounted body-facing stereo camera is crucial for VR and AR applications.
Existing methods rely on synthetic pretraining and struggle to generate smooth and accurate predictions in real-world settings.
We propose FRAME, a simple yet effective architecture that combines device pose and camera feeds for state-of-the-art body pose prediction.
arXiv Detail & Related papers (2025-03-29T14:26:06Z) - CoMoGaussian: Continuous Motion-Aware Gaussian Splatting from Motion-Blurred Images [19.08403715388913]
A critical issue is the camera motion blur caused by movement during exposure, which hinders accurate 3D scene reconstruction. We propose CoMoGaussian, a Continuous Motion-Aware Gaussian Splatting that reconstructs precise 3D scenes from motion-blurred images.
arXiv Detail & Related papers (2025-03-07T11:18:43Z) - Image Motion Blur Removal in the Temporal Dimension with Video Diffusion Models [3.052019331122618]
We propose a novel single-image deblurring approach that treats motion blur as a temporal averaging phenomenon.
Our core innovation lies in leveraging a pre-trained video diffusion transformer model to capture diverse motion dynamics.
Empirical results on synthetic and real-world datasets demonstrate that our method outperforms existing techniques in deblurring complex motion blur scenarios.
arXiv Detail & Related papers (2025-01-22T03:01:54Z) - CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion-Blurred Images [14.738528284246545]
CRiM-GS is a Continuous Rigid Motion-aware Gaussian Splatting framework. It reconstructs precise 3D scenes from motion-blurred images while maintaining real-time rendering speed.
arXiv Detail & Related papers (2024-07-04T13:37:04Z) - Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z) - Tracking Everything Everywhere All at Once [111.00807055441028]
We present a new test-time optimization method for estimating dense and long-range motion from a video sequence.
We propose a complete and globally consistent motion representation, dubbed OmniMotion.
Our approach outperforms prior state-of-the-art methods by a large margin both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-06-08T17:59:29Z) - Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z) - MBA-VO: Motion Blur Aware Visual Odometry [99.56896875807635]
Motion blur is one of the major challenges remaining for visual odometry methods.
In low-light conditions where longer exposure times are necessary, motion blur can appear even for relatively slow camera motions.
We present a novel hybrid visual odometry pipeline with a direct approach that explicitly models and estimates the camera's local trajectory within the exposure time.
arXiv Detail & Related papers (2021-03-25T09:02:56Z) - Single Image Non-uniform Blur Kernel Estimation via Adaptive Basis Decomposition [1.854931308524932]
We propose a general, non-parametric model for dense non-uniform motion blur estimation.
We show that our method overcomes the limitations of existing non-uniform motion blur estimation methods.
arXiv Detail & Related papers (2021-02-01T18:02:31Z) - Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z)