Recovering 3D Shapes from Ultra-Fast Motion-Blurred Images
- URL: http://arxiv.org/abs/2602.07860v1
- Date: Sun, 08 Feb 2026 08:17:35 GMT
- Title: Recovering 3D Shapes from Ultra-Fast Motion-Blurred Images
- Authors: Fei Yu, Shudan Guo, Shiqing Xin, Beibei Wang, Haisen Zhao, Wenzheng Chen
- Abstract summary: In this paper, we propose a novel inverse rendering approach for shape recovery from ultra-fast motion-blurred images. To address the bottleneck of repeatedly computing barycentric weights, we propose a fast barycentric coordinate solver, which significantly reduces computational overhead and achieves a speedup of up to 4.57x. Our method is fully differentiable, allowing gradients to propagate from rendered images to the underlying 3D shape, thereby facilitating shape recovery through inverse rendering.
- Score: 25.077820613486733
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We consider the problem of 3D shape recovery from ultra-fast motion-blurred images. While 3D reconstruction from static images has been extensively studied, recovering geometry from extreme motion-blurred images remains challenging. Such scenarios frequently occur in both natural and industrial settings, such as fast-moving objects in sports (e.g., balls) or rotating machinery, where rapid motion distorts object appearance and makes traditional 3D reconstruction techniques like Multi-View Stereo (MVS) ineffective. In this paper, we propose a novel inverse rendering approach for shape recovery from ultra-fast motion-blurred images. While conventional rendering techniques typically synthesize blur by averaging across multiple frames, we identify a major computational bottleneck in the repeated computation of barycentric weights. To address this, we propose a fast barycentric coordinate solver, which significantly reduces computational overhead and achieves a speedup of up to 4.57x, enabling efficient and photorealistic simulation of high-speed motion. Crucially, our method is fully differentiable, allowing gradients to propagate from rendered images to the underlying 3D shape, thereby facilitating shape recovery through inverse rendering. We validate our approach on two representative motion types: rapid translation and rotation. Experimental results demonstrate that our method enables efficient and realistic modeling of ultra-fast moving objects in the forward simulation. Moreover, it successfully recovers 3D shapes from 2D imagery of objects undergoing extreme translational and rotational motion, advancing the boundaries of vision-based 3D reconstruction. Project page: https://maxmilite.github.io/rec-from-ultrafast-blur/
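The paper's fast solver is not reproduced here, but the conventional pipeline it accelerates — synthesizing blur by averaging many sub-frame renders, each of which needs per-pixel barycentric weights — can be sketched as follows. The function names and the 2D NumPy setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p w.r.t. triangle (a, b, c).

    This is the standard closed-form solve that a conventional
    rasterizer repeats for every covered pixel of every sub-frame,
    which is the cost the paper's fast solver targets.
    """
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def blurred_image(render, t0, t1, n_samples=32):
    """Synthesize motion blur by averaging renders over the exposure.

    `render(t)` returns an H x W (x C) image of the scene posed at
    time t; the mean over uniformly sampled times approximates the
    integral of radiance over the shutter interval [t0, t1].
    """
    times = np.linspace(t0, t1, n_samples)
    return np.mean([render(t) for t in times], axis=0)
```

In a differentiable renderer, both steps would be expressed in an autodiff framework so gradients of the blurred image flow back to the vertex positions; this NumPy sketch only illustrates the forward computation.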
Related papers
- Layered Motion Fusion: Lifting Motion Segmentation to 3D in Egocentric Videos [71.24593306228145]
We propose to improve dynamic segmentation in 3D by fusing motion segmentation predictions from a 2D-based model into layered radiance fields. We address this issue through test-time refinement, which helps the model to focus on specific frames, thereby reducing the data complexity. This demonstrates that 3D techniques can enhance 2D analysis even for dynamic phenomena in a challenging and realistic setting.
arXiv Detail & Related papers (2025-06-05T19:46:48Z) - HORT: Monocular Hand-held Objects Reconstruction with Transformers [61.36376511119355]
Reconstructing hand-held objects in 3D from monocular images is a significant challenge in computer vision. We propose a transformer-based model to efficiently reconstruct dense 3D point clouds of hand-held objects. Our method achieves state-of-the-art accuracy with much faster inference speed, while generalizing well to in-the-wild images.
arXiv Detail & Related papers (2025-03-27T09:45:09Z) - CoMoGaussian: Continuous Motion-Aware Gaussian Splatting from Motion-Blurred Images [19.08403715388913]
3D Gaussian Splatting has gained significant attention due to its high-quality novel view rendering. A critical issue is the camera motion blur caused by movement during exposure, which hinders accurate 3D scene reconstruction. We propose CoMoGaussian, a Continuous Motion-Aware Gaussian Splatting method that reconstructs precise 3D scenes from motion-blurred images.
arXiv Detail & Related papers (2025-03-07T11:18:43Z) - Stereo4D: Learning How Things Move in 3D from Internet Stereo Videos [76.07894127235058]
We present a system for mining high-quality 4D reconstructions from internet stereoscopic, wide-angle videos. We use this method to generate large-scale data in the form of world-consistent, pseudo-metric 3D point clouds. We demonstrate the utility of this data by training a variant of DUSt3R to predict structure and 3D motion from real-world image pairs.
arXiv Detail & Related papers (2024-12-12T18:59:54Z) - ExFMan: Rendering 3D Dynamic Humans with Hybrid Monocular Blurry Frames and Events [7.820081911598502]
We propose ExFMan, the first neural rendering framework that renders high-quality humans in rapid motion with a hybrid frame-based RGB and bio-inspired event camera.
We first formulate a velocity field of the 3D body in the canonical space and render it to image space to identify the body parts with motion blur.
We then propose two novel losses, i.e., velocity-aware photometric loss and velocity-relative event loss, to optimize the neural human for both modalities.
arXiv Detail & Related papers (2024-09-21T10:58:01Z) - CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion-Blurred Images [14.738528284246545]
CRiM-GS is a Continuous Rigid Motion-aware Gaussian Splatting method. It reconstructs precise 3D scenes from motion-blurred images while maintaining real-time rendering speed.
arXiv Detail & Related papers (2024-07-04T13:37:04Z) - SpatialTracker: Tracking Any 2D Pixels in 3D Space [71.58016288648447]
We propose to estimate point trajectories in 3D space to mitigate the issues caused by image projection.
Our method, named SpatialTracker, lifts 2D pixels to 3D using monocular depth estimators.
Tracking in 3D allows us to leverage as-rigid-as-possible (ARAP) constraints while simultaneously learning a rigidity embedding that clusters pixels into different rigid parts.
arXiv Detail & Related papers (2024-04-05T17:59:25Z) - Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z) - Shape from Blur: Recovering Textured 3D Shape and Motion of Fast Moving Objects [115.71874459429381]
We address the novel task of jointly reconstructing the 3D shape, texture, and motion of an object from a single motion-blurred image.
While previous approaches address the deblurring problem only in the 2D image domain, our proposed rigorous modeling of all object properties in the 3D domain enables the correct description of arbitrary object motion.
arXiv Detail & Related papers (2021-06-16T13:18:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.