Depth-Aware Image Compositing Model for Parallax Camera Motion Blur
- URL: http://arxiv.org/abs/2303.09334v2
- Date: Thu, 30 Mar 2023 05:56:40 GMT
- Title: Depth-Aware Image Compositing Model for Parallax Camera Motion Blur
- Authors: German F. Torres, Joni-Kristian Kämäräinen
- Abstract summary: Camera motion introduces spatially varying blur due to the depth changes in the 3D world.
We present a simple, yet accurate, Image Compositing Blur (ICB) model for depth-dependent spatially varying blur.
- Score: 4.170640862518009
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Camera motion introduces spatially varying blur due to the depth changes in
the 3D world. This work investigates scene configurations where such blur is
produced under parallax camera motion. We present a simple, yet accurate, Image
Compositing Blur (ICB) model for depth-dependent spatially varying blur. The
(forward) model produces realistic motion blur from a single image, depth map,
and camera trajectory. Furthermore, we utilize the ICB model, combined with a
coordinate-based MLP, to learn a sharp neural representation from the blurred
input. Experimental results are reported for synthetic and real examples. The
results verify that the ICB forward model is computationally efficient and
produces realistic blur, despite the lack of occlusion information.
Additionally, our method for restoring a sharp representation proves to be a
competitive approach for the deblurring task.
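Since the forward model is only described in prose here, a minimal toy sketch of one way a depth-layered compositing blur can work may help. The function name `icb_forward`, the uniform depth binning, and the wrap-around pixel shifts are illustrative assumptions, not the paper's actual ICB implementation.

```python
import numpy as np

def icb_forward(image, depth, offsets, n_layers=8):
    """Toy depth-layered compositing blur (hypothetical sketch, not the paper's code).

    image   : (H, W) array in [0, 1]
    depth   : (H, W) depth map
    offsets : list of (dx, dy) lateral camera translations sampled along the trajectory
    """
    edges = np.linspace(depth.min(), depth.max() + 1e-6, n_layers + 1)
    out = np.zeros_like(image)
    # Composite back-to-front so nearer (more blurred) layers cover farther ones.
    for i in reversed(range(n_layers)):
        mask = ((depth >= edges[i]) & (depth < edges[i + 1])).astype(float)
        if mask.sum() == 0:
            continue
        z = max(float(depth[mask > 0].mean()), 1e-6)
        color = np.zeros_like(image)
        alpha = np.zeros_like(image)
        for dx, dy in offsets:
            # Parallax: the image-space shift is inversely proportional to depth.
            sx, sy = int(round(dx / z)), int(round(dy / z))
            # np.roll wraps at the borders -- a simplification for this sketch.
            color += np.roll(np.roll(mask * image, sy, axis=0), sx, axis=1)
            alpha += np.roll(np.roll(mask, sy, axis=0), sx, axis=1)
        color /= len(offsets)
        alpha /= len(offsets)
        out = color + (1.0 - alpha) * out  # "over" compositing of the blurred layer
    return out
```

With a single zero offset the sketch reduces to an identity: each layer is composited back unshifted, so the input image is reproduced.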
Related papers
- MBA-SLAM: Motion Blur Aware Dense Visual SLAM with Radiance Fields Representation [15.752529196306648]
We propose a dense visual SLAM pipeline (i.e. MBA-SLAM) to handle severe motion-blurred inputs.
Our approach integrates an efficient motion blur-aware tracker with either neural fields or Gaussian Splatting based mapper.
We show that MBA-SLAM surpasses previous state-of-the-art methods in both camera localization and map reconstruction.
arXiv Detail & Related papers (2024-11-13T01:38:06Z) - GS-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring [50.72230109855628]
We propose GS-Blur, a dataset of synthesized realistic blurry images created using a novel approach.
We first reconstruct 3D scenes from multi-view images using 3D Gaussian Splatting (3DGS), then render blurry images by moving the camera view along the randomly generated motion trajectories.
By adopting various camera trajectories in reconstructing our GS-Blur, our dataset contains realistic and diverse types of blur, offering a large-scale dataset that generalizes well to real-world blur.
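The core GS-Blur synthesis step, rendering along a trajectory and averaging the frames, can be sketched as follows; `render` is a stand-in for the 3DGS renderer, which is not reproduced here.

```python
import numpy as np

def blur_from_trajectory(render, poses):
    # Average renderings taken along a camera trajectory to synthesize motion blur.
    # `render` stands in for the 3DGS renderer described in the paper.
    frames = [render(pose) for pose in poses]
    return np.mean(frames, axis=0)
```

In practice the poses would be sampled densely along a randomly generated trajectory, which is what gives the dataset its diverse blur types.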
arXiv Detail & Related papers (2024-10-31T06:17:16Z) - CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images [12.603775893040972]
We propose continuous rigid motion-aware Gaussian splatting (CRiM-GS) to reconstruct an accurate 3D scene from blurry images at real-time rendering speed.
We leverage rigid body transformations to model the camera motion with proper regularization, preserving the shape and size of the object.
Furthermore, we introduce a continuous deformable 3D transformation in the SE(3) field to adapt the rigid body transformation to real-world problems.
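The continuous rigid-motion parameterization CRiM-GS builds on can be sketched with the standard closed-form SE(3) exponential map; this is textbook Lie-group machinery, not CRiM-GS's implementation, and the function name `se3_exp` is illustrative.

```python
import numpy as np

def se3_exp(omega, v, t):
    """Rigid camera pose at time t as exp(t * xi), xi = (omega, v) in se(3).

    omega : axis-angle rotation rate (3-vector)
    v     : translational velocity (3-vector)
    t     : time along the exposure, in [0, 1]
    """
    w = np.asarray(omega, float) * t
    u = np.asarray(v, float) * t
    theta = np.linalg.norm(w)
    K = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])  # skew-symmetric matrix of w
    if theta < 1e-8:
        R, V = np.eye(3) + K, np.eye(3)  # small-angle limit
    else:
        # Rodrigues' rotation formula and the SO(3) left Jacobian.
        R = (np.eye(3) + np.sin(theta) / theta * K
             + (1.0 - np.cos(theta)) / theta**2 * K @ K)
        V = (np.eye(3) + (1.0 - np.cos(theta)) / theta**2 * K
             + (theta - np.sin(theta)) / theta**3 * K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ u
    return T
```

Sampling t over the exposure interval yields a continuous, shape-preserving camera trajectory, which is the property the regularized rigid-body model above exploits.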
arXiv Detail & Related papers (2024-07-04T13:37:04Z) - Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion [25.54868552979793]
We present a method that adapts to camera motion and allows high-quality scene reconstruction with handheld video data.
Our results with both synthetic and real data demonstrate superior performance in mitigating camera motion over existing methods.
arXiv Detail & Related papers (2024-03-20T06:19:41Z) - DO3D: Self-supervised Learning of Decomposed Object-aware 3D Motion and Depth from Monocular Videos [76.01906393673897]
We propose a self-supervised method to jointly learn 3D motion and depth from monocular videos.
Our system contains a depth estimation module to predict depth, and a new decomposed object-wise 3D motion (DO3D) estimation module to predict ego-motion and 3D object motion.
Our model delivers superior performance in all evaluated settings.
arXiv Detail & Related papers (2024-03-09T12:22:46Z) - Diff-DOPE: Differentiable Deep Object Pose Estimation [29.703385848843414]
We introduce Diff-DOPE, a 6-DoF pose refiner that takes as input an image, a 3D textured model of an object, and an initial pose of the object.
The method uses differentiable rendering to update the object pose to minimize the visual error between the image and the projection of the model.
We show that this simple, yet effective, idea is able to achieve state-of-the-art results on pose estimation datasets.
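The Diff-DOPE refinement loop can be illustrated with a toy version in which finite differences stand in for the differentiable renderer's gradients; `refine_pose`, its signature, and the learning-rate values are hypothetical.

```python
import numpy as np

def refine_pose(render, target, pose, lr=0.1, steps=100, eps=1e-4):
    # Gradient descent on the photometric error ||render(pose) - target||^2.
    # Finite differences replace the differentiable renderer used in the paper.
    pose = np.asarray(pose, float)
    for _ in range(steps):
        base = np.sum((render(pose) - target) ** 2)
        grad = np.zeros_like(pose)
        for i in range(pose.size):
            p = pose.copy()
            p[i] += eps
            grad[i] = (np.sum((render(p) - target) ** 2) - base) / eps
        pose -= lr * grad
    return pose
```

With a real differentiable renderer the gradient comes from backpropagation instead, but the update rule is the same: step the pose parameters downhill on the visual error.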
arXiv Detail & Related papers (2023-09-30T18:52:57Z) - Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z) - MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
In the endeavor to register 2D pixels across different frames, we establish a correspondence between canonical feature embeddings that encodes 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-17T13:49:04Z) - Multi-View Object Pose Refinement With Differentiable Renderer [22.040014384283378]
This paper introduces a novel multi-view 6 DoF object pose refinement approach focusing on improving methods trained on synthetic data.
It is based on the DPOD detector, which produces dense 2D-3D correspondences between the model vertices and the image pixels in each frame.
We report excellent performance in comparison to the state-of-the-art methods trained on the synthetic and real data.
arXiv Detail & Related papers (2022-07-06T17:02:22Z) - Shape from Blur: Recovering Textured 3D Shape and Motion of Fast Moving Objects [115.71874459429381]
We address the novel task of jointly reconstructing the 3D shape, texture, and motion of an object from a single motion-blurred image.
While previous approaches address the deblurring problem only in the 2D image domain, our proposed rigorous modeling of all object properties in the 3D domain enables the correct description of arbitrary object motion.
arXiv Detail & Related papers (2021-06-16T13:18:08Z) - Deblurring by Realistic Blurring [110.54173799114785]
We propose a new method which combines two GAN models: a learning-to-blur GAN (BGAN) and a learning-to-deblur GAN (DBGAN).
The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images.
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images.
arXiv Detail & Related papers (2020-04-04T05:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.