Gyroscope-Assisted Motion Deblurring Network
- URL: http://arxiv.org/abs/2402.06854v1
- Date: Sat, 10 Feb 2024 01:30:24 GMT
- Title: Gyroscope-Assisted Motion Deblurring Network
- Authors: Simin Luan, Cong Yang, Zeyd Boukhers, Xue Qin, Dongfeng Cheng, Wei
Sui, Zhijun Li
- Abstract summary: This paper presents a framework to synthesize and restore motion-blurred images using Inertial Measurement Unit (IMU) data.
The framework includes a strategy for training triplet generation, and a Gyroscope-Aided Motion Deblurring (GAMD) network for blurred image restoration.
- Score: 11.404195533660717
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deblurring networks have attracted substantial attention in image
research in recent years. Yet, their practical usage in real-world deblurring, especially
motion blur, remains limited due to the lack of pixel-aligned training triplets
(background, blurred image, and blur heat map) and restricted information
inherent in blurred images. This paper presents a simple yet efficient
framework to synthesize and restore motion-blurred images using Inertial
Measurement Unit (IMU) data. Notably, the framework includes a strategy for
training triplet generation, and a Gyroscope-Aided Motion Deblurring (GAMD)
network for blurred image restoration. The rationale is that through harnessing
IMU data, we can determine the transformation of the camera pose during the
image exposure phase, facilitating the deduction of the motion trajectory (a.k.a.
blur trajectory) for each point inside the three-dimensional space. Thus, the
synthetic triplets using our strategy are inherently close to natural motion
blur, strictly pixel-aligned, and mass-producible. Through comprehensive
experiments, we demonstrate the advantages of the proposed framework: only
two-pixel errors between our synthetic and real-world blur trajectories, and a
marked improvement (around 33.17%) over the state-of-the-art deblurring method
MIMO in Peak Signal-to-Noise Ratio (PSNR).
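The abstract's core idea, that rotation-only camera motion recorded by the gyroscope determines how each pixel moves during the exposure, can be sketched as follows. This is a hypothetical illustration, not the GAMD implementation: the intrinsics `K`, the small-angle rotation approximation, and the sample values are all assumed for the example. Pixels are mapped through the rotation-induced homography H = K R K^-1.

```python
import numpy as np

# Assumed camera intrinsics for the sketch (focal length 800 px,
# principal point at the image center of a 640x480 sensor).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def rotation_from_gyro(omega, dt):
    """First-order (small-angle) rotation from one angular-velocity sample."""
    wx, wy, wz = omega * dt
    return np.array([[1.0, -wz,  wy],
                     [ wz, 1.0, -wx],
                     [-wy,  wx, 1.0]])

def blur_trajectory(pixel, gyro_samples, dt):
    """Project one pixel through the accumulated rotation homographies."""
    R = np.eye(3)
    p = np.array([pixel[0], pixel[1], 1.0])
    traj = [p[:2].copy()]
    for omega in gyro_samples:
        R = rotation_from_gyro(np.asarray(omega), dt) @ R
        H = K @ R @ np.linalg.inv(K)      # rotation-only homography
        q = H @ p
        traj.append(q[:2] / q[2])
    return np.array(traj)

# 10 gyro samples of a slow yaw during a 10 ms exposure window.
samples = [(0.0, 0.5, 0.0)] * 10
traj = blur_trajectory((320.0, 240.0), samples, dt=1e-3)
print(traj.shape)  # (11, 2): the start point plus one point per sample
```

Rasterizing such a trajectory per pixel would yield the kind of blur heat map named in the training triplets above.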
Related papers
- GS-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring [50.72230109855628]
We propose GS-Blur, a dataset of synthesized realistic blurry images created using a novel approach.
We first reconstruct 3D scenes from multi-view images using 3D Gaussian Splatting (3DGS), then render blurry images by moving the camera view along the randomly generated motion trajectories.
By adopting various camera trajectories in reconstructing our GS-Blur, our dataset contains realistic and diverse types of blur, offering a large-scale dataset that generalizes well to real-world blur.
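The rendering step described above, producing a blurry image by accumulating sharp views along a camera trajectory, amounts to averaging the rendered frames. The sketch below assumes a placeholder `render()` (a simple horizontal image shift) standing in for the 3DGS renderer:

```python
import numpy as np

def render(scene, pose):
    # Placeholder renderer: shift the scene image horizontally by the pose.
    return np.roll(scene, int(round(pose)), axis=1)

def synthesize_blur(scene, trajectory):
    """Average sharp renderings taken along a camera trajectory."""
    frames = [render(scene, pose) for pose in trajectory]
    return np.mean(frames, axis=0)

sharp = np.zeros((4, 8))
sharp[:, 2] = 1.0                 # a single bright column
trajectory = [0, 1, 2, 3]         # a horizontal camera sweep
blurry = synthesize_blur(sharp, trajectory)
print(blurry[0])                  # the bright column is smeared over 4 pixels
```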
arXiv Detail & Related papers (2024-10-31T06:17:16Z) - Motion Blur Decomposition with Cross-shutter Guidance [33.72961622720793]
Motion blur is an artifact that arises under insufficient illumination, where the exposure time has to be prolonged to collect enough photons for a sufficiently bright image.
Recent research has aimed at decomposing a blurry image into multiple sharp images with spatial and temporal coherence.
We propose to utilize the ordered scanline-wise delay in a rolling shutter image to robustify motion decomposition of a single blurry image.
arXiv Detail & Related papers (2024-04-01T13:55:40Z) - Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion [25.54868552979793]
We present a method that adapts to camera motion and allows high-quality scene reconstruction with handheld video data.
Our results with both synthetic and real data demonstrate superior performance in mitigating camera motion over existing methods.
arXiv Detail & Related papers (2024-03-20T06:19:41Z) - PASTA: Towards Flexible and Efficient HDR Imaging Via Progressively Aggregated Spatio-Temporal Alignment [91.38256332633544]
PASTA is a Progressively Aggregated Spatio-Temporal Alignment framework for HDR deghosting.
Our approach achieves effectiveness and efficiency by harnessing hierarchical representation during feature disentanglement.
Experimental results showcase PASTA's superiority over current SOTA methods in both visual quality and performance metrics.
arXiv Detail & Related papers (2024-03-15T15:05:29Z) - ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with existing works, our approach restores much sharper 3D scenes with about 10 times less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z) - Blur Interpolation Transformer for Real-World Motion from Blur [52.10523711510876]
We propose a blur interpolation transformer (BiT) to unravel the underlying temporal correlation encoded in blur.
Based on multi-scale residual Swin transformer blocks, we introduce dual-end temporal supervision and temporally symmetric ensembling strategies.
In addition, we design a hybrid camera system to collect the first real-world dataset of one-to-many blur-sharp video pairs.
arXiv Detail & Related papers (2022-11-21T13:10:10Z) - A Constrained Deformable Convolutional Network for Efficient Single
Image Dynamic Scene Blind Deblurring with Spatially-Variant Motion Blur
Kernels Estimation [12.744989551644744]
We propose a novel constrained deformable convolutional network (CDCN) for efficient single image dynamic scene blind deblurring.
CDCN simultaneously achieves accurate spatially-variant motion blur kernel estimation and high-quality image restoration.
arXiv Detail & Related papers (2022-08-23T03:28:21Z) - Efficient Video Deblurring Guided by Motion Magnitude [37.25713728458234]
We propose a novel framework that utilizes the motion magnitude prior (MMP) as guidance for efficient deep video deblurring.
The MMP consists of both spatial and temporal blur level information, which can be further integrated into an efficient recurrent neural network (RNN) for video deblurring.
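The blur-level guidance described above can be illustrated with a toy version of a motion magnitude prior. The choice of mean optical-flow magnitude over frames sampled during the exposure is an assumption for this sketch, not the paper's exact definition:

```python
import numpy as np

def motion_magnitude_prior(flows):
    """Per-pixel blur-level map from a list of (H, W, 2) optical flow fields,
    taken here (hypothetically) as the mean flow magnitude over the exposure."""
    mags = [np.linalg.norm(f, axis=-1) for f in flows]
    return np.mean(mags, axis=0)          # (H, W) blur-level map

flow_a = np.zeros((2, 2, 2)); flow_a[..., 0] = 3.0   # 3-px horizontal motion
flow_b = np.zeros((2, 2, 2)); flow_b[..., 1] = 4.0   # 4-px vertical motion
mmp = motion_magnitude_prior([flow_a, flow_b])
print(mmp[0, 0])  # (3 + 4) / 2 = 3.5
```

A map like this could then be fed to the deblurring RNN as an extra input channel indicating where blur is strongest.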
arXiv Detail & Related papers (2022-07-27T08:57:48Z) - Blind Motion Deblurring Super-Resolution: When Dynamic Spatio-Temporal
Learning Meets Static Image Understanding [87.5799910153545]
Single-image super-resolution (SR) and multi-frame SR are two ways to super-resolve low-resolution images.
A Blind Motion Deblurring Super-Resolution Network is proposed to learn dynamic spatio-temporal information from single static motion-blurred images.
arXiv Detail & Related papers (2021-05-27T11:52:45Z) - Single Image Non-uniform Blur Kernel Estimation via Adaptive Basis
Decomposition [1.854931308524932]
We propose a general, non-parametric model for dense non-uniform motion blur estimation.
We show that our method overcomes the limitations of existing non-uniform motion blur estimation methods.
arXiv Detail & Related papers (2021-02-01T18:02:31Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera
Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) a non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
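The three-stage formation pipeline named above can be written out directly. The specific camera response function is assumed here to be a simple gamma curve; real cameras use calibrated, device-specific curves:

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=1.0 / 2.2):
    x = np.clip(hdr * exposure, 0.0, 1.0)    # (1) dynamic range clipping
    x = np.power(x, gamma)                   # (2) camera response function (assumed gamma)
    return np.round(x * 255.0) / 255.0       # (3) 8-bit quantization

hdr = np.array([0.0, 0.5, 1.5, 4.0])         # linear scene radiance values
ldr = hdr_to_ldr(hdr)
print(ldr)                                    # values above 1.0 saturate to 1.0
```

Reversing the pipeline means undoing these three lossy steps, which is exactly what makes single-image HDR reconstruction ill-posed.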
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.