Towards Real-World Video Deblurring by Exploring Blur Formation Process
- URL: http://arxiv.org/abs/2208.13184v1
- Date: Sun, 28 Aug 2022 09:24:52 GMT
- Title: Towards Real-World Video Deblurring by Exploring Blur Formation Process
- Authors: Mingdeng Cao, Zhihang Zhong, Yanbo Fan, Jiahao Wang, Yong Zhang, Jue
Wang, Yujiu Yang, Yinqiang Zheng
- Abstract summary: In recent years, deep learning-based approaches have achieved promising success on the video deblurring task.
However, models trained on existing synthetic datasets still suffer from generalization problems on real-world blurry scenarios.
We propose a novel realistic blur synthesis pipeline, termed RAW-Blur, that leverages blur formation cues.
- Score: 53.91239555063343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores how to synthesize close-to-real blurs such that
video deblurring models trained on them generalize well to real-world blurry
videos. In recent years, deep learning-based approaches have achieved promising
success on the video deblurring task. However, models trained on existing
synthetic datasets still suffer from generalization problems on real-world
blurry scenarios, producing undesired artifacts, and the factors accounting for
this failure remain unknown. We therefore revisit the classical blur synthesis
pipeline and identify the possible causes, including shooting parameters, blur
formation space, and the image signal processor (ISP). To analyze the effects
of these potential factors, we first collect an ultra-high frame-rate (940 FPS)
RAW video dataset as the basis for synthesizing various kinds of blur. We then
propose a novel realistic blur synthesis pipeline, termed RAW-Blur, that
leverages blur formation cues. Through extensive experiments, we demonstrate
that synthesizing blur in the RAW space and adopting the same ISP as the
real-world test data can effectively eliminate the negative effects of
synthetic data. Furthermore, the shooting parameters of the synthesized blurry
video, e.g., exposure time and frame rate, play significant roles in improving
the performance of deblurring models. Impressively, models trained on blurry
data synthesized by the proposed RAW-Blur pipeline obtain more than a 5 dB PSNR
gain over those trained on existing synthetic blur datasets. We believe the
proposed realistic synthesis pipeline and the corresponding RAW video dataset
can help the community easily construct customized blur datasets that largely
improve real-world video deblurring performance, instead of laboriously
collecting real data pairs.
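The core idea described in the abstract, averaging consecutive sharp high-frame-rate frames in linear RAW space to emulate a long exposure, then passing the result through the target camera's ISP, can be sketched as follows. This is a minimal illustration, not the authors' released code: the function names, the toy gamma-only ISP, and the choice of 8 averaged frames are all hypothetical placeholders for the real pipeline.

```python
import numpy as np

def synthesize_raw_blur(raw_frames, exposure_frames):
    """Emulate motion blur by averaging consecutive linear RAW frames.

    raw_frames: sequence of (H, W) arrays of linear RAW intensities
    exposure_frames: number of frames covered by the simulated exposure
    """
    clip = np.stack(raw_frames[:exposure_frames]).astype(np.float64)
    # Averaging in linear (RAW) space models photon accumulation during
    # the exposure; averaging after a non-linear ISP would not.
    return clip.mean(axis=0)

def toy_isp(raw, gamma=2.2):
    # Hypothetical stand-in for the camera ISP: normalize to [0, 1]
    # and apply gamma encoding. A real ISP also does demosaicing,
    # white balance, color correction, tone mapping, etc.
    raw = np.clip(raw / raw.max(), 0.0, 1.0)
    return raw ** (1.0 / gamma)

# Usage: 8 sharp frames from a 940 FPS RAW clip -> one blurry RGB-like frame.
frames = [np.random.rand(4, 4) for _ in range(8)]
blurry_raw = synthesize_raw_blur(frames, exposure_frames=8)
blurry_out = toy_isp(blurry_raw)
```

The key design point, per the paper's findings, is that the blur is formed before the ISP and the same ISP as the real-world test camera is then applied, so the synthetic data matches the test-time signal chain.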
Related papers
- GS-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring [50.72230109855628]
We propose GS-Blur, a dataset of synthesized realistic blurry images created using a novel approach.
We first reconstruct 3D scenes from multi-view images using 3D Gaussian Splatting (3DGS), then render blurry images by moving the camera view along the randomly generated motion trajectories.
By adopting various camera trajectories in reconstructing our GS-Blur, our dataset contains realistic and diverse types of blur, offering a large-scale dataset that generalizes well to real-world blur.
arXiv Detail & Related papers (2024-10-31T06:17:16Z) - Expanding Synthetic Real-World Degradations for Blind Video Super Resolution [3.474523163017713]
Video super-resolution (VSR) techniques have drastically improved over the last few years and shown impressive performance on synthetic data.
However, their performance on real-world video data suffers because of the complexity of real-world degradations and misaligned video frames.
In this paper, we propose applying diverse real-world degradations to synthetic training datasets.
arXiv Detail & Related papers (2023-05-04T08:58:31Z) - ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning [102.46382882098847]
We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
arXiv Detail & Related papers (2023-03-20T12:06:14Z) - Rethinking Blur Synthesis for Deep Real-World Image Deblurring [4.00114307523959]
We propose a novel realistic blur synthesis pipeline to simulate the camera imaging process.
We develop an effective deblurring model that captures non-local dependencies and local context in the feature domain simultaneously.
A comprehensive experiment on three real-world datasets shows that the proposed deblurring model performs better than state-of-the-art methods.
arXiv Detail & Related papers (2022-09-28T06:50:16Z) - Leveraging Deepfakes to Close the Domain Gap between Real and Synthetic Images in Facial Capture Pipelines [8.366597450893456]
We propose an end-to-end pipeline for building and tracking 3D facial models from personalized in-the-wild video data.
We present a method for automatic data curation and retrieval based on a hierarchical clustering framework typical of collision algorithms in traditional computer graphics pipelines.
We outline how we train a motion capture regressor, leveraging the aforementioned techniques to avoid the need for real-world ground truth data.
arXiv Detail & Related papers (2022-04-22T15:09:49Z) - Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural feature.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z) - Realistic Blur Synthesis for Learning Image Deblurring [20.560205377203957]
We present a novel blur synthesis pipeline that can synthesize more realistic blur.
We also present RSBlur, a novel dataset that contains real blurred images and the corresponding sequences of sharp images.
arXiv Detail & Related papers (2022-02-17T17:14:48Z) - Deblurring by Realistic Blurring [110.54173799114785]
We propose a new method that combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and a learning-to-DeBlur GAN (DBGAN).
The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images.
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images.
arXiv Detail & Related papers (2020-04-04T05:25:15Z) - CycleISP: Real Image Restoration via Improved Data Synthesis [166.17296369600774]
We present a framework that models the camera imaging pipeline in both forward and reverse directions.
By training a new image denoising network on realistic synthetic data, we achieve the state-of-the-art performance on real camera benchmark datasets.
arXiv Detail & Related papers (2020-03-17T15:20:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.