Realistic Blur Synthesis for Learning Image Deblurring
- URL: http://arxiv.org/abs/2202.08771v1
- Date: Thu, 17 Feb 2022 17:14:48 GMT
- Title: Realistic Blur Synthesis for Learning Image Deblurring
- Authors: Jaesung Rim, Geonung Kim, Jungeon Kim, Junyong Lee, Seungyong Lee,
Sunghyun Cho
- Abstract summary: We present a novel blur synthesis pipeline that can synthesize more realistic blur.
We also present RSBlur, a novel dataset that contains real blurred images and the corresponding sequences of sharp images.
- Score: 20.560205377203957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training learning-based deblurring methods demands a significant amount of
blurred and sharp image pairs. Unfortunately, existing synthetic datasets are
not realistic enough, and existing real-world blur datasets provide limited
diversity of scenes and camera settings. As a result, deblurring models trained
on them still suffer from the lack of generalization ability for handling real
blurred images. In this paper, we analyze various factors that introduce
differences between real and synthetic blurred images, and present a novel blur
synthesis pipeline that can synthesize more realistic blur. We also present
RSBlur, a novel dataset that contains real blurred images and the corresponding
sequences of sharp images. The RSBlur dataset can be used for generating
synthetic blurred images to enable detailed analysis on the differences between
real and synthetic blur. With our blur synthesis pipeline and RSBlur dataset,
we reveal the effects of different factors in the blur synthesis. We also show
that our synthesis method can improve the deblurring performance on real
blurred images.
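The analysis above rests on the standard blur formation model: a real blurred image is the temporal average of sharp frames captured during the exposure, degraded by sensor saturation and noise before the camera ISP. A minimal NumPy sketch of that model (the frame count, noise parameters, and simple gamma ISP are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def synthesize_blur(sharp_frames, read_noise_std=0.01, gamma=2.2):
    """Approximate a real blurred image from a sharp frame sequence.

    sharp_frames: (N, H, W, C) float array in [0, 1], linear intensity.
    """
    # 1. Temporal average models the exposure integral.
    blurred = np.mean(sharp_frames, axis=0)
    # 2. Clip to emulate sensor saturation in bright regions.
    blurred = np.clip(blurred, 0.0, 1.0)
    # 3. Add signal-dependent shot noise (Gaussian approximation)
    #    plus signal-independent read noise.
    shot = np.random.normal(0.0, np.sqrt(blurred * 1e-3))
    read = np.random.normal(0.0, read_noise_std, blurred.shape)
    noisy = np.clip(blurred + shot + read, 0.0, 1.0)
    # 4. A simple gamma curve stands in for the full camera ISP.
    return noisy ** (1.0 / gamma)

frames = np.random.rand(9, 32, 32, 3)  # hypothetical 9-frame sequence
blurry = synthesize_blur(frames)
```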
Related papers
- GS-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring [50.72230109855628]
We propose GS-Blur, a dataset of synthesized realistic blurry images created using a novel approach.
We first reconstruct 3D scenes from multi-view images using 3D Gaussian Splatting (3DGS), then render blurry images by moving the camera view along the randomly generated motion trajectories.
By adopting various camera trajectories in reconstructing our GS-Blur, our dataset contains realistic and diverse types of blur, offering a large-scale dataset that generalizes well to real-world blur.
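Rendering blur this way requires a smooth random camera path to integrate frames over. A simple stand-in for such a path (not GS-Blur's actual trajectory sampler) is to interpolate a handful of random 3D control points:

```python
import numpy as np

def random_trajectory(n_frames=17, n_control=4, scale=0.05, seed=None):
    """Sample a smooth random camera path by linearly interpolating a few
    random 3D control points; a simplified stand-in for the randomly
    generated trajectories described above."""
    rng = np.random.default_rng(seed)
    control = rng.normal(0.0, scale, size=(n_control, 3))
    t = np.linspace(0.0, 1.0, n_frames)   # frame positions along the path
    tc = np.linspace(0.0, 1.0, n_control) # control-point positions
    # Interpolate each coordinate independently.
    return np.stack([np.interp(t, tc, control[:, k]) for k in range(3)],
                    axis=1)

path = random_trajectory(seed=0)  # (17, 3) camera offsets to render from
```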
arXiv Detail & Related papers (2024-10-31T06:17:16Z)
- Bridging Synthetic and Real Worlds for Pre-training Scene Text Detectors [54.80516786370663]
FreeReal is a real-domain-aligned pre-training paradigm that leverages the complementary strengths of LSD and real data.
GlyphMix embeds synthetic images as graffiti-like units onto real images.
FreeReal consistently outperforms previous pre-training methods by a substantial margin across four public datasets.
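The graffiti-like mixing idea amounts to compositing synthetic content onto real backgrounds. A toy alpha-compositing sketch (the placement and blending weight are hypothetical, not GlyphMix's actual procedure):

```python
import numpy as np

def paste_patch(real_img, synth_patch, top, left, alpha=0.8):
    """Alpha-composite a synthetic patch onto a real image at (top, left).
    A simplified stand-in for graffiti-like mixing."""
    out = real_img.copy()
    h, w = synth_patch.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * synth_patch + (1 - alpha) * region
    return out

real = np.zeros((64, 64, 3))   # stand-in real background
patch = np.ones((16, 16, 3))   # stand-in synthetic unit
mixed = paste_patch(real, patch, top=8, left=8)
```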
arXiv Detail & Related papers (2023-12-08T15:10:55Z)
- ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning [102.46382882098847]
We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
arXiv Detail & Related papers (2023-03-20T12:06:14Z)
- Rethinking Blur Synthesis for Deep Real-World Image Deblurring [4.00114307523959]
We propose a novel realistic blur synthesis pipeline to simulate the camera imaging process.
We develop an effective deblurring model that captures non-local dependencies and local context in the feature domain simultaneously.
Comprehensive experiments on three real-world datasets show that the proposed deblurring model outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-09-28T06:50:16Z)
- Towards Real-World Video Deblurring by Exploring Blur Formation Process [53.91239555063343]
In recent years, deep learning-based approaches have achieved promising success on the video deblurring task.
However, models trained on existing synthetic datasets still generalize poorly to real-world blurry scenarios.
We propose a novel realistic blur synthesis pipeline termed RAW-Blur by leveraging blur formation cues.
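One central blur formation cue is that exposure integration happens in linear RAW intensities, so averaging frames after a nonlinear camera response (the common shortcut on already-processed frames) yields different blur than averaging before it. A small NumPy illustration, with a gamma curve as a stand-in for the full ISP:

```python
import numpy as np

def blur_in_raw(frames, gamma=2.2):
    """Average in linear RAW space, then apply the response
    (closer to real blur formation)."""
    return np.mean(frames, axis=0) ** (1.0 / gamma)

def blur_in_srgb(frames, gamma=2.2):
    """Average gamma-encoded frames (the less realistic shortcut)."""
    return np.mean(frames ** (1.0 / gamma), axis=0)

frames = np.array([[0.0], [1.0]])  # two linear frames: dark and bright
# The averaging order matters because the response is nonlinear:
raw_avg = blur_in_raw(frames)    # (0.5) ** (1/2.2), about 0.73
srgb_avg = blur_in_srgb(frames)  # (0.0 + 1.0) / 2 = 0.5
```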
arXiv Detail & Related papers (2022-08-28T09:24:52Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Can Synthetic Data Improve Object Detection Results for Remote Sensing Images? [15.466412729455874]
We propose the use of realistic synthetic data with a wide distribution to improve the performance of remote sensing image aircraft detection.
During rendering, we randomize parameters such as the instance size and the class of the background images.
In order to make the synthetic images more realistic, we refine the synthetic images at the pixel level using CycleGAN with real unlabeled images.
arXiv Detail & Related papers (2020-06-09T02:23:22Z)
- Deblurring by Realistic Blurring [110.54173799114785]
We propose a new method that combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and a learning-to-DeBlur GAN (DBGAN).
The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images.
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images.
arXiv Detail & Related papers (2020-04-04T05:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.