Rethinking Blur Synthesis for Deep Real-World Image Deblurring
- URL: http://arxiv.org/abs/2209.13866v1
- Date: Wed, 28 Sep 2022 06:50:16 GMT
- Title: Rethinking Blur Synthesis for Deep Real-World Image Deblurring
- Authors: Hao Wei, Chenyang Ge, Xin Qiao, Pengchao Deng
- Abstract summary: We propose a novel realistic blur synthesis pipeline to simulate the camera imaging process.
We develop an effective deblurring model that captures non-local dependencies and local context in the feature domain simultaneously.
Comprehensive experiments on three real-world datasets show that the proposed deblurring model performs better than state-of-the-art methods.
- Score: 4.00114307523959
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we examine the problem of real-world image deblurring and take
into account two key factors for improving the performance of the deep image
deblurring model, namely, training data synthesis and network architecture
design. Deblurring models trained on existing synthetic datasets perform poorly
on real blurry images due to domain shift. To reduce the domain gap between
synthetic and real domains, we propose a novel realistic blur synthesis
pipeline to simulate the camera imaging process. As a result of our proposed
synthesis method, existing deblurring models could be made more robust to
handle real-world blur. Furthermore, we develop an effective deblurring model
that captures non-local dependencies and local context in the feature domain
simultaneously. Specifically, we introduce a multi-path transformer module into
the UNet architecture for enriched multi-scale feature learning. Comprehensive
experiments on three real-world datasets show that the proposed deblurring
model performs better than state-of-the-art methods.
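To make the first contribution more concrete, the sketch below follows the generic recipe used by realistic blur synthesis pipelines of this kind: average consecutive sharp frames in an approximately linear intensity space, inject sensor-style noise, and re-apply a simple camera response. This is a minimal, hedged illustration; the function name, noise model, and parameters are assumptions for the example and are not taken from the paper's actual pipeline.

```python
import numpy as np

def synthesize_realistic_blur(sharp_frames, gamma=2.2,
                              shot_noise_scale=0.01, read_noise_std=2.0, seed=0):
    """Illustrative blur synthesis: average sharp frames in linear space,
    add sensor-style noise, and re-apply a simple camera response.
    Stages and parameters are assumptions, not the paper's exact pipeline."""
    rng = np.random.default_rng(seed)

    # 1. Undo display gamma so averaging approximates light accumulation
    #    on the sensor (linear intensity) rather than in sRGB.
    linear = np.stack([np.clip(f, 0.0, 1.0) ** gamma for f in sharp_frames])

    # 2. Temporal averaging of consecutive sharp frames approximates the
    #    blur integrated over the exposure time.
    blur = linear.mean(axis=0)

    # 3. Signal-dependent shot noise plus signal-independent read noise,
    #    mimicking a real sensor (illustrative parameters).
    shot = rng.normal(0.0, np.sqrt(np.maximum(blur, 1e-8) * shot_noise_scale))
    read = rng.normal(0.0, read_noise_std / 255.0, size=blur.shape)
    noisy = np.clip(blur + shot + read, 0.0, 1.0)

    # 4. Re-apply a simple camera response (inverse gamma) to return to sRGB.
    return noisy ** (1.0 / gamma)

# Example: synthesize one blurry image from 8 consecutive sharp frames.
frames = [np.random.rand(256, 256, 3).astype(np.float32) for _ in range(8)]
blurry = synthesize_realistic_blur(frames)
```

A full pipeline would replace the simple gamma curve with a more faithful ISP model (white balance, color correction, tone mapping) and handle saturation explicitly; the sketch only conveys the overall structure.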
Related papers
- Teacher-Student Network for Real-World Face Super-Resolution with Progressive Embedding of Edge Information [2.280954956645056]
A real-world face super-resolution teacher-student model is proposed, which considers the domain gap between real and synthetic data.
Our proposed approach surpasses state-of-the-art methods in obtaining high-quality face images for real-world FSR.
arXiv Detail & Related papers (2024-05-08T02:48:52Z) - Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z) - WinSyn: A High Resolution Testbed for Synthetic Data [41.11481327112564]
We present WinSyn, a unique dataset and testbed for creating high-quality synthetic data with procedural modeling techniques.
The dataset contains high-resolution photographs of windows, selected from locations around the world, with 89,318 individual window crops showcasing diverse geometric and material characteristics.
We evaluate a procedural model by training semantic segmentation networks on both synthetic and real images and then comparing their performances on a shared test set of real images.
arXiv Detail & Related papers (2023-10-09T20:18:10Z) - Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z) - Domain Adaptation of Synthetic Driving Datasets for Real-World Autonomous Driving [0.11470070927586014]
Networks trained with synthetic data for certain computer vision tasks degrade significantly when tested on real-world data.
In this paper, we propose and evaluate novel ways to improve such approaches.
We propose a novel method to efficiently incorporate semantic supervision into this pair selection, which helps boost the performance of the model.
arXiv Detail & Related papers (2023-02-08T15:51:54Z) - Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z) - Blur Interpolation Transformer for Real-World Motion from Blur [52.10523711510876]
We propose a blur interpolation transformer (BiT) to unravel the underlying temporal correlation encoded in blur.
Based on multi-scale residual Swin transformer blocks, we introduce dual-end temporal supervision and temporally symmetric ensembling strategies.
In addition, we design a hybrid camera system to collect the first real-world dataset of one-to-many blur-sharp video pairs.
arXiv Detail & Related papers (2022-11-21T13:10:10Z) - Towards Real-World Video Deblurring by Exploring Blur Formation Process [53.91239555063343]
In recent years, deep learning-based approaches have achieved promising success on the video deblurring task.
However, models trained on existing synthetic datasets still suffer from generalization problems in real-world blurry scenarios.
We propose a novel realistic blur synthesis pipeline termed RAW-Blur by leveraging blur formation cues.
arXiv Detail & Related papers (2022-08-28T09:24:52Z) - Realistic Blur Synthesis for Learning Image Deblurring [20.560205377203957]
We present a novel blur synthesis pipeline that can synthesize more realistic blur.
We also present RSBlur, a novel dataset that contains real blurred images and the corresponding sequences of sharp images.
arXiv Detail & Related papers (2022-02-17T17:14:48Z) - Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.