FCL-GAN: A Lightweight and Real-Time Baseline for Unsupervised Blind
Image Deblurring
- URL: http://arxiv.org/abs/2204.07820v1
- Date: Sat, 16 Apr 2022 15:08:03 GMT
- Title: FCL-GAN: A Lightweight and Real-Time Baseline for Unsupervised Blind
Image Deblurring
- Authors: Suiyi Zhao, Zhao Zhang, Richang Hong, Mingliang Xu, Yi Yang, Meng Wang
- Abstract summary: We propose a lightweight and real-time unsupervised BID baseline, termed Frequency-domain Contrastive Loss Constrained Lightweight CycleGAN.
FCL-GAN has attractive properties, i.e., no image domain limitation, no image resolution limitation, 25x lighter than SOTA, and 5x faster than SOTA.
Experiments on several image datasets demonstrate the effectiveness of FCL-GAN in terms of performance, model size and inference time.
- Score: 72.43250555622254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blind image deblurring (BID) remains a challenging and significant task.
Benefiting from the strong fitting ability of deep learning, paired data-driven
supervised BID methods have made great progress. However, paired data are
usually synthesized by hand, and realistic blurs are more complex than
synthetic ones, which leaves supervised methods inept at modeling realistic
blurs and hinders their real-world application. As such, unsupervised deep BID
methods without paired data offer certain advantages, but current methods still
suffer from drawbacks such as bulky model size, long inference time, and
strict image resolution and domain requirements. In this paper, we propose a
lightweight and real-time unsupervised BID baseline, termed Frequency-domain
Contrastive Loss Constrained Lightweight CycleGAN (shortly, FCL-GAN), with
attractive properties, i.e., no image domain limitation, no image resolution
limitation, 25x lighter than SOTA, and 5x faster than SOTA. To guarantee the
lightweight property and performance superiority, two new collaboration units,
the lightweight domain conversion unit (LDCU) and the parameter-free
frequency-domain contrastive unit (PFCU), are designed. LDCU implements
inter-domain conversion in a lightweight manner. PFCU further explores the
similarity measure, external difference, and internal connection between
blurred-domain and sharp-domain images in the frequency domain, without
introducing extra parameters. Extensive experiments on several image datasets
demonstrate the effectiveness of our FCL-GAN in terms of performance, model
size and inference time.
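The PFCU's core idea, contrasting restored, sharp, and blurred images in the frequency domain without adding learnable parameters, can be sketched roughly as below. The InfoNCE-style formulation, the use of cosine similarity on FFT amplitude spectra, and all names here are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def freq_contrastive_loss(restored, sharp, blurred, temperature=0.1):
    """Parameter-free frequency-domain contrastive loss (illustrative sketch).

    Pulls the restored image toward the sharp domain (positive) and pushes it
    away from the blurred input (negative), measured on FFT amplitude spectra.
    No learnable parameters are involved.
    """
    def amp(x):
        # 2-D FFT amplitude spectrum, flattened to one vector per image
        return torch.abs(torch.fft.fft2(x)).flatten(1)

    a_r, a_s, a_b = amp(restored), amp(sharp), amp(blurred)
    pos = F.cosine_similarity(a_r, a_s, dim=1)  # similarity to sharp domain
    neg = F.cosine_similarity(a_r, a_b, dim=1)  # similarity to blurred domain
    logits = torch.stack([pos, neg], dim=1) / temperature
    # InfoNCE-style objective: class 0 (the sharp image) is the positive
    target = torch.zeros(restored.size(0), dtype=torch.long)
    return F.cross_entropy(logits, target)
```

Because the loss is built only from FFTs and similarity measures, it adds no parameters to the model, which is the property the PFCU is designed around.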
Related papers
- WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning [17.129068060454255]
Single image dehazing is essential for applications such as autonomous driving and surveillance.
We propose an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform.
Our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods.
arXiv Detail & Related papers (2024-10-07T05:36:11Z)
- Misalignment-Robust Frequency Distribution Loss for Image Transformation [51.0462138717502]
This paper aims to address a common challenge in deep learning-based image transformation methods, such as image enhancement and super-resolution.
We introduce a novel and simple Frequency Distribution Loss (FDL) for computing distribution distance within the frequency domain.
Our method is empirically proven effective as a training constraint due to the thoughtful utilization of global information in the frequency domain.
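A minimal sketch of the idea of a distribution distance in the frequency domain is given below. Treating each image's FFT amplitude coefficients as an empirical distribution and comparing sorted values (a 1-D Wasserstein-style distance) makes the loss insensitive to spatial misalignment; the function name and this particular formulation are assumptions, not the paper's exact FDL, which operates on deep features.

```python
import torch

def frequency_distribution_loss(pred, target):
    """Illustrative frequency-distribution distance (not the exact FDL).

    Compares the empirical distributions of FFT amplitude coefficients via
    sorted-value matching, so spatially misaligned but spectrally similar
    images incur a small penalty.
    """
    def amp_sorted(x):
        a = torch.abs(torch.fft.fft2(x)).flatten(1)  # per-image amplitude spectrum
        return torch.sort(a, dim=1).values           # sort -> empirical quantiles
    return torch.mean(torch.abs(amp_sorted(pred) - amp_sorted(target)))
```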
arXiv Detail & Related papers (2024-02-28T09:27:41Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Spatial-Frequency Attention for Image Denoising [22.993509525990998]
We propose the spatial-frequency attention network (SFANet) to enhance the network's ability in exploiting long-range dependency.
Experiments on multiple denoising benchmarks demonstrate the leading performance of SFANet network.
arXiv Detail & Related papers (2023-02-27T09:07:15Z)
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- GDIP: Gated Differentiable Image Processing for Object-Detection in Adverse Conditions [15.327704761260131]
We present a Gated Differentiable Image Processing (GDIP) block, a domain-agnostic network architecture.
Our proposed GDIP block learns to enhance images directly through the downstream object detection loss.
We demonstrate significant improvement in detection performance over several state-of-the-art methods.
arXiv Detail & Related papers (2022-09-29T16:43:13Z)
- Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution [28.00231586840797]
Real-world image super-resolution (Real-ISR) is a challenging task due to the unknown complex degradation of real-world images.
Recent research on Real-ISR has achieved significant progress by modeling the image degradation space.
We propose an efficient degradation-adaptive super-resolution (DASR) network, whose parameters are adaptively specified by estimating the degradation of each input image.
arXiv Detail & Related papers (2022-03-27T05:59:13Z)
- Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB) and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with unknown noise.
arXiv Detail & Related papers (2021-03-25T07:10:46Z)
- Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures the frequency domain consistency when applying Super-Resolution (SR) methods to the real scene.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easy-to-implement Convolutional Neural Network (CNN) SR models.
arXiv Detail & Related papers (2020-12-18T08:25:39Z)
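The pairing step described in the FCA blurb, generating a domain-consistent LR image from an HR image once a degradation kernel has been estimated, can be sketched as below. The function name and the blur-then-downsample formulation are assumptions for illustration; kernel estimation itself is treated as given.

```python
import torch
import torch.nn.functional as F

def make_lr(hr, kernel, scale=4):
    """Generate a domain-consistent LR image (illustrative of the FCA pairing
    step): blur the HR image with an estimated degradation kernel, then
    downsample by the SR scale factor."""
    c = hr.size(1)
    k = kernel.repeat(c, 1, 1, 1)                  # depthwise blur kernel (c,1,kh,kw)
    pad = kernel.shape[-1] // 2
    blurred = F.conv2d(F.pad(hr, [pad] * 4, mode="reflect"), k, groups=c)
    return blurred[:, :, ::scale, ::scale]         # simple strided downsample
```

The resulting LR-HR pairs share the blur statistics of the real scene, which is what lets the subsequently trained CNN SR models stay frequency-consistent with real inputs.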
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.