Gated Fusion Network for Degraded Image Super Resolution
- URL: http://arxiv.org/abs/2003.00893v2
- Date: Wed, 4 Mar 2020 10:47:16 GMT
- Title: Gated Fusion Network for Degraded Image Super Resolution
- Authors: Xinyi Zhang, Hang Dong, Zhe Hu, Wei-Sheng Lai, Fei Wang, Ming-Hsuan Yang
- Abstract summary: We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
- Score: 78.67168802945069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single image super resolution aims to enhance image quality with respect to spatial content, and is a fundamental task in computer vision. In this work, we address single frame super resolution in the presence of image degradation, e.g., blur, haze, or rain streaks. Due to the limitations of frame capturing and formation processes, image degradation is inevitable, and the resulting artifacts are exacerbated by super resolution methods. To address this problem, we propose a dual-branch convolutional neural network that extracts base features and recovered features separately. The base features contain local and global information about the input image, while the recovered features focus on the degraded regions and are used to remove the degradation. These features are then fused through a recursive gate module to obtain sharp features for super resolution. By decomposing feature extraction into two task-independent streams, the dual-branch model eases training by avoiding learning the mixed degradations all at once, and thus improves the final high-resolution predictions. We evaluate the proposed method in three degradation scenarios, and experiments on these scenarios demonstrate that it performs efficiently and favorably against state-of-the-art approaches on benchmark datasets.
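As an illustration of the fusion described in the abstract, below is a minimal PyTorch sketch of combining a base-feature stream and a recovered-feature stream through a recursive gate; the branch depths, channel sizes, and number of gating steps are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Minimal sketch of dual-branch feature extraction with a recursive
    gate (layer sizes and structure are assumptions, not the paper's code)."""

    def __init__(self, channels=64):
        super().__init__()
        # Base branch: local/global content features of the input.
        self.base_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Recovery branch: features focused on the degraded regions.
        self.recover_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Gate: predicts per-pixel mixing weights from both feature maps.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x, steps=3):
        base = self.base_branch(x)
        recovered = self.recover_branch(x)
        fused = base
        # Recursive gating: reapply the gate, refining the mix each step.
        for _ in range(steps):
            g = self.gate(torch.cat([fused, recovered], dim=1))
            fused = g * recovered + (1.0 - g) * base
        return fused  # sharp features, to be fed to an upsampling tail


# Example: fused = GatedFusion()(torch.randn(1, 3, 64, 64))
```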
Related papers
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize the pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z) - Gated Multi-Resolution Transfer Network for Burst Restoration and
Enhancement [75.25451566988565]
We propose a novel Gated Multi-Resolution Transfer Network (GMTNet) to reconstruct a spatially precise high-quality image from a burst of low-quality raw images.
Detailed experimental analysis on five datasets validates our approach and sets a state-of-the-art for burst super-resolution, burst denoising, and low-light burst enhancement.
arXiv Detail & Related papers (2023-04-13T17:54:00Z) - Multi-Modal and Multi-Resolution Data Fusion for High-Resolution Cloud Removal: A Novel Baseline and Benchmark [21.255966041023083]
We introduce M3R-CR, a benchmark dataset for high-resolution Cloud Removal with Multi-Modal and Multi-Resolution data fusion.
We consider the problem of cloud removal in high-resolution optical remote sensing imagery by integrating multi-modal and multi-resolution information.
We design a new baseline named Align-CR to perform the low-resolution SAR image guided high-resolution optical image cloud removal.
arXiv Detail & Related papers (2023-01-09T15:31:28Z) - Invertible Rescaling Network and Its Extensions [118.72015270085535]
In this work, we propose a novel invertible framework to model the bidirectional degradation and restoration from a new perspective.
We develop invertible models to generate valid degraded images and transform the distribution of lost contents.
Then restoration is made tractable by applying the inverse transformation on the generated degraded image together with a randomly-drawn latent variable.
arXiv Detail & Related papers (2022-10-09T06:58:58Z) - Hierarchical Similarity Learning for Aliasing Suppression Image
Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than prior works and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z) - Dual Perceptual Loss for Single Image Super-Resolution Using ESRGAN [13.335546116599494]
This paper proposes a Dual Perceptual Loss (DP Loss) to replace the original perceptual loss in single image super-resolution reconstruction.
Because VGG features and ResNet features are complementary, the proposed DP Loss exploits the advantages of learning from both simultaneously.
Qualitative and quantitative analysis on benchmark datasets demonstrates the superiority of the proposed method over state-of-the-art super-resolution methods.
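A minimal PyTorch sketch of such a dual perceptual loss follows; the chosen VGG/ResNet layers and the equal 1:1 weighting are assumptions, not the paper's exact settings.

```python
import torch.nn as nn
from torchvision import models

class DualPerceptualLoss(nn.Module):
    """Sketch of a DP-style loss: compare SR and HR images in both a VGG
    feature space and a ResNet feature space (layer picks and weighting
    are assumptions; inputs are expected ImageNet-normalized)."""

    def __init__(self):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features
        self.vgg = nn.Sequential(*list(vgg)[:36]).eval()  # up to relu5_4
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.resnet = nn.Sequential(*list(resnet.children())[:-2]).eval()
        for p in self.parameters():
            p.requires_grad_(False)  # fixed, pre-trained feature extractors
        self.l1 = nn.L1Loss()

    def forward(self, sr, hr):
        # Complementary spaces: VGG (texture-oriented) and ResNet (more
        # structural), matching the complementarity noted above.
        return (self.l1(self.vgg(sr), self.vgg(hr))
                + self.l1(self.resnet(sr), self.resnet(hr)))
```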
arXiv Detail & Related papers (2022-01-17T12:42:56Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.