Image deblurring based on lightweight multi-information fusion network
- URL: http://arxiv.org/abs/2101.05403v1
- Date: Thu, 14 Jan 2021 00:37:37 GMT
- Title: Image deblurring based on lightweight multi-information fusion network
- Authors: Yanni Zhang, Yiming Liu, Qiang Li, Miao Qi, Dahong Xu, Jun Kong,
Jianzhong Wang
- Abstract summary: We propose a lightweight multi-information fusion network (LMFN) for image deblurring.
In the encoding stage, the image features are reduced to various small-scale spaces for multi-scale information extraction and fusion.
Then, a distillation network is used in the decoding stage, which allows the network to benefit the most from residual learning.
Our network achieves state-of-the-art image deblurring results with fewer parameters and lower model complexity than existing methods.
- Score: 6.848061582669787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, deep learning based image deblurring has been well developed.
However, exploiting detailed image features in a deep learning framework
usually requires a large number of parameters, which inevitably makes the
network suffer from a high computational burden. To solve this problem, we
propose a lightweight multi-information fusion network (LMFN) for image
deblurring. The proposed LMFN is designed as an encoder-decoder architecture.
In the encoding stage, the image features are reduced to various small-scale
spaces for multi-scale information extraction and fusion without a large
amount of information loss. Then, a distillation network is used in the
decoding stage, which allows the network to benefit the most from residual
learning while remaining sufficiently lightweight. Meanwhile, information
fusion between distillation modules and feature channels is carried out via an
attention mechanism. By fusing these different kinds of information, our
network achieves state-of-the-art image deblurring results with fewer
parameters and lower model complexity than existing methods.
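The abstract does not spell out the attention module used to fuse information across feature channels. As a rough, illustrative sketch of the general idea, a squeeze-and-excitation-style channel attention (one common channel-fusion mechanism, not necessarily the paper's exact design) can be written in NumPy; the weight shapes and reduction ratio below are assumptions for the example.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    features: array of shape (C, H, W) -- feature maps to re-weight.
    w1, w2:   weight matrices of the two-layer bottleneck gating MLP.
    """
    # Squeeze: global average pooling collapses each channel map to a scalar.
    squeezed = features.mean(axis=(1, 2))            # shape (C,)
    # Excitation: bottleneck MLP followed by a sigmoid gate per channel.
    hidden = np.maximum(0.0, w1 @ squeezed)          # ReLU, shape (C//r,)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # shape (C,), values in (0, 1)
    # Re-weight each channel by its learned gate.
    return features * gates[:, None, None]

# Toy usage with random weights (in practice w1 and w2 are learned).
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feats = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(feats, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because each gate lies in (0, 1), the module can only scale channels down; the network's residual connections are what keep information from being lost outright.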
Related papers
- Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring [25.36888929483233]
We propose a multi-scale network based on a single-input, multiple-output (SIMO) design for motion deblurring.
We combine the characteristics of real-world trajectories with a learnable wavelet transform module to focus on the directional continuity and frequency features of the step-by-step transition from blurred to sharp images.
arXiv Detail & Related papers (2023-12-29T02:59:40Z)
- LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm producing the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z)
- CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Multi-scale frequency separation network for image deblurring [10.511076996096117]
We present a new method called multi-scale frequency separation network (MSFS-Net) for image deblurring.
MSFS-Net captures the low- and high-frequency information of the image at multiple scales.
Experiments on benchmark datasets show that the proposed network achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-06-01T23:48:35Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations throughout the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Lightweight Image Super-Resolution with Multi-scale Feature Interaction Network [15.846394239848959]
We present a lightweight multi-scale feature interaction network (MSFIN).
For lightweight SISR, MSFIN expands the receptive field and adequately exploits the informative features of the low-resolution observed images.
Our proposed MSFIN achieves performance comparable to state-of-the-art methods with a more lightweight model.
arXiv Detail & Related papers (2021-03-24T07:25:21Z)
- Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.