Learning Deep Interleaved Networks with Asymmetric Co-Attention for
Image Restoration
- URL: http://arxiv.org/abs/2010.15689v1
- Date: Thu, 29 Oct 2020 15:32:00 GMT
- Title: Learning Deep Interleaved Networks with Asymmetric Co-Attention for
Image Restoration
- Authors: Feng Li, Runmin Cong, Huihui Bai, Yifan He, Yao Zhao, and Ce Zhu
- Abstract summary: We present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction.
In this paper, we propose asymmetric co-attention (AsyCA) which is attached at each interleaved node to model the feature dependencies.
Our presented DIN can be trained end-to-end and applied to various image restoration tasks.
- Score: 65.11022516031463
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, convolutional neural network (CNN) has demonstrated significant
success for image restoration (IR) tasks (e.g., image super-resolution, image
deblurring, rain streak removal, and dehazing). However, existing CNN-based
models are commonly implemented as a single-path stream to enrich feature
representations from low-quality (LQ) input space for final predictions, which
fail to fully incorporate preceding low-level contexts into later high-level
features within networks, thereby producing inferior results. In this paper, we
present a deep interleaved network (DIN) that learns how information at
different states should be combined for high-quality (HQ) image
reconstruction. The proposed DIN follows a multi-path and multi-branch pattern
allowing multiple interconnected branches to interleave and fuse at different
states. In this way, shallow information can guide the prediction of deep
representative features, enhancing the feature expression ability of the
network. Furthermore, we
propose asymmetric co-attention (AsyCA) which is attached at each interleaved
node to model the feature dependencies. Such AsyCA can not only adaptively
emphasize the informative features from different states, but also improve the
discriminative ability of the network. Our presented DIN can be trained end-to-end
and applied to various IR tasks. Comprehensive evaluations on public benchmarks
and real-world datasets demonstrate that the proposed DIN performs favorably
against state-of-the-art methods both quantitatively and qualitatively.
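The abstract's description of AsyCA (attention at each interleaved node that adaptively weights features arriving from different states) can be made concrete with a small sketch. The following PyTorch snippet is a minimal, hypothetical reading of such a fusion node, not the authors' released code; the module name, reduction ratio, and two-stream setup are all assumptions.

```python
import torch
import torch.nn as nn

class AsyCA(nn.Module):
    """Sketch of an asymmetric co-attention fusion node: channel-wise
    softmax weights decide how features from two states are mixed."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Joint descriptor of both incoming states.
        mixed = self.fuse(torch.cat([shallow, deep], dim=1))
        # Per-channel weights for each of the two sources.
        w = self.mlp(self.pool(mixed))                     # (B, 2C, 1, 1)
        w = w.view(w.size(0), 2, -1, 1, 1).softmax(dim=1)  # softmax across sources
        return w[:, 0] * shallow + w[:, 1] * deep

x_shallow = torch.randn(1, 64, 32, 32)  # features from an earlier state
x_deep = torch.randn(1, 64, 32, 32)     # features from a later state
fused = AsyCA(64)(x_shallow, x_deep)    # (1, 64, 32, 32)
```

The softmax across the two sources is what makes the fusion adaptive: for each channel, the node can lean on the shallow stream, the deep stream, or any mixture of the two.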
Related papers
- TOPIQ: A Top-down Approach from Semantics to Distortions for Image
Quality Assessment [53.72721476803585]
Image Quality Assessment (IQA) is a fundamental task in computer vision that has witnessed remarkable progress with deep neural networks.
We propose a top-down approach that uses high-level semantics to guide the IQA network to focus on semantically important local distortion regions.
A key component of our approach is the proposed cross-scale attention mechanism, which calculates attention maps for lower-level features.
arXiv Detail & Related papers (2023-08-06T09:08:37Z)
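TOPIQ's actual cross-scale attention is more elaborate than a single layer, but the idea the entry above describes, high-level semantics steering attention over lower-level features, can be sketched generically. In the hypothetical snippet below, semantic features provide the queries and lower-level features provide keys and values; all names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossScaleAttention(nn.Module):
    """Generic sketch: high-level (semantic) tokens query lower-level
    tokens, so semantics decide which low-level regions get attended."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        # high: (B, C, Hh, Wh); low: (B, C, Hl, Wl) -> token sequences
        q = high.flatten(2).transpose(1, 2)   # (B, Hh*Wh, C)
        kv = low.flatten(2).transpose(1, 2)   # (B, Hl*Wl, C)
        out, _ = self.attn(q, kv, kv)         # attend over low-level tokens
        return out.transpose(1, 2).reshape(high.shape)

high = torch.randn(2, 64, 8, 8)         # high-level semantic features
low = torch.randn(2, 64, 32, 32)        # lower-level detail features
y = CrossScaleAttention(64)(high, low)  # (2, 64, 8, 8)
```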
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both full-reference (FR) and no-reference (NR) modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep convolutional neural network (CNN) using a contrastive pairwise objective to solve an auxiliary prediction problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance compared to state-of-the-art no-reference (NR) image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
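CONTRIQUE's exact auxiliary objective is not spelled out in the entry above, so the snippet below only sketches a generic contrastive pairwise loss of the kind mentioned: embeddings of two views of the same image are pulled together while all other pairs in the batch are pushed apart. Function and variable names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent-style loss: (z1[i], z2[i]) are positives, everything
    else in the batch is a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))      # positives on the diagonal
    return F.cross_entropy(logits, targets)

z1 = torch.randn(16, 128)  # CNN embeddings of view 1
z2 = torch.randn(16, 128)  # CNN embeddings of view 2
loss = pairwise_contrastive_loss(z1, z2)
```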
- A Singular Value Perspective on Model Robustness [14.591622269748974]
We show that naturally trained and adversarially robust CNNs exploit highly different features for the same dataset.
We propose Rank Integrated Gradients (RIG), the first rank-based feature attribution method to understand the dependence of CNNs on image rank.
arXiv Detail & Related papers (2020-12-07T08:09:07Z)
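RIG itself is a feature-attribution method; the snippet below only illustrates the underlying notion of "image rank" the entry above appeals to, a rank-k approximation of an image obtained by truncating its singular value decomposition. This is a generic illustration, not the paper's code.

```python
import torch

def low_rank_approximation(img: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the top-k singular values of a (H, W) image channel."""
    U, S, Vh = torch.linalg.svd(img, full_matrices=False)
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

img = torch.rand(64, 64)                   # one image channel
img_r8 = low_rank_approximation(img, k=8)  # rank-8 version
err = torch.linalg.norm(img - img_r8) / torch.linalg.norm(img)
```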
- Deep Interleaved Network for Image Super-Resolution With Asymmetric Co-Attention [11.654141322782074]
We propose a deep interleaved network (DIN) to learn how information at different states should be combined for image SR.
Our DIN follows a multi-branch pattern allowing multiple interconnected branches to interleave and fuse at different states.
Besides, the asymmetric co-attention (AsyCA) is proposed and attached to the interleaved nodes to adaptively emphasize informative features from different states.
arXiv Detail & Related papers (2020-04-24T15:49:18Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)