COLA-Net: Collaborative Attention Network for Image Restoration
- URL: http://arxiv.org/abs/2103.05961v1
- Date: Wed, 10 Mar 2021 09:33:17 GMT
- Title: COLA-Net: Collaborative Attention Network for Image Restoration
- Authors: Chong Mou, Jian Zhang, Xiaopeng Fan, Hangfan Liu, Ronggang Wang
- Abstract summary: We propose a novel collaborative attention network (COLA-Net) for image restoration.
Our proposed COLA-Net is able to achieve state-of-the-art performance in both peak signal-to-noise ratio and visual perception.
- Score: 27.965025010397603
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Local and non-local attention-based methods have been well studied in various
image restoration tasks while leading to promising performance. However, most
of the existing methods solely focus on one type of attention mechanism (local
or non-local). Furthermore, by exploiting the self-similarity of natural
images, existing pixel-wise non-local attention operations tend to give rise to
deviations in the process of characterizing long-range dependence due to image
degeneration. To overcome these problems, in this paper we propose a novel
collaborative attention network (COLA-Net) for image restoration, as the first
attempt to combine local and non-local attention mechanisms to restore image
content in the areas with complex textures and with highly repetitive details
respectively. In addition, an effective and robust patch-wise non-local
attention model is developed to capture long-range feature correspondences
through 3D patches. Extensive experiments on synthetic image denoising, real
image denoising and compression artifact reduction tasks demonstrate that our
proposed COLA-Net is able to achieve state-of-the-art performance in both peak
signal-to-noise ratio and visual perception, while maintaining an attractive
computational complexity. The source code is available at
https://github.com/MC-E/COLA-Net.
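The abstract's patch-wise non-local attention can be illustrated with a minimal NumPy sketch: extract 3D patches from a feature map, compute pairwise patch similarities, and reconstruct each patch as a similarity-weighted sum of all patches. The function name, patch size, stride, and dot-product similarity below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def patchwise_nonlocal_attention(feat, patch=2, stride=2, temperature=1.0):
    """Hedged sketch of patch-wise non-local attention over 3D patches.

    feat: (C, H, W) feature map. Non-overlapping (C, patch, patch) blocks
    are flattened, compared by dot product, softmax-normalized, and each
    block is replaced by the attention-weighted sum of all blocks.
    """
    C, H, W = feat.shape
    patches, coords = [], []
    # Extract 3D patches (full channel depth) on a regular grid.
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            patches.append(feat[:, i:i+patch, j:j+patch].ravel())
            coords.append((i, j))
    P = np.stack(patches)                      # (N, C*patch*patch)
    # Pairwise dot-product similarity, softmax-normalized per query patch.
    sim = (P @ P.T) / temperature              # (N, N)
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)
    # Each patch becomes a weighted aggregate of all patches (long-range).
    out_patches = attn @ P                     # (N, C*patch*patch)
    # Fold the aggregated patches back into a feature map.
    out = np.zeros_like(feat)
    for vec, (i, j) in zip(out_patches, coords):
        out[:, i:i+patch, j:j+patch] = vec.reshape(C, patch, patch)
    return out
```

Matching whole patches rather than single pixels is what makes the correlation estimate more robust to degradation: a noisy pixel can match spurious locations, while a 3D patch carries enough context to find genuinely similar regions.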
Related papers
- Attention Overlap Is Responsible for The Entity Missing Problem in Text-to-image Diffusion Models! [3.355491272942994]
This study examines three potential causes of the entity-missing problem, focusing on cross-attention dynamics.
We found that reducing overlap in attention maps between entities can effectively minimize the rate of entity missing.
arXiv Detail & Related papers (2024-10-28T12:43:48Z)
- Empowering Image Recovery_ A Multi-Attention Approach [96.25892659985342]
Diverse Restormer (DART) is an image restoration method that integrates information from various sources to address restoration challenges.
DART employs customized attention mechanisms to enhance overall performance.
Evaluation across five restoration tasks consistently positions DART at the forefront.
arXiv Detail & Related papers (2024-04-06T12:50:08Z)
- CiaoSR: Continuous Implicit Attention-in-Attention Network for Arbitrary-Scale Image Super-Resolution [158.2282163651066]
This paper proposes a continuous implicit attention-in-attention network, called CiaoSR.
We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features.
We embed a scale-aware attention in this implicit attention network to exploit additional non-local information.
arXiv Detail & Related papers (2022-12-08T15:57:46Z)
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- Attention-based Image Upsampling [14.676228848773157]
We show how attention mechanisms can be used to replace another canonical operation: strided transposed convolution.
We show that attention-based upsampling consistently outperforms traditional upsampling methods.
arXiv Detail & Related papers (2020-12-17T19:58:10Z)
- Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining [66.82470461139376]
We propose the first Cross-Scale Non-Local (CS-NL) attention module with integration into a recurrent neural network.
By combining the new CS-NL prior with local and in-scale non-local priors in a powerful recurrent fusion cell, we can find more cross-scale feature correlations within a single low-resolution image.
arXiv Detail & Related papers (2020-06-02T07:08:58Z)
- Reconstructing the Noise Manifold for Image Denoising [56.562855317536396]
We introduce the idea of a cGAN which explicitly leverages structure in the image noise space.
By directly learning a low-dimensional manifold of the image noise, the generator removes from the noisy image only the information that spans this manifold.
Based on our experiments, our model substantially outperforms existing state-of-the-art architectures.
arXiv Detail & Related papers (2020-02-11T00:31:31Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.