Accurate Image Restoration with Attention Retractable Transformer
- URL: http://arxiv.org/abs/2210.01427v1
- Date: Tue, 4 Oct 2022 07:35:01 GMT
- Title: Accurate Image Restoration with Attention Retractable Transformer
- Authors: Jiale Zhang and Yulun Zhang and Jinjin Gu and Yongbing Zhang and
Linghe Kong and Xin Yuan
- Abstract summary: We propose Attention Retractable Transformer (ART) for image restoration.
ART presents both dense and sparse attention modules in the network.
We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks.
- Score: 50.05204240159985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Transformer-based image restoration networks have achieved
promising improvements over convolutional neural networks due to
parameter-independent global interactions. To lower computational cost,
existing works generally limit self-attention computation within
non-overlapping windows. However, each group of tokens is always drawn from a dense
area of the image. We regard this as a dense attention strategy, since token
interactions are confined to dense regions. This strategy inevitably restricts
the receptive field. To address this issue, we
propose Attention Retractable Transformer (ART) for image restoration, which
presents both dense and sparse attention modules in the network. The sparse
attention module allows tokens from sparse areas to interact and thus provides
a wider receptive field. Furthermore, alternating dense and sparse attention
modules greatly enhances the representation ability of the Transformer
while providing retractable attention on the input image. We conduct extensive
experiments on image super-resolution, denoising, and JPEG compression artifact
reduction tasks. Experimental results validate that our proposed ART
outperforms state-of-the-art methods on various benchmark datasets both
quantitatively and visually. Code and models are available at
https://github.com/gladzhang/ART.
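As a quick illustration of the retractable design (a minimal sketch, not the authors' code): dense attention groups tokens from non-overlapping local windows, while sparse attention groups tokens sampled at a fixed interval across the whole image, so each group spans distant positions. The window size, interval, and feature sizes below are illustrative assumptions.
```python
import torch

def dense_groups(x, window=8):
    # (B, H, W, C) -> (B * num_windows, window*window, C): each group is one
    # non-overlapping local window, i.e. the dense attention token grouping.
    B, H, W, C = x.shape
    x = x.view(B, H // window, window, W // window, window, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window * window, C)

def sparse_groups(x, interval=4):
    # (B, H, W, C) -> (B * interval^2, (H//interval)*(W//interval), C): each
    # group gathers tokens spaced `interval` apart, so self-attention inside
    # a group connects distant image positions (the wider receptive field).
    B, H, W, C = x.shape
    x = x.view(B, H // interval, interval, W // interval, interval, C)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(-1, (H // interval) * (W // interval), C)

def group_self_attention(groups, heads=4):
    # Plain multi-head self-attention within each token group
    # (a fresh module here only to keep the sketch self-contained).
    attn = torch.nn.MultiheadAttention(groups.shape[-1], heads, batch_first=True)
    out, _ = attn(groups, groups, groups)
    return out

x = torch.randn(1, 32, 32, 64)                       # toy feature map
dense_out = group_self_attention(dense_groups(x))    # local interactions
sparse_out = group_self_attention(sparse_groups(x))  # long-range interactions
```
Alternating the two groupings block by block is what lets the same architecture switch between local and global interactions.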
Related papers
- Look-Around Before You Leap: High-Frequency Injected Transformer for Image Restoration [46.96362010335177]
In this paper, we propose HIT, a simple yet effective High-frequency Injected Transformer for image restoration.
Specifically, we design a window-wise injection module (WIM), which incorporates abundant high-frequency details into the feature map, to provide reliable references for restoring high-quality images.
In addition, we introduce a spatial enhancement unit (SEU) to preserve essential spatial relationships that may be lost due to the computations carried out across channel dimensions in the BIM.
arXiv Detail & Related papers (2024-03-30T08:05:00Z)
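The abstract does not spell out how the WIM injects high-frequency detail; one plausible reading, offered only as a hedged sketch, extracts a high-pass residual from the input image and adds a projection of it to the feature map. The blur choice and the 1x1 projection are assumptions, not the paper's design.
```python
import torch
import torch.nn.functional as F

def high_frequency(img):
    # High-pass residual: the image minus a local average (a simple blur).
    return img - F.avg_pool2d(img, kernel_size=3, stride=1, padding=1)

class HFInject(torch.nn.Module):
    # Hypothetical injection step: lift high-frequency details to the feature
    # width and add them as a reference for restoring fine structures.
    def __init__(self, img_ch, feat_ch):
        super().__init__()
        self.proj = torch.nn.Conv2d(img_ch, feat_ch, kernel_size=1)
    def forward(self, feat, img):
        # feat and img are assumed to share spatial resolution in this sketch.
        return feat + self.proj(high_frequency(img))
```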
- HAT: Hybrid Attention Transformer for Image Restoration [61.74223315807691]
Transformer-based methods have shown impressive performance in image restoration tasks, such as image super-resolution and denoising.
We propose a new Hybrid Attention Transformer (HAT) to activate more input pixels for better restoration.
Our HAT achieves state-of-the-art performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-09-11T05:17:55Z)
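The summary gives only the name "hybrid attention"; the HAT paper pairs window self-attention with a channel attention branch whose globally pooled statistics draw on every pixel, which is one sense in which more input pixels are activated. A minimal squeeze-and-excitation-style channel attention sketch (the reduction ratio is an illustrative assumption):
```python
import torch

class ChannelAttention(torch.nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = torch.nn.Sequential(
            torch.nn.AdaptiveAvgPool2d(1),                       # global spatial pooling
            torch.nn.Conv2d(channels, channels // reduction, 1),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv2d(channels // reduction, channels, 1),
            torch.nn.Sigmoid(),
        )
    def forward(self, x):
        # Reweight channels with statistics computed over the whole image,
        # complementing the strictly local window self-attention.
        return x * self.gate(x)
```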
- T-former: An Efficient Transformer for Image Inpainting [50.43302925662507]
A class of attention-based network architectures, called Transformers, has shown significant performance in natural language processing.
In this paper, we design a novel attention mechanism whose cost scales linearly with image resolution, derived via Taylor expansion, and build a network called $T$-former on this attention for image inpainting.
Experiments on several benchmark datasets demonstrate that our proposed method achieves state-of-the-art accuracy while keeping the number of parameters and the computational complexity relatively low.
arXiv Detail & Related papers (2023-05-12T04:10:42Z)
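The summary describes the attention only as linear in the resolution via Taylor expansion. A common way to realize this, offered as a hedged sketch rather than the paper's exact formulation: approximate exp(q·k) by its first-order expansion 1 + q·k for L2-normalized q and k, which lets the attention sums factorize so the cost grows linearly with the token count.
```python
import torch
import torch.nn.functional as F

def taylor_linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (B, N, D); cost O(N * D^2) rather than the O(N^2 * D) of softmax.
    q = F.normalize(q, dim=-1)   # keeps q.k in [-1, 1], so weights 1 + q.k >= 0
    k = F.normalize(k, dim=-1)
    kv = torch.einsum('bnd,bne->bde', k, v)                      # sum_j k_j v_j^T
    num = v.sum(dim=1, keepdim=True) + torch.einsum('bnd,bde->bne', q, kv)
    den = k.shape[1] + torch.einsum('bnd,bd->bn', q, k.sum(dim=1))
    return num / (den.unsqueeze(-1) + eps)
```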
- Transformer Compressed Sensing via Global Image Tokens [4.722333456749269]
We propose a novel image decomposition that naturally embeds images into low-resolution inputs.
We replace CNN components in a well-known CS-MRI neural network with Transformer (TNN) blocks and demonstrate the improvements afforded by knowledge distillation (KD).
arXiv Detail & Related papers (2022-03-24T05:56:30Z)
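The abstract does not specify the decomposition; one natural reading, offered purely as an assumption, is a space-to-depth split in which strided sub-sampling turns an image into r×r complete low-resolution views that a Transformer can treat as global tokens.
```python
import torch
import torch.nn.functional as F

def to_low_res_inputs(img, r=2):
    # (B, C, H, W) -> (B, C*r*r, H/r, W/r): each channel group is a complete,
    # strided low-resolution view of the full image.
    return F.pixel_unshuffle(img, r)

img = torch.randn(1, 1, 64, 64)      # e.g. a single-channel MR image
views = to_low_res_inputs(img)       # four 32x32 global views of the image
```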
- Restormer: Efficient Transformer for High-Resolution Image Restoration [118.9617735769827]
Convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data.
Transformers have shown significant performance gains on natural language and high-level vision tasks.
Our model, named Restoration Transformer (Restormer), achieves state-of-the-art results on several image restoration tasks.
arXiv Detail & Related papers (2021-11-18T18:59:10Z)
- CAT: Cross Attention in Vision Transformer [39.862909079452294]
We propose a new attention mechanism in Transformer called Cross Attention.
It alternates attention within image patches, rather than over the whole image, to capture local information.
We build a hierarchical network called Cross Attention Transformer (CAT) for other vision tasks.
arXiv Detail & Related papers (2021-06-10T14:38:32Z)
- Less is More: Pay Less Attention in Vision Transformers [61.05787583247392]
The Less attention vIsion Transformer (LIT) builds upon the fact that convolutions, fully-connected layers, and self-attention have almost equivalent mathematical expressions for processing image patch sequences.
The proposed LIT achieves promising performance on image recognition tasks, including image classification, object detection and instance segmentation.
arXiv Detail & Related papers (2021-05-29T05:26:07Z)
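The near-equivalence the summary refers to is easy to verify for the simplest pair: a 1x1 convolution over a grid of patch tokens computes exactly the same map as a per-token fully-connected layer. A small self-contained check (shapes are illustrative):
```python
import torch

B, C, H, W = 2, 8, 4, 4
x = torch.randn(B, C, H, W)          # an H x W grid of C-dim patch tokens

fc = torch.nn.Linear(C, C)
conv = torch.nn.Conv2d(C, C, kernel_size=1)
conv.weight.data = fc.weight.data.view(C, C, 1, 1).clone()  # share weights
conv.bias.data = fc.bias.data.clone()

y_fc = fc(x.permute(0, 2, 3, 1))                 # FC applied token by token
y_conv = conv(x).permute(0, 2, 3, 1)             # 1x1 conv over the grid
print(torch.allclose(y_fc, y_conv, atol=1e-6))   # True: identical maps
```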
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
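The summary names "dense combinations of dilated convolutions" without giving the block design; a hedged sketch of the idea, with parallel 3x3 branches of increasing dilation summed and fused so a single block sees a wide context (branch count and dilation rates are illustrative, not the paper's exact block):
```python
import torch

class DilatedCombination(torch.nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        # Parallel 3x3 convolutions; padding = dilation keeps spatial size.
        self.branches = torch.nn.ModuleList(
            torch.nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = torch.nn.Conv2d(channels, channels, kernel_size=1)
    def forward(self, x):
        combined = sum(branch(x) for branch in self.branches)  # wide receptive field
        return x + self.fuse(combined)                         # residual connection
```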