CiaoSR: Continuous Implicit Attention-in-Attention Network for
Arbitrary-Scale Image Super-Resolution
- URL: http://arxiv.org/abs/2212.04362v3
- Date: Thu, 13 Apr 2023 07:50:41 GMT
- Authors: Jiezhang Cao, Qin Wang, Yongqin Xian, Yawei Li, Bingbing Ni, Zhiming
Pi, Kai Zhang, Yulun Zhang, Radu Timofte, Luc Van Gool
- Abstract summary: This paper proposes a continuous implicit attention-in-attention network, called CiaoSR.
We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features.
We embed a scale-aware attention module in this implicit attention network to exploit additional non-local information.
- Score: 158.2282163651066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning continuous image representations is recently gaining popularity for
image super-resolution (SR) because of its ability to reconstruct
high-resolution images with arbitrary scales from low-resolution inputs.
Existing methods mostly ensemble nearby features to predict the new pixel at
any queried coordinate in the SR image. Such a local ensemble suffers from two
limitations: i) it has no learnable parameters and neglects the similarity of
the visual features; ii) it has a limited receptive field and cannot aggregate
relevant features from a larger field of the image. To
address these issues, this paper proposes a continuous implicit
attention-in-attention network, called CiaoSR. We explicitly design an implicit
attention network to learn the ensemble weights for the nearby local features.
Furthermore, we embed a scale-aware attention module in this implicit attention
network to exploit additional non-local information. Extensive experiments on
benchmark datasets demonstrate that CiaoSR significantly outperforms existing
single-image SR methods with the same backbone. In addition, CiaoSR also
achieves state-of-the-art performance on the arbitrary-scale SR task. The
effectiveness of the method is also demonstrated in the real-world SR setting.
More importantly, CiaoSR can be flexibly integrated into any backbone to
improve SR performance.
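The core idea in the abstract, replacing the fixed, parameter-free local ensemble with attention weights learned from the query coordinate and the nearby latent codes, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual architecture: the projection matrices `Wq` and `Wk`, the offset-based query embedding, and the four-neighbor setup are all simplifying assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_ensemble(query_xy, neighbor_xy, neighbor_feats, Wq, Wk):
    """Weight the nearest LR latent codes by learned attention instead of
    fixed bilinear/area weights (a sketch of the implicit-attention idea).

    query_xy:       (2,)   continuous coordinate of the HR pixel to predict
    neighbor_xy:    (4, 2) coordinates of the four nearest LR latent codes
    neighbor_feats: (4, C) the latent codes themselves
    Wq, Wk:         stand-in learned projections (hypothetical, for
                    illustration only)
    """
    rel = neighbor_xy - query_xy                     # (4, 2) relative offsets
    q = rel @ Wq                                     # (4, D) coordinate-based queries
    k = neighbor_feats @ Wk                          # (4, D) feature-based keys
    scores = (q * k).sum(axis=1) / np.sqrt(q.shape[1])
    w = softmax(scores)                              # learned ensemble weights
    return w @ neighbor_feats                        # (C,) ensembled feature
```

Contrast this with the plain local ensemble the abstract criticizes, where the weights are fixed area ratios of the query cell and never look at the features themselves; here the weights depend on both the query position and the content of the neighboring latent codes.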
Related papers
- UnmixingSR: Material-aware Network with Unsupervised Unmixing as Auxiliary Task for Hyperspectral Image Super-resolution [5.167168688234238]
This paper proposes a component-aware hyperspectral image (HSI) super-resolution network called UnmixingSR.
We use the relationship between LR abundances and HR abundances to boost the stability of our method in solving SR problems.
Experimental results show that incorporating the unmixing process into the SR problem as an auxiliary task is feasible and rational.
arXiv Detail & Related papers (2024-07-09T03:41:02Z) - AnySR: Realizing Image Super-Resolution as Any-Scale, Any-Resource [84.74855803555677]
We introduce AnySR, which rebuilds existing arbitrary-scale SR methods into any-scale, any-resource implementations.
Our AnySR innovates in: 1) building arbitrary-scale tasks as any-resource implementations, reducing resource requirements for smaller scales without additional parameters; 2) enhancing any-scale performance in a feature-interweaving fashion.
Results show that our AnySR implements SISR tasks in a more computation-efficient fashion and performs on par with existing arbitrary-scale SISR methods.
arXiv Detail & Related papers (2024-07-05T04:00:14Z) - Beyond Image Super-Resolution for Image Recognition with Task-Driven Perceptual Loss [47.36902705025445]
Super-Resolution for Image Recognition (SR4IR) guides the generation of SR images beneficial to image recognition performance.
In this paper, we demonstrate that our SR4IR achieves outstanding task performance by generating SR images useful for a specific image recognition task.
arXiv Detail & Related papers (2024-04-02T06:52:31Z) - RBSR: Efficient and Flexible Recurrent Network for Burst
Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z) - Super-Resolution Neural Operator [5.018040244860608]
We propose a framework that can resolve high-resolution (HR) images at arbitrary scales from their low-resolution (LR) counterparts.
Treating the LR-HR image pairs as continuous functions approximated with different grid sizes, SRNO learns the mapping between the corresponding function spaces.
Experiments show that SRNO outperforms existing continuous SR methods in terms of both accuracy and running time.
arXiv Detail & Related papers (2023-03-05T06:17:43Z) - Lightweight Stepless Super-Resolution of Remote Sensing Images via
Saliency-Aware Dynamic Routing Strategy [15.587621728422414]
Deep learning algorithms have greatly improved the performance of remote sensing image (RSI) super-resolution (SR)
However, increasing network depth and parameters cause a huge burden of computing and storage.
We propose a saliency-aware dynamic routing network (SalDRN) for lightweight and stepless SR of RSIs.
arXiv Detail & Related papers (2022-10-14T07:49:03Z) - Hierarchical Similarity Learning for Aliasing Suppression Image
Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and remits the aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z) - Lightweight image super-resolution with enhanced CNN [82.36883027158308]
Deep convolutional neural networks (CNNs) with strong expressive ability have achieved impressive performance on single image super-resolution (SISR).
We propose a lightweight enhanced SR CNN (LESRCNN) with three successive sub-blocks: an information extraction and enhancement block (IEEB), a reconstruction block (RB), and an information refinement block (IRB).
IEEB extracts hierarchical low-resolution (LR) features and aggregates them step by step to increase the memory ability of the shallow layers for the deep layers in SISR.
RB converts low-frequency features into high-frequency features by fusing global
arXiv Detail & Related papers (2020-07-08T18:03:40Z) - Image Super-Resolution with Cross-Scale Non-Local Attention and
Exhaustive Self-Exemplars Mining [66.82470461139376]
We propose the first Cross-Scale Non-Local (CS-NL) attention module with integration into a recurrent neural network.
By combining the new CS-NL prior with local and in-scale non-local priors in a powerful recurrent fusion cell, we can find more cross-scale feature correlations within a single low-resolution image.
arXiv Detail & Related papers (2020-06-02T07:08:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.