Content-adaptive Representation Learning for Fast Image Super-resolution
- URL: http://arxiv.org/abs/2105.09645v1
- Date: Thu, 20 May 2021 10:24:29 GMT
- Title: Content-adaptive Representation Learning for Fast Image Super-resolution
- Authors: Yukai Shi, Jinghui Qin
- Abstract summary: We address the efficiency issue in image SR by incorporating a patch-wise rolling network to content-adaptively recover images according to difficulty levels.
In contrast to existing studies that ignore difficulty diversity, we adopt different stages of a neural network to perform image restoration.
Our model not only shows a significant acceleration but also maintains state-of-the-art performance.
- Score: 6.5468866820512215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional networks have attracted great attention in image
restoration and enhancement. Generally, restoration quality has been improved
by stacking more and more convolutional blocks. However, these methods mostly
learn a single model to handle all images and ignore difficulty diversity: an
area of the image with high-frequency content tends to lose more information
during compression, while a low-frequency area tends to lose less. In this
article, we address the efficiency issue in image SR by incorporating a
patch-wise rolling network (PRN) to content-adaptively recover images
according to their difficulty levels. In contrast to existing studies that
ignore difficulty diversity, we adopt different stages of a neural network to
perform image restoration. In addition, we propose a rolling strategy that
uses the parameters of each stage more flexibly. Extensive experiments
demonstrate that our model not only shows a significant acceleration but also
maintains state-of-the-art performance.
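As a rough illustration of the content-adaptive idea in the abstract, the following is a minimal PyTorch sketch, assuming a staged backbone with one reconstruction head per stage and a simple Laplacian-energy heuristic as the per-patch difficulty score; the module names, patch size, and thresholds are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StagedSRNet(nn.Module):
    """Toy x2 SR backbone with several residual stages; every stage has its own
    reconstruction head, so easy patches can exit early and hard patches run deeper."""

    def __init__(self, channels=32, num_stages=3, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(num_stages)
        ])
        # One pixel-shuffle reconstruction head per stage.
        self.tails = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),
            )
            for _ in range(num_stages)
        ])

    def forward(self, patch, exit_stage):
        feat = self.head(patch)
        for i, stage in enumerate(self.stages):
            feat = feat + stage(feat)           # residual stage
            if i == exit_stage:
                return self.tails[i](feat)      # early exit for easy patches
        return self.tails[-1](feat)


def patch_difficulty(patch):
    """Heuristic difficulty score: mean absolute Laplacian response, i.e. how much
    high-frequency content the patch contains (batch size 1 assumed below)."""
    gray = patch.mean(dim=1, keepdim=True)
    lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                       device=patch.device).view(1, 1, 3, 3)
    return F.conv2d(gray, lap, padding=1).abs().mean(dim=(1, 2, 3))


def route_patches(model, lr_image, patch=32, scale=2, thresholds=(0.02, 0.06)):
    """Split the LR image into patches, score each one, send it to an exit stage
    chosen by its difficulty, and stitch the SR patches back together."""
    b, c, h, w = lr_image.shape
    out = torch.zeros(b, c, h * scale, w * scale, device=lr_image.device)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            p = lr_image[:, :, y:y + patch, x:x + patch]
            score = patch_difficulty(p).item()
            stage = sum(score > t for t in thresholds)  # 0 = easy, 2 = hard
            sr = model(p, exit_stage=stage)
            out[:, :, y * scale:(y + patch) * scale,
                x * scale:(x + patch) * scale] = sr
    return out


if __name__ == "__main__":
    net = StagedSRNet().eval()
    lr = torch.rand(1, 3, 64, 64)
    with torch.no_grad():
        print(route_patches(net, lr).shape)  # torch.Size([1, 3, 128, 128])
```

In practice each exit would need its own supervision during training (e.g., an SR loss on every tail) so that early exits still produce usable reconstructions; the paper's rolling strategy for sharing stage parameters is not reproduced in this sketch.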
Related papers
- Multi-Scale Representation Learning for Image Restoration with State-Space Model [13.622411683295686]
We propose a novel Multi-Scale State-Space Model-based (MS-Mamba) for efficient image restoration.
Our proposed method achieves new state-of-the-art performance while maintaining low computational complexity.
arXiv Detail & Related papers (2024-08-19T16:42:58Z) - Dilated Strip Attention Network for Image Restoration [5.65781374269726]
We propose a dilated strip attention network (DSAN) for image restoration.
By employing the DSA operation horizontally and vertically, each location can harvest the contextual information from a much wider region.
Our experiments show that our DSAN outperforms state-of-the-art algorithms on several image restoration tasks.
arXiv Detail & Related papers (2024-07-26T09:12:30Z) - Bracketing Image Restoration and Enhancement with High-Low Frequency Decomposition [44.80645807358389]
HLNet is a Bracketing Image Restoration and Enhancement method based on high-low frequency decomposition.
We employ two modules for feature extraction: shared weight modules and non-shared weight modules.
In the non-shared weight modules, we introduce the High-Low Frequency Decomposition Block (HLFDB), which employs different methods to handle high-low frequency information.
arXiv Detail & Related papers (2024-04-21T05:11:37Z) - Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration [50.81374327480445]
We introduce a novel concept positing that intricate image degradation can be represented in terms of elementary degradation.
We propose the Unified-Width Adaptive Dynamic Network (U-WADN), consisting of two pivotal components: a Width Adaptive Backbone (WAB) and a Width Selector (WS).
The proposed U-WADN achieves better performance while simultaneously reducing up to 32.3% of FLOPs and providing approximately 15.7% real-time acceleration.
arXiv Detail & Related papers (2024-01-24T04:25:12Z) - HAT: Hybrid Attention Transformer for Image Restoration [61.74223315807691]
Transformer-based methods have shown impressive performance in image restoration tasks, such as image super-resolution and denoising.
We propose a new Hybrid Attention Transformer (HAT) to activate more input pixels for better restoration.
Our HAT achieves state-of-the-art performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-09-11T05:17:55Z) - RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z) - Wide & deep learning for spatial & intensity adaptive image restoration [16.340992967330603]
We propose an ingenious and efficient multi-frame image restoration network (DparNet) with wide & deep architecture.
The degradation prior is directly learned from degraded images in form of key degradation parameter matrix.
The wide & deep architecture in DparNet enables the learned parameters to directly modulate the final restoring results.
arXiv Detail & Related papers (2023-05-30T03:24:09Z) - ClassPruning: Speed Up Image Restoration Networks by Dynamic N:M Pruning [25.371802581339576]
ClassPruning can help existing methods save approximately 40% FLOPs while maintaining performance.
We propose a novel training strategy along with two additional loss terms to stabilize training and improve performance.
arXiv Detail & Related papers (2022-11-10T11:14:15Z) - Accurate Image Restoration with Attention Retractable Transformer [50.05204240159985]
We propose Attention Retractable Transformer (ART) for image restoration.
ART presents both dense and sparse attention modules in the network.
We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks.
arXiv Detail & Related papers (2022-10-04T07:35:01Z) - Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose a rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z) - Contrastive Learning with Stronger Augmentations [63.42057690741711]
We propose a general framework called Contrastive Learning with Stronger Augmentations (CLSA) to complement current contrastive learning approaches.
Here, the distribution divergence between the weakly and strongly augmented images over the representation bank is adopted to supervise the retrieval of strongly augmented queries.
Experiments showed that the information from the strongly augmented images can significantly boost performance.
arXiv Detail & Related papers (2021-04-15T18:40:04Z)