Cross-SRN: Structure-Preserving Super-Resolution Network with Cross Convolution
- URL: http://arxiv.org/abs/2201.01458v2
- Date: Fri, 7 Jan 2022 09:40:00 GMT
- Title: Cross-SRN: Structure-Preserving Super-Resolution Network with Cross Convolution
- Authors: Yuqing Liu, Qi Jia, Xin Fan, Shanshe Wang, Siwei Ma, Wen Gao
- Abstract summary: It is challenging to restore low-resolution (LR) images to super-resolution (SR) images with correct and clear details.
Existing deep learning works largely neglect the inherent structural information of images.
We design a hierarchical feature exploitation network to probe and preserve structural information.
- Score: 64.76159006851151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is challenging to restore low-resolution (LR) images to super-resolution
(SR) images with correct and clear details. Existing deep learning methods
largely neglect the inherent structural information of images, which plays an
important role in the visual perception of SR results. In this paper, we design a
hierarchical feature exploitation network to probe and preserve structural
information in a multi-scale feature fusion manner. First, we propose a cross
convolution upon traditional edge detectors to localize and represent edge
features. Then, cross convolution blocks (CCBs) are designed with feature
normalization and channel attention to consider the inherent correlations of
features. Finally, we leverage multi-scale feature fusion group (MFFG) to embed
the cross convolution blocks and develop the relations of structural features
at different scales hierarchically, yielding a lightweight structure-preserving
network named Cross-SRN. Experimental results demonstrate that Cross-SRN
achieves competitive or superior restoration performance compared with
state-of-the-art methods with accurate and clear structural details. Moreover,
we set a criterion to select images with rich structural textures. The proposed
Cross-SRN outperforms the state-of-the-art methods on the selected benchmark,
which demonstrates that our network has a significant advantage in preserving
edges.
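The abstract describes a cross convolution built upon traditional edge detectors to localize edge features. The paper's learned operator is not detailed here, so the following is only a minimal illustrative sketch of the classical idea it builds on: a pair of 1-D difference kernels (horizontal and vertical, forming a cross) whose absolute responses are fused into an edge map. All function names are hypothetical.

```python
# Hedged sketch: a "cross" of two 1-D difference kernels, in the spirit of
# classic edge detectors. The paper's cross convolution uses learned kernels;
# this only illustrates the horizontal/vertical decomposition idea.

def conv_valid(img, kernel):
    """2-D valid correlation of a grayscale image (list of lists) with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += kernel[di][dj] * img[i + di][j + dj]
            row.append(s)
        out.append(row)
    return out

def cross_conv(img):
    """Fuse responses of a 1x3 horizontal and a 3x1 vertical difference kernel."""
    horiz = [[-1.0, 0.0, 1.0]]        # 1x3: responds to vertical edges
    vert = [[-1.0], [0.0], [1.0]]     # 3x1: responds to horizontal edges
    hmap = conv_valid(img, horiz)     # shape h x (w-2)
    vmap = conv_valid(img, vert)      # shape (h-2) x w
    # Crop both maps to the common interior region and sum absolute responses.
    return [[abs(hmap[i + 1][j]) + abs(vmap[i][j + 1])
             for j in range(len(img[0]) - 2)]
            for i in range(len(img) - 2)]
```

On a vertical step edge (e.g. a 4x4 image whose left half is 0 and right half is 1), the horizontal kernel fires along the step while the vertical kernel stays silent, so the fused map localizes the edge.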
Related papers
- Hybrid Feature Collaborative Reconstruction Network for Few-Shot Fine-Grained Image Classification [6.090855292102877]
We design a new Hybrid Feature Collaborative Reconstruction Network (HFCR-Net) for few-shot fine-grained image classification.
We fuse the channel features and the spatial features to increase the inter-class differences.
Our experiments on three widely used fine-grained datasets demonstrate the effectiveness and superiority of our approach.
arXiv Detail & Related papers (2024-07-02T10:14:00Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Cross-Spatial Pixel Integration and Cross-Stage Feature Fusion Based Transformer Network for Remote Sensing Image Super-Resolution [13.894645293832044]
Transformer-based models have shown competitive performance in remote sensing image super-resolution (RSISR).
We propose a novel transformer architecture called Cross-Spatial Pixel Integration and Cross-Stage Feature Fusion Based Transformer Network (SPIFFNet) for RSISR.
Our proposed model effectively enhances global cognition and understanding of the entire image, facilitating efficient integration of features cross-stages.
arXiv Detail & Related papers (2023-07-06T13:19:06Z)
- Cross-View Hierarchy Network for Stereo Image Super-Resolution [14.574538513341277]
Stereo image super-resolution aims to improve the quality of high-resolution stereo image pairs by exploiting complementary information across views.
We propose a novel method, named Cross-View-Hierarchy Network for Stereo Image Super-Resolution (CVHSSR).
CVHSSR achieves better stereo image super-resolution performance than other state-of-the-art methods while using fewer parameters.
arXiv Detail & Related papers (2023-04-13T03:11:30Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Structure-Preserving Image Super-Resolution [94.16949589128296]
Structures matter in single image super-resolution (SISR).
Recent studies have promoted the development of SISR by recovering photo-realistic images.
However, there are still undesired structural distortions in the recovered images.
arXiv Detail & Related papers (2021-09-26T08:48:27Z)
- Hierarchical Deep CNN Feature Set-Based Representation Learning for Robust Cross-Resolution Face Recognition [59.29808528182607]
Cross-resolution face recognition (CRFR) is important in intelligent surveillance and biometric forensics.
Existing shallow learning-based and deep learning-based methods focus on mapping the HR-LR face pairs into a joint feature space.
In this study, we aim to fully exploit the multi-level deep convolutional neural network (CNN) feature set for robust CRFR.
arXiv Detail & Related papers (2021-03-25T14:03:42Z)
- Sequential Hierarchical Learning with Distribution Transformation for Image Super-Resolution [83.70890515772456]
We build a sequential hierarchical learning super-resolution network (SHSR) for effective image SR.
We consider the inter-scale correlations of features, and devise a sequential multi-scale block (SMB) to progressively explore the hierarchical information.
Experiment results show SHSR achieves superior quantitative performance and visual quality to state-of-the-art methods.
arXiv Detail & Related papers (2020-07-19T01:35:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.