Super-Resolution Neural Operator
- URL: http://arxiv.org/abs/2303.02584v1
- Date: Sun, 5 Mar 2023 06:17:43 GMT
- Title: Super-Resolution Neural Operator
- Authors: Min Wei, Xuesong Zhang
- Abstract summary: We propose Super-Resolution Neural Operator (SRNO), a framework that can resolve high-resolution (HR) images at arbitrary scales from their low-resolution (LR) counterparts.
Treating LR-HR image pairs as continuous functions approximated with different grid sizes, SRNO learns the mapping between the corresponding function spaces.
Experiments show that SRNO outperforms existing continuous SR methods in terms of both accuracy and running time.
- Score: 5.018040244860608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Super-resolution Neural Operator (SRNO), a deep operator learning
framework that can resolve high-resolution (HR) images at arbitrary scales from
their low-resolution (LR) counterparts. Treating LR-HR image pairs as
continuous functions approximated with different grid sizes, SRNO learns the
mapping between the corresponding function spaces. From the perspective of
approximation theory, SRNO first embeds the LR input into a higher-dimensional
latent representation space, trying to capture sufficient basis functions, and
then iteratively approximates the implicit image function with a kernel
integral mechanism, followed by a final dimensionality reduction step to
generate the RGB representation at the target coordinates. The key
characteristics distinguishing SRNO from prior continuous SR works are: 1) the
kernel integral in each layer is efficiently implemented via Galerkin-type
attention, which possesses non-local properties in the spatial domain and
therefore benefits the grid-free continuum; and 2) the multilayer attention
architecture allows for dynamic latent basis updates, which are crucial for
SR problems to "hallucinate" high-frequency information from the LR image.
Experiments show that SRNO outperforms existing continuous SR methods in terms
of both accuracy and running time. Our code is at
https://github.com/2y7c3/Super-Resolution-Neural-Operator
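The lift-iterate-project pipeline and the Galerkin-type attention described in the abstract can be summarized in code. The sketch below is a minimal PyTorch illustration, not the released implementation: the LR feature encoder is omitted, the coordinate conditioning and layer sizes are assumptions, and only the linear-complexity kernel-integral step is spelled out.

```python
import torch
import torch.nn as nn

class GalerkinAttention(nn.Module):
    """Kernel integral approximated as q (k^T v) / n with layer-normalized k and v."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.norm_k = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, z):                          # z: (batch, n_points, dim)
        q = self.q(z)
        k = self.norm_k(self.k(z))
        v = self.norm_v(self.v(z))
        # A (dim x dim) matrix replaces the usual (n x n) attention map,
        # so the cost is linear in the number of sampled points.
        return z + q @ (k.transpose(-2, -1) @ v) / z.shape[1]

class SRNOSketch(nn.Module):
    """Lift to a latent space, iterate kernel-integral layers, project to RGB."""
    def __init__(self, feat_dim=64, dim=128, depth=2):
        super().__init__()
        self.lift = nn.Linear(feat_dim + 2, dim)   # LR features + (x, y) query coords
        self.blocks = nn.ModuleList([GalerkinAttention(dim) for _ in range(depth)])
        self.project = nn.Linear(dim, 3)           # dimensionality reduction to RGB

    def forward(self, feats, coords):
        # feats: (B, n, feat_dim) encoder features sampled at the query points
        # coords: (B, n, 2) continuous target coordinates
        z = self.lift(torch.cat([feats, coords], dim=-1))
        for blk in self.blocks:
            z = blk(z)
        return self.project(z)                     # RGB values at the query coordinates
```

Because the attention contracts over the feature dimension rather than over the set of points, the same module can be queried at any set of continuous coordinates, which is what makes arbitrary-scale decoding possible.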
Related papers
- Latent Diffusion, Implicit Amplification: Efficient Continuous-Scale Super-Resolution for Remote Sensing Images [7.920423405957888]
E$^2$DiffSR achieves superior objective metrics and visual quality compared to the state-of-the-art SR methods.
It reduces the inference time of diffusion-based SR methods to a level comparable to that of non-diffusion methods.
arXiv Detail & Related papers (2024-10-30T09:14:13Z)
- CiaoSR: Continuous Implicit Attention-in-Attention Network for Arbitrary-Scale Image Super-Resolution [158.2282163651066]
This paper proposes a continuous implicit attention-in-attention network, called CiaoSR.
We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features.
We embed a scale-aware attention in this implicit attention network to exploit additional non-local information.
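As a rough illustration of that idea (not the CiaoSR code; the tensor layout, neighbourhood size, and scale conditioning are assumptions), an attention module can predict the ensemble weights over the latent features nearest to each query coordinate:

```python
import torch
import torch.nn as nn

class ImplicitAttentionEnsemble(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.score = nn.Sequential(               # scores the K neighbours of each query
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.decode = nn.Sequential(              # blended feature -> RGB
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, neigh_feats, rel_coords, scale):
        # neigh_feats: (B, Q, K, feat_dim)  latent codes of the K nearest LR cells
        # rel_coords:  (B, Q, K, 2)         query coordinate minus each cell's coordinate
        # scale:       (B, Q, K, 1)         target upsampling factor
        logits = self.score(torch.cat([neigh_feats, rel_coords, scale], dim=-1))
        w = torch.softmax(logits, dim=2)          # learned ensemble weights over neighbours
        return self.decode((w * neigh_feats).sum(dim=2))
```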
arXiv Detail & Related papers (2022-12-08T15:57:46Z)
- Learning Detail-Structure Alternative Optimization for Blind Super-Resolution [69.11604249813304]
We propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization for blind SR without incorporating a blur-kernel prior.
In our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures.
Our method achieves state-of-the-art performance compared with existing methods.
arXiv Detail & Related papers (2022-12-03T14:44:17Z)
- SRDiff: Single Image Super-Resolution with Diffusion Probabilistic Models [19.17571465274627]
Single image super-resolution (SISR) aims to reconstruct high-resolution (HR) images from the given low-resolution (LR) ones.
We propose a novel single image super-resolution diffusion probabilistic model (SRDiff).
SRDiff is optimized with a variant of the variational bound on the data likelihood and can provide diverse and realistic SR predictions.
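A compact sketch of such a conditional diffusion objective is given below. It is a simplified DDPM-style noise-prediction loss on the HR-minus-upsampled-LR residual; the `denoiser` network and the noise schedule are placeholders, not the exact SRDiff formulation.

```python
import torch
import torch.nn.functional as F

def diffusion_sr_loss(denoiser, hr, lr_up, betas):
    """hr, lr_up: (B, 3, H, W) HR target and upsampled LR input; betas: (T,) noise schedule."""
    residual = hr - lr_up                                # diffuse the HR-LR residual
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, betas.shape[0], (hr.shape[0],), device=hr.device)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(residual)
    x_t = a.sqrt() * residual + (1 - a).sqrt() * noise   # forward-diffused sample at step t
    return F.mse_loss(denoiser(x_t, t, lr_up), noise)    # predict the added noise, conditioned on LR
```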
arXiv Detail & Related papers (2021-04-30T12:31:25Z)
- UltraSR: Spatial Encoding is a Missing Key for Implicit Image Function-based Arbitrary-Scale Super-Resolution [74.82282301089994]
In this work, we propose UltraSR, a simple yet effective new network design based on implicit image functions.
We show that spatial encoding is indeed a missing key towards the next-stage high-accuracy implicit image function.
Our UltraSR sets new state-of-the-art performance on the DIV2K benchmark under all super-resolution scales.
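The "spatial encoding" idea can be illustrated with a standard Fourier-feature embedding of the query coordinates; this is a generic sketch, and the frequencies and exact encoding used by UltraSR may differ.

```python
import torch

def spatial_encoding(coords, n_freqs=8):
    """coords: (..., 2) in [-1, 1] -> (..., 4 * n_freqs) sin/cos features."""
    freqs = 2.0 ** torch.arange(n_freqs, device=coords.device) * torch.pi
    angles = coords.unsqueeze(-1) * freqs      # (..., 2, n_freqs) per-axis frequencies
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(-2)                     # concatenated with latent features before the MLP
```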
arXiv Detail & Related papers (2021-03-23T17:36:42Z)
- Real Image Super Resolution Via Heterogeneous Model Ensemble using GP-NAS [63.48801313087118]
We propose a new method for image super-resolution using a deep residual network with dense skip connections.
The proposed method won the first place in all three tracks of the AIM 2020 Real Image Super-Resolution Challenge.
arXiv Detail & Related papers (2020-09-02T22:33:23Z)
- Lightweight image super-resolution with enhanced CNN [82.36883027158308]
Deep convolutional neural networks (CNNs) with strong expressive ability have achieved impressive performance on single image super-resolution (SISR).
We propose a lightweight enhanced SR CNN (LESRCNN) with three successive sub-blocks: an information extraction and enhancement block (IEEB), a reconstruction block (RB), and an information refinement block (IRB).
The IEEB extracts hierarchical low-resolution (LR) features and aggregates them step by step to strengthen the memory of shallow layers in deep layers for SISR.
The RB converts low-frequency features into high-frequency features by fusing global and local features.
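A coarse structural sketch of that three-block layout is shown below; the layer counts and widths are placeholders, not the LESRCNN configuration.

```python
import torch.nn as nn

class LESRCNNSketch(nn.Module):
    def __init__(self, ch=64, scale=4):
        super().__init__()
        self.ieeb = nn.Sequential(                       # information extraction & enhancement (LR space)
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.rb = nn.Sequential(                         # reconstruction: LR features -> HR grid
            nn.Conv2d(ch, ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale), nn.ReLU())
        self.irb = nn.Conv2d(ch, 3, 3, padding=1)        # information refinement -> RGB

    def forward(self, x):
        return self.irb(self.rb(self.ieeb(x)))
```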
arXiv Detail & Related papers (2020-07-08T18:03:40Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
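A minimal sketch of the zero-centric residual idea (the backbone and the fusion with the spectral input are abstracted away as assumptions): the predicted residual has its per-channel mean removed so it carries only high-frequency detail that is added back onto the upsampled input.

```python
import torch.nn as nn

class ZeroCentricResidual(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone                            # any image-to-image network

    def forward(self, x_up):                                # x_up: upsampled low-resolution input
        res = self.backbone(x_up)
        res = res - res.mean(dim=(-2, -1), keepdim=True)    # enforce zero mean per channel
        return x_up + res
```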
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
- Adaptive Weighted Attention Network with Camera Spectral Sensitivity Prior for Spectral Reconstruction from RGB Images [22.26917280683572]
We propose a novel adaptive weighted attention network (AWAN) for spectral reconstruction.
AWCA and PSNL modules are developed to reallocate channel-wise feature responses.
In the NTIRE 2020 Spectral Reconstruction Challenge, our entries obtain the 1st ranking on the Clean track and the 3rd place on the Real World track.
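As a generic illustration of "reallocating channel-wise feature responses" (a squeeze-and-excitation-style gate; the actual AWCA and PSNL modules differ in detail):

```python
import torch.nn as nn

class ChannelReweight(nn.Module):
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # one global descriptor per channel
        self.gate = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):                                    # x: (B, C, H, W)
        return x * self.gate(self.pool(x))                   # rescale each channel's response
```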
arXiv Detail & Related papers (2020-05-19T09:21:01Z)