Fast Adaptation to Super-Resolution Networks via Meta-Learning
- URL: http://arxiv.org/abs/2001.02905v3
- Date: Tue, 25 Aug 2020 09:24:50 GMT
- Title: Fast Adaptation to Super-Resolution Networks via Meta-Learning
- Authors: Seobin Park, Jinsu Yoo, Donghyeon Cho, Jiwon Kim and Tae Hyun Kim
- Abstract summary: In this work, we show that the performance of single-image super-resolution (SISR) can be further improved without changing the architecture of conventional SR networks.
In the training stage, we train the network via meta-learning; thus, the network can quickly adapt to any input image at test time.
We demonstrate that the proposed model-agnostic approach consistently improves the performance of conventional SR networks on various benchmark SR datasets.
- Score: 24.637337634643885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional supervised super-resolution (SR) approaches are trained with
massive external SR datasets but fail to exploit desirable properties of the
given test image. On the other hand, self-supervised SR approaches utilize the
internal information within a test image but suffer from computational
complexity at run time. In this work, we show that the performance of
single-image super-resolution (SISR) can be further improved without changing
the architecture of conventional SR networks, by practically exploiting
additional information available in the input image. In the training stage, we train the network via
meta-learning; thus, the network can quickly adapt to any input image at test
time. Then, in the test stage, parameters of this meta-learned network are
rapidly fine-tuned with only a few iterations by only using the given
low-resolution image. The adaptation at test time takes full advantage of the
patch-recurrence property observed in natural images. Our method effectively
handles unknown SR kernels and can be applied to any existing model. We
demonstrate that the proposed model-agnostic approach consistently improves the
performance of conventional SR networks on various benchmark SR datasets.
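The two-level recipe in the abstract (meta-train for fast adaptation, then fine-tune on the test input alone) can be sketched in miniature. The following is a hypothetical first-order MAML-style illustration, not the authors' code: the "network" is a single weight `w` predicting `y = w * x`, each "task" stands in for one image with its own ideal `w` (its own degradation), and the test-time pairs stand in for the self-supervised pairs the method derives from the low-resolution input via patch recurrence.

```python
def loss_grad(w, x, y):
    # Gradient of 0.5 * (w*x - y)^2 with respect to w.
    return (w * x - y) * x

def adapt(w, pairs, lr=0.1, steps=3):
    # Inner loop: a few gradient steps on task-specific pairs.
    for _ in range(steps):
        g = sum(loss_grad(w, x, y) for x, y in pairs) / len(pairs)
        w = w - lr * g
    return w

def meta_train(tasks, meta_lr=0.05, epochs=200):
    # Outer loop: move the shared initialization using the loss of the
    # adapted parameters on held-out pairs (first-order update).
    w = 0.0
    for _ in range(epochs):
        for train_pairs, val_pairs in tasks:
            w_task = adapt(w, train_pairs)
            g = sum(loss_grad(w_task, x, y)
                    for x, y in val_pairs) / len(val_pairs)
            w = w - meta_lr * g
    return w

# Three training "images", each with its own ground-truth mapping.
tasks = []
for w_true in (1.5, 2.0, 2.5):
    pairs = [(x, w_true * x) for x in (0.5, 1.0, 1.5, 2.0)]
    tasks.append((pairs[:2], pairs[2:]))

w_meta = meta_train(tasks)

# Unseen test "image" (w_true = 1.8): adapt with only a few iterations,
# using only pairs built from the input itself, then predict.
test_pairs = [(x, 1.8 * x) for x in (0.6, 1.2, 1.8)]
w_adapted = adapt(w_meta, test_pairs, steps=5)
```

In the paper the inner loop runs on a full SR network and the test-time pairs come from the given low-resolution image itself; the toy scalar model only illustrates the two-level optimization that makes a handful of fine-tuning iterations sufficient.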
Related papers
- Rethinking Image Super-Resolution from Training Data Perspectives [54.28824316574355]
We investigate the understudied effect of the training data used for image super-resolution (SR).
With this, we propose an automated image evaluation pipeline.
We find that datasets with (i) low compression artifacts, (ii) high within-image diversity as judged by the number of different objects, and (iii) a large number of images from ImageNet or PASS all positively affect SR performance.
arXiv Detail & Related papers (2024-09-01T16:25:04Z)
- Efficient Test-Time Adaptation for Super-Resolution with Second-Order Degradation and Reconstruction [62.955327005837475]
Image super-resolution (SR) aims to learn a mapping from low-resolution (LR) to high-resolution (HR) using paired HR-LR training images.
We present an efficient test-time adaptation framework for SR, named SRTTA, which is able to quickly adapt SR models to test domains with different/unknown degradation types.
arXiv Detail & Related papers (2023-10-29T13:58:57Z)
- ICF-SRSR: Invertible scale-Conditional Function for Self-Supervised Real-world Single Image Super-Resolution [60.90817228730133]
Single image super-resolution (SISR) is a challenging problem that aims to up-sample a given low-resolution (LR) image to a high-resolution (HR) counterpart.
Recent approaches are trained on simulated LR images degraded by simplified down-sampling operators.
We propose a novel Invertible scale-Conditional Function (ICF) which can scale an input image and then restore the original input with different scale conditions.
arXiv Detail & Related papers (2023-07-24T12:42:45Z)
- CiaoSR: Continuous Implicit Attention-in-Attention Network for Arbitrary-Scale Image Super-Resolution [158.2282163651066]
This paper proposes a continuous implicit attention-in-attention network, called CiaoSR.
We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features.
We embed a scale-aware attention in this implicit attention network to exploit additional non-local information.
arXiv Detail & Related papers (2022-12-08T15:57:46Z)
- Learning Detail-Structure Alternative Optimization for Blind Super-Resolution [69.11604249813304]
We propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization without blur kernel prior incorporation for blind SR.
In our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures.
Our method achieves state-of-the-art results compared with existing methods.
arXiv Detail & Related papers (2022-12-03T14:44:17Z)
- Test-Time Adaptation for Super-Resolution: You Only Need to Overfit on a Few More Images [12.846479438896338]
We propose a simple yet universal approach to improve the perceptual quality of the HR prediction from a pre-trained SR network.
We show the effects of fine-tuning on images in terms of the perceptual quality and PSNR/SSIM values.
arXiv Detail & Related papers (2021-04-06T16:50:52Z)
- Self-Supervised Adaptation for Video Super-Resolution [7.26562478548988]
Single-image super-resolution (SISR) networks can adapt their network parameters to specific input images.
We present a new learning algorithm that allows conventional video super-resolution (VSR) networks to adapt their parameters to test video frames.
arXiv Detail & Related papers (2021-03-18T08:30:24Z)
- Deep Iterative Residual Convolutional Network for Single Image Super-Resolution [31.934084942626257]
We propose a deep Iterative Super-Resolution Residual Convolutional Network (ISRResCNet).
It exploits the powerful image regularization and large-scale optimization techniques by training the deep network in an iterative manner with a residual learning approach.
With only a few trainable parameters, our method improves results across different scaling factors in comparison with state-of-the-art methods.
arXiv Detail & Related papers (2020-09-07T12:54:14Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
- Unpaired Image Super-Resolution using Pseudo-Supervision [12.18340575383456]
We propose an unpaired image super-resolution (SR) method using a generative adversarial network.
Our network consists of an unpaired kernel/noise correction network and a pseudo-paired SR network.
Experiments on diverse datasets show that the proposed method is superior to existing solutions to the unpaired SR problem.
arXiv Detail & Related papers (2020-02-26T10:30:52Z)
- Self-supervised Fine-tuning for Correcting Super-Resolution Convolutional Neural Networks [17.922507191213494]
We show that one can avoid re-training and instead correct SR results with a fully self-supervised fine-tuning approach.
We apply our fine-tuning algorithm on multiple image and video SR CNNs and show that it can successfully correct for a sub-optimal SR solution.
arXiv Detail & Related papers (2019-12-30T11:02:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.