Self-supervised Fine-tuning for Correcting Super-Resolution
Convolutional Neural Networks
- URL: http://arxiv.org/abs/1912.12879v3
- Date: Mon, 15 Jun 2020 12:11:14 GMT
- Title: Self-supervised Fine-tuning for Correcting Super-Resolution
Convolutional Neural Networks
- Authors: Alice Lucas, Santiago Lopez-Tapia, Rafael Molina and Aggelos K.
Katsaggelos
- Abstract summary: We show that one can avoid training and correct for SR results with a fully self-supervised fine-tuning approach.
We apply our fine-tuning algorithm on multiple image and video SR CNNs and show that it can successfully correct for a sub-optimal SR solution.
- Score: 17.922507191213494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Convolutional Neural Networks (CNNs) trained for image and video
super-resolution (SR) regularly achieve new state-of-the-art performance, they
also suffer from significant drawbacks. One of their limitations is their lack
of robustness to unseen image formation models during training. Other
limitations include the generation of artifacts and hallucinated content when
training Generative Adversarial Networks (GANs) for SR. While the Deep Learning
literature focuses on presenting new training schemes and settings to resolve
these various issues, we show that one can avoid training and correct for SR
results with a fully self-supervised fine-tuning approach. More specifically,
at test time, given an image and its known image formation model, we fine-tune
the parameters of the trained network and iteratively update them using a data
fidelity loss. We apply our fine-tuning algorithm on multiple image and video
SR CNNs and show that it can successfully correct for a sub-optimal SR solution
by entirely relying on internal learning at test time. We apply our method on
the problem of fine-tuning for unseen image formation models and on removal of
artifacts introduced by GANs.
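The correction described above can be sketched in toy form: given an observation y and a known image formation model A, iteratively update the trained network parameters theta to minimize the data fidelity loss ||A(f_theta(y)) - y||^2 on the test input alone. The NumPy snippet below is a minimal illustration under assumed toy choices (a 1-D signal, 2x average-pooling as A, and a two-parameter "network" with finite-difference gradients); it is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(x):
    """Known image formation model A: 2x average-pooling (downsampling)."""
    return x.reshape(-1, 2).mean(axis=1)

def upsample(y, theta):
    """Toy 'SR network' f_theta: each LR sample becomes two HR samples."""
    return np.stack([theta[0] * y, theta[1] * y], axis=1).reshape(-1)

def fidelity_loss(theta, y):
    """Data fidelity: re-degrade the SR output and compare to the input y."""
    return np.mean((degrade(upsample(y, theta)) - y) ** 2)

def fine_tune(theta, y, lr=0.5, steps=200, eps=1e-6):
    """Test-time fine-tuning: finite-difference gradient descent on the
    data fidelity loss, using only the test signal (internal learning)."""
    theta = theta.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(theta)
        base = fidelity_loss(theta, y)
        for i in range(theta.size):
            t = theta.copy()
            t[i] += eps
            grad[i] = (fidelity_loss(t, y) - base) / eps
        theta -= lr * grad
    return theta

y = rng.random(16)             # observed low-resolution test signal
theta = np.array([0.3, 1.5])   # "pre-trained" but sub-optimal parameters
theta = fine_tune(theta, y)    # loss is driven near zero
```

Because the update uses only the test signal and the known degradation, no external training data is needed at correction time, which is the sense in which the method relies entirely on internal learning.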
Related papers
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Self-Denoising Neural Networks for Few Shot Learning [66.38505903102373]
We present a new training scheme that adds noise at multiple stages of an existing neural architecture while simultaneously learning to be robust to this added noise.
This architecture, which we call a Self-Denoising Neural Network (SDNN), can be applied easily to most modern convolutional neural architectures.
arXiv Detail & Related papers (2021-10-26T03:28:36Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Deep Unrolled Network for Video Super-Resolution [0.45880283710344055]
Video super-resolution (VSR) aims to reconstruct a sequence of high-resolution (HR) images from their corresponding low-resolution (LR) versions.
Traditionally, solving a VSR problem has been based on iterative algorithms that exploit prior knowledge on image formation and assumptions on the motion.
Deep learning (DL) algorithms can efficiently learn spatial patterns from large collections of images.
We propose a new VSR neural network based on unrolled optimization techniques and discuss its performance.
arXiv Detail & Related papers (2021-02-23T14:35:09Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and a CNN parameterization of the latent image.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- Accelerated MRI with Un-trained Neural Networks [29.346778609548995]
We address the reconstruction problem arising in accelerated MRI with un-trained neural networks.
We propose a highly optimized un-trained recovery approach based on a variation of the Deep Decoder.
We find that our un-trained algorithm achieves similar performance to a baseline trained neural network, but a state-of-the-art trained network outperforms the un-trained one.
arXiv Detail & Related papers (2020-07-06T00:01:25Z)
- Auto-Rectify Network for Unsupervised Indoor Depth Estimation [119.82412041164372]
We establish that the complex ego-motions exhibited in handheld settings are a critical obstacle for learning depth.
We propose a data pre-processing method that rectifies training images by removing their relative rotations for effective learning.
Our results outperform the previous unsupervised SOTA method by a large margin on the challenging NYUv2 dataset.
arXiv Detail & Related papers (2020-06-04T08:59:17Z)
- Adaptive Loss Function for Super Resolution Neural Networks Using Convex Optimization Techniques [24.582559317893274]
The Single Image Super-Resolution (SISR) task refers to learning a mapping from low-resolution images to their corresponding high-resolution counterparts.
CNNs are encouraged to learn high-frequency components of the images as well as low-frequency components.
We show that the proposed method recovers fine image details and remains stable during training.
arXiv Detail & Related papers (2020-01-21T20:31:10Z)
- Fast Adaptation to Super-Resolution Networks via Meta-Learning [24.637337634643885]
In this work, we observe the opportunity for further improvement of the performance of SISR without changing the architecture of conventional SR networks.
In the training stage, we train the network via meta-learning; thus, the network can quickly adapt to any input image at test time.
We demonstrate that the proposed model-agnostic approach consistently improves the performance of conventional SR networks on various benchmark SR datasets.
arXiv Detail & Related papers (2020-01-09T09:59:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.