Perception-Oriented Single Image Super-Resolution using Optimal
Objective Estimation
- URL: http://arxiv.org/abs/2211.13676v2
- Date: Mon, 28 Nov 2022 11:43:15 GMT
- Title: Perception-Oriented Single Image Super-Resolution using Optimal
Objective Estimation
- Authors: Seung Ho Park, Young Su Moon, Nam Ik Cho
- Abstract summary: We propose a new SISR framework that applies optimal objectives to each region to generate plausible results across all areas of the high-resolution output.
The framework comprises two models: a predictive model that infers an optimal objective map for a given low-resolution (LR) input and a generative model that applies a target objective map to produce the corresponding SR output.
- Score: 11.830754741007029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single-image super-resolution (SISR) networks trained with perceptual and
adversarial losses provide high-contrast outputs compared to those of networks
trained with distortion-oriented losses, such as L1 or L2. However, it has been
shown that using a single perceptual loss is insufficient for accurately
restoring locally varying diverse shapes in images, often generating
undesirable artifacts or unnatural details. For this reason, combinations of
various losses, such as perceptual, adversarial, and distortion losses, have
been attempted, yet it remains challenging to find optimal combinations. Hence,
in this paper, we propose a new SISR framework that applies optimal objectives
to each region to generate plausible results across all areas of the
high-resolution output. Specifically, the framework comprises two models: a
predictive model that infers an optimal objective map for a given
low-resolution (LR) input and a generative model that applies a target
objective map to produce the corresponding SR output. The generative model is
trained over our proposed objective trajectory representing a set of essential
objectives, which enables the single network to learn various SR results
corresponding to combined losses on the trajectory. The predictive model is
trained using pairs of LR images and corresponding optimal objective maps
searched from the objective trajectory. Experimental results on five benchmarks
show that the proposed method outperforms state-of-the-art perception-driven SR
methods in LPIPS, DISTS, PSNR, and SSIM metrics. The visual results also
demonstrate the superiority of our method in perception-oriented
reconstruction. The code and models are available at
https://github.com/seungho-snu/SROOE.
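As a rough illustration of the two-model pipeline described in the abstract, the sketch below wires a predictor that infers an objective map from the LR input into a generator conditioned on that map. The module definitions, channel counts, and 4x scale factor are placeholder assumptions, not the authors' architecture; see the linked repository for the actual implementation.

```python
# Illustrative sketch only: modules, channel counts, and the 4x scale factor are
# placeholder assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class ObjectivePredictor(nn.Module):
    """Infers a per-pixel objective map from the LR input (placeholder CNN)."""
    def __init__(self, in_ch=3, map_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, map_ch, 3, padding=1), nn.Sigmoid(),  # map values in [0, 1]
        )

    def forward(self, lr):
        return self.net(lr)

class ConditionedGenerator(nn.Module):
    """Produces the SR output conditioned on a target objective map (placeholder CNN)."""
    def __init__(self, in_ch=3, map_ch=1, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch + map_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # upsample to the HR resolution
        )

    def forward(self, lr, obj_map):
        return self.body(torch.cat([lr, obj_map], dim=1))

# Inference: the predictive model infers an objective map, the generative model applies it.
lr = torch.randn(1, 3, 48, 48)
obj_map = ObjectivePredictor()(lr)
sr = ConditionedGenerator()(lr, obj_map)
print(sr.shape)  # torch.Size([1, 3, 192, 192])
```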
Related papers
- Perception-Distortion Balanced Super-Resolution: A Multi-Objective Optimization Perspective [16.762410459930006]
High perceptual quality and low distortion are both important goals in image restoration tasks such as super-resolution (SR).
Current gradient-based methods struggle to balance these objectives because the contradictory losses have opposing gradient directions.
In this paper, we formulate the perception-distortion trade-off in SR as a multi-objective optimization problem and develop a new optimization method by integrating the gradient-free evolutionary algorithm (EA) with the gradient-based Adam.
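A minimal sketch of that idea follows: a gradient-free evolutionary step searches over the weights of the contradictory losses while Adam updates the SR network on the resulting weighted sum. The population handling, mutation noise, and loss choices are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative sketch only: population handling, mutation noise, and loss choices
# are assumptions, not the paper's algorithm.
import torch

def evolve_weights(population, fitness, noise=0.05):
    """Gradient-free step: keep the fittest loss-weight vector, mutate it to refill."""
    best = population[max(range(len(population)), key=lambda i: fitness[i])]
    return [best] + [torch.clamp(best + noise * torch.randn_like(best), 0.0, 1.0)
                     for _ in range(len(population) - 1)]

def adam_step(model, optimizer, lr_img, hr_img, weights, losses):
    """Gradient-based step: one Adam update on the weighted sum of the contradictory losses."""
    sr = model(lr_img)
    total = sum(w * loss_fn(sr, hr_img) for w, loss_fn in zip(weights, losses))
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```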
arXiv Detail & Related papers (2023-12-24T04:59:30Z)
- Efficient Test-Time Adaptation for Super-Resolution with Second-Order Degradation and Reconstruction [62.955327005837475]
Image super-resolution (SR) aims to learn a mapping from low-resolution (LR) to high-resolution (HR) using paired HR-LR training images.
We present an efficient test-time adaptation framework for SR, named SRTTA, which is able to quickly adapt SR models to test domains with different/unknown degradation types.
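The sketch below illustrates the general second-order degradation idea in this summary: degrade the test image once more and adapt the model so that it reconstructs the original test input from the re-degraded copy. The degrade() hook, loss, and step count are assumptions, not SRTTA's actual procedure.

```python
# Illustrative sketch only: the degrade() hook, loss, and step count are
# assumptions, not SRTTA's actual procedure.
import torch.nn.functional as F

def adapt_at_test_time(sr_model, test_lr, degrade, optimizer, steps=5):
    """degrade() is assumed to apply the estimated degradation and downscale by the
    SR factor, so the model's output matches test_lr in size."""
    sr_model.train()
    for _ in range(steps):
        pseudo_lr = degrade(test_lr)     # second-order degraded copy of the test image
        recon = sr_model(pseudo_lr)      # the model should reconstruct the test image
        loss = F.l1_loss(recon, test_lr)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return sr_model
```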
arXiv Detail & Related papers (2023-10-29T13:58:57Z)
- Class Anchor Margin Loss for Content-Based Image Retrieval [97.81742911657497]
We propose a novel repeller-attractor loss that falls within the metric learning paradigm, yet directly optimizes the L2 metric without the need to generate pairs.
We evaluate the proposed objective in the context of few-shot and full-set training on the CBIR task, by using both convolutional and transformer architectures.
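One plausible reading of the repeller-attractor idea is sketched below: learnable per-class anchors attract same-class embeddings in the L2 metric and repel other embeddings beyond a margin, so no pair mining is needed. The exact margin formulation here is an assumption, not the paper's definition.

```python
# Illustrative sketch only: the exact margin formulation in the paper may differ.
import torch
import torch.nn as nn

class ClassAnchorMarginLoss(nn.Module):
    def __init__(self, num_classes, dim, margin=1.0):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(num_classes, dim))  # learnable class anchors
        self.margin = margin

    def forward(self, embeddings, labels):
        # L2 distances from each embedding to every class anchor: (batch, num_classes)
        d = torch.cdist(embeddings, self.anchors)
        attract = d.gather(1, labels.unsqueeze(1)).squeeze(1)            # pull toward own anchor
        mask = torch.ones_like(d).scatter_(1, labels.unsqueeze(1), 0.0)
        repel = torch.clamp(self.margin - d, min=0.0) * mask             # push away from other anchors
        return attract.mean() + repel.sum(dim=1).mean()
```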
arXiv Detail & Related papers (2023-06-01T12:53:10Z)
- MrSARP: A Hierarchical Deep Generative Prior for SAR Image Super-resolution [0.5161531917413706]
We present MrSARP, a novel hierarchical deep generative model for SAR imagery.
MrSARP is trained in conjunction with a critic that scores multi-resolution images jointly to decide whether they are realistic images of a target at different resolutions.
We show how this deep generative model can be used to retrieve the high-spatial-resolution image from low-resolution images of the same target.
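Retrieval with a generative prior of this kind is often done by inverting the generator; the sketch below optimizes a latent code so that downscaled generator outputs match the observed low-resolution images. The latent_dim attribute, downscale() hook, and optimizer settings are illustrative assumptions, not MrSARP's procedure.

```python
# Illustrative sketch only: latent_dim, downscale(), and the optimizer settings
# are assumptions, not MrSARP's retrieval procedure.
import torch
import torch.nn.functional as F

def retrieve_hr(generator, lr_observations, downscale, steps=200, step_size=0.05):
    """lr_observations: list of (scale, image) pairs observed at lower resolutions."""
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=step_size)
    for _ in range(steps):
        hr_est = generator(z)
        # the downscaled estimate should match every low-resolution observation
        loss = sum(F.l1_loss(downscale(hr_est, s), obs) for s, obs in lr_observations)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```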
arXiv Detail & Related papers (2022-11-30T19:12:21Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling with a single trained model.
It achieves state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising the perceptual quality of the LR outputs.
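A hedged sketch of the joint downscaling-upscaling cycle mentioned above: the same model produces the LR image and restores the HR image, and the training loss ties the two directions together. The downscale()/upscale() hooks, the bicubic guidance term, and the 0.1 weight are assumptions, not IARN's objective.

```python
# Illustrative sketch only: the downscale()/upscale() hooks, the bicubic guidance
# term, and the 0.1 weight are assumptions, not IARN's objective.
import torch.nn.functional as F

def rescaling_cycle_loss(model, hr, scale):
    lr = model.downscale(hr, scale)     # learned arbitrary-scale downscaling
    hr_rec = model.upscale(lr, scale)   # inverse pass restores the HR image
    guide = F.interpolate(hr, size=lr.shape[-2:], mode="bicubic", align_corners=False)
    # reconstruct HR exactly while keeping the LR output visually plausible
    return F.l1_loss(hr_rec, hr) + 0.1 * F.l1_loss(lr, guide)
```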
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Unpaired Image Super-Resolution with Optimal Transport Maps [128.1189695209663]
Real-world image super-resolution (SR) tasks often lack paired datasets, which limits the application of supervised techniques.
We propose an algorithm for unpaired SR which learns an unbiased OT map for the perceptual transport cost.
Our algorithm provides nearly state-of-the-art performance on the large-scale unpaired AIM-19 dataset.
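A minimal sketch of a neural optimal-transport objective in the spirit of this summary: a transport map T (the SR network) is trained against a potential network f under a perceptual transport cost c. The cost function, how resolutions are matched inside it, and the alternating update schedule are assumptions, not the paper's formulation.

```python
# Illustrative sketch only: the cost function, resolution handling, and update
# schedule are assumptions, not the paper's formulation.
def ot_losses(T, f, lr_batch, hr_batch, cost):
    """T: transport map (the SR network); f: potential network; cost: perceptual
    transport cost, assumed to handle the LR/SR resolution gap internally."""
    sr = T(lr_batch)
    loss_map = (cost(lr_batch, sr) - f(sr)).mean()            # update T to minimize this
    loss_potential = (f(sr.detach()) - f(hr_batch)).mean()    # update f to minimize this
    return loss_map, loss_potential
```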
arXiv Detail & Related papers (2022-02-02T16:21:20Z)
- Gradient Variance Loss for Structure-Enhanced Image Super-Resolution [16.971608518924597]
We introduce a structure-enhancing loss function, coined Gradient Variance (GV) loss, to generate textures with perceptually pleasant details.
Experimental results show that the GV loss can significantly improve both Structure Similarity (SSIM) and peak signal-to-noise ratio (PSNR) performance of existing image super-resolution (SR) deep learning models.
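The sketch below gives one way to realize a gradient-variance loss as summarized here: compare the variance of Sobel gradient magnitudes inside local patches of the SR output and the HR target. The grayscale conversion, patch size, and use of gradient magnitude (rather than separate x/y maps) are assumptions of this sketch.

```python
# Illustrative sketch only: grayscale conversion, Sobel kernels, patch size, and
# the use of gradient magnitude are assumptions of this sketch.
import torch
import torch.nn.functional as F

def gradient_variance_loss(sr, hr, patch=8):
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                           device=sr.device).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)

    def local_grad_var(img):
        gray = img.mean(dim=1, keepdim=True)                     # to grayscale
        gx = F.conv2d(gray, sobel_x, padding=1)
        gy = F.conv2d(gray, sobel_y, padding=1)
        g = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)                 # gradient magnitude
        patches = F.unfold(g, kernel_size=patch, stride=patch)   # (B, patch*patch, L)
        return patches.var(dim=1)                                # variance per patch

    return F.mse_loss(local_grad_var(sr), local_grad_var(hr))
```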
arXiv Detail & Related papers (2022-02-02T12:31:05Z)
- Flexible Style Image Super-Resolution using Conditional Objective [11.830754741007029]
We present a more efficient method to train a single adjustable SR model on various combinations of losses by taking advantage of multi-task learning.
Specifically, we optimize an SR model with a conditional objective during training, where the objective is a weighted sum of multiple perceptual losses at different feature levels.
In the inference phase, our trained model can generate locally different outputs conditioned on the style control map.
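A rough sketch of a conditional objective of this kind: the loss is a weighted sum of perceptual losses taken at several VGG feature levels, and the weight vector is the condition the model is trained and controlled with. The chosen VGG layers and the L1 feature distance are assumptions, not the paper's exact configuration.

```python
# Illustrative sketch only: the chosen VGG layers and the L1 feature distance are
# assumptions, not the paper's exact configuration.
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

class ConditionalPerceptualLoss(nn.Module):
    def __init__(self, layer_ids=(2, 7, 16, 25)):
        super().__init__()
        self.feats = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in self.feats.parameters():
            p.requires_grad_(False)
        self.layer_ids = set(layer_ids)

    def _extract(self, x):
        outs = []
        for i, layer in enumerate(self.feats):
            x = layer(x)
            if i in self.layer_ids:
                outs.append(x)
        return outs

    def forward(self, sr, hr, weights):
        """weights: one scalar per feature level, i.e. the sampled condition."""
        losses = [F.l1_loss(a, b) for a, b in zip(self._extract(sr), self._extract(hr))]
        return sum(w * l for w, l in zip(weights, losses))
```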
arXiv Detail & Related papers (2022-01-13T11:39:29Z)
- SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses [82.56853587380168]
We propose a novel self-supervised image rectification (SIR) method based on the insight that the rectified results of distorted images of the same scene captured through different lenses should be identical.
We leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters.
Our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods.
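The core consistency constraint described in this summary can be sketched as follows: the model predicts distortion parameters for two differently distorted views of the same scene, a differentiable warp rectifies both, and the rectified results are pushed to agree. The function names and the L1 agreement term are assumptions for illustration.

```python
# Illustrative sketch only: the function names and the L1 agreement term are
# assumptions for illustration.
import torch.nn.functional as F

def rectification_consistency(model, warp, view_a, view_b):
    """view_a / view_b: the same scene captured through different lens distortions."""
    params_a, params_b = model(view_a), model(view_b)   # predicted distortion parameters
    rect_a = warp(view_a, params_a)                     # differentiable rectification
    rect_b = warp(view_b, params_b)
    return F.l1_loss(rect_a, rect_b)                    # rectified results should agree
```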
arXiv Detail & Related papers (2020-11-30T08:23:25Z)
- Characteristic Regularisation for Super-Resolving Face Images [81.84939112201377]
Existing facial image super-resolution (SR) methods focus mostly on improving artificially down-sampled low-resolution (LR) imagery.
Previous unsupervised domain adaptation (UDA) methods address this issue by training a model using unpaired genuine LR and HR data.
This overstretches the model with two tasks: making the visual characteristics consistent and enhancing the image resolution.
We formulate a method that joins the advantages of conventional SR and UDA models.
arXiv Detail & Related papers (2019-12-30T16:27:24Z)