Perception-Distortion Balanced Super-Resolution: A Multi-Objective
Optimization Perspective
- URL: http://arxiv.org/abs/2312.15408v1
- Date: Sun, 24 Dec 2023 04:59:30 GMT
- Title: Perception-Distortion Balanced Super-Resolution: A Multi-Objective
Optimization Perspective
- Authors: Lingchen Sun, Jie Liang, Shuaizheng Liu, Hongwei Yong, Lei Zhang
- Abstract summary: We formulate the perception-distortion trade-off in SR as a multi-objective optimization problem.
We develop a new optimizer by integrating the gradient-free evolutionary algorithm (EA) with gradient-based Adam.
As a result, a population of optimal models with different perception-distortion preferences is obtained.
Experiments demonstrate that with the same backbone network, the perception-distortion balanced SR model trained by our method can achieve better perceptual quality than its competitors.
- Score: 17.98348424312597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High perceptual quality and low distortion degree are two important goals in
image restoration tasks such as super-resolution (SR). Most of the existing SR
methods aim to achieve these goals by minimizing the corresponding yet
conflicting losses, such as the $\ell_1$ loss and the adversarial loss.
Unfortunately, commonly used gradient-based optimizers, such as Adam, struggle
to balance these objectives due to the opposing gradient descent directions
of the contradictory losses. In this paper, we formulate the
perception-distortion trade-off in SR as a multi-objective optimization problem
and develop a new optimizer by integrating the gradient-free evolutionary
algorithm (EA) with gradient-based Adam, where EA and Adam focus on the
divergence and convergence of the optimization directions respectively. As a
result, a population of optimal models with different perception-distortion
preferences is obtained. We then design a fusion network to merge these models
into a single stronger one for an effective perception-distortion trade-off.
Experiments demonstrate that with the same backbone network, the
perception-distortion balanced SR model trained by our method can achieve
better perceptual quality than its competitors while attaining better
reconstruction fidelity. Codes and models can be found at
https://github.com/csslc/EA-Adam.
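The hybrid optimizer described above can be illustrated on a toy problem. The sketch below is a minimal, hypothetical reconstruction, not the authors' implementation: two conflicting quadratic objectives stand in for the distortion and perceptual losses, each population member carries its own preference weight, Adam drives convergence toward that member's optimum, and a greedy mutation step plays the role of the evolutionary phase.

```python
import numpy as np

# Toy stand-ins for the conflicting distortion / perceptual losses.
def f_distortion(x):
    return np.sum((x - 1.0) ** 2)

def f_perception(x):
    return np.sum((x + 1.0) ** 2)

def grad_weighted(x, w):
    # Gradient of w * f_distortion + (1 - w) * f_perception.
    return 2.0 * w * (x - 1.0) + 2.0 * (1.0 - w) * (x + 1.0)

class Adam:
    """Minimal Adam update (Kingma & Ba defaults)."""
    def __init__(self, dim, lr=0.05):
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.t = 0
        self.lr = lr

    def step(self, x, g, b1=0.9, b2=0.999, eps=1e-8):
        self.t += 1
        self.m = b1 * self.m + (1 - b1) * g
        self.v = b2 * self.v + (1 - b2) * g ** 2
        m_hat = self.m / (1 - b1 ** self.t)
        v_hat = self.v / (1 - b2 ** self.t)
        return x - self.lr * m_hat / (np.sqrt(v_hat) + eps)

rng = np.random.default_rng(0)
pop_size, dim = 8, 4
weights = np.linspace(0.1, 0.9, pop_size)  # perception-distortion preferences
pop = [rng.normal(size=dim) for _ in range(pop_size)]
opts = [Adam(dim) for _ in range(pop_size)]

for gen in range(50):
    # Adam phase: each member converges toward its own preference optimum.
    for i in range(pop_size):
        for _ in range(5):
            pop[i] = opts[i].step(pop[i], grad_weighted(pop[i], weights[i]))
    # EA phase: mutate and keep the better candidate per preference.
    for i in range(pop_size):
        cand = pop[i] + rng.normal(scale=0.05, size=dim)
        def scalar(x, w=weights[i]):
            return w * f_distortion(x) + (1.0 - w) * f_perception(x)
        if scalar(cand) < scalar(pop[i]):
            pop[i] = cand
```

After training, the population approximates the Pareto front of the two objectives: the member with preference weight `w` converges near the minimizer of the weighted sum (here `x = 2w - 1` in every coordinate). The paper's fusion network, which merges such a population into a single model, is not sketched here.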
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
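The combined objective described above can be sketched as a DPO-style preference loss plus a supervised (SFT) maximum-likelihood term on the chosen response. All names and the mixing coefficient `lam` below are illustrative assumptions; the paper's exact objective may differ.

```python
import numpy as np

def log_sigmoid(z):
    # Numerically stable log sigma(z) = -log(1 + exp(-z)).
    return -np.logaddexp(0.0, -z)

def combined_loss(lp_chosen, lp_rejected, ref_chosen, ref_rejected,
                  beta=0.1, lam=1.0):
    """Preference optimization loss plus an SFT term on the chosen response,
    the latter acting as the implicit regularizer against overoptimization.
    Inputs are per-example log-probabilities under the policy (lp_*) and a
    frozen reference model (ref_*); names are hypothetical."""
    margin = beta * ((lp_chosen - ref_chosen) - (lp_rejected - ref_rejected))
    pref = -log_sigmoid(margin)   # preference optimization loss
    sft = -lp_chosen              # supervised learning (SFT) loss
    return pref + lam * sft
```

Raising the policy's log-probability of the chosen response lowers both terms, so the SFT term penalizes policies that drift far from the preferred data while chasing reward.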
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - Gradient constrained sharpness-aware prompt learning for vision-language
models [99.74832984957025]
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs).
By analyzing the loss landscapes of the state-of-the-art method and vanilla Sharpness-aware Minimization (SAM) based method, we conclude that the trade-off performance correlates to both loss value and loss sharpness.
We propose a novel SAM-based method for prompt learning, denoted as Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp).
arXiv Detail & Related papers (2023-09-14T17:13:54Z) - Perception-Oriented Single Image Super-Resolution using Optimal
Objective Estimation [11.830754741007029]
We propose a new SISR framework that applies optimal objectives to each region to generate plausible results across all areas of the high-resolution outputs.
The framework comprises two models: a predictive model that infers an optimal objective map for a given low-resolution (LR) input and a generative model that applies a target objective map to produce the corresponding SR output.
arXiv Detail & Related papers (2022-11-24T15:45:03Z) - Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z) - RL-PGO: Reinforcement Learning-based Planar Pose-Graph Optimization [1.4884785898657995]
This paper presents a state-of-the-art Deep Reinforcement Learning (DRL) based environment and a proposed agent for 2D pose-graph optimization.
We demonstrate that the pose-graph optimization problem can be modeled as a partially observable Markov Decision Process and evaluate performance on real-world and synthetic datasets.
arXiv Detail & Related papers (2022-02-26T20:10:14Z) - Gradient Variance Loss for Structure-Enhanced Image Super-Resolution [16.971608518924597]
We introduce a structure-enhancing loss function, coined Gradient Variance (GV) loss, to generate textures with perceptually pleasant details.
Experimental results show that the GV loss can significantly improve both the Structural Similarity (SSIM) and peak signal-to-noise ratio (PSNR) performance of existing image super-resolution (SR) deep learning models.
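A Gradient Variance loss of this kind can be sketched as follows: compute gradient maps of the SR and HR images, measure the per-patch variance of those gradients, and penalize the difference. This is a minimal NumPy reconstruction under assumptions (forward differences instead of a Sobel filter, a fixed patch size), not the paper's implementation.

```python
import numpy as np

def grad_maps(img):
    # Simple forward differences as gradient maps (a Sobel filter is
    # typically used in practice); `append` keeps the output shape.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def patch_variance(g, p=4):
    # Variance of gradient values within each non-overlapping p x p patch.
    h, w = g.shape
    g = g[: h - h % p, : w - w % p]
    patches = g.reshape(h // p, p, w // p, p).transpose(0, 2, 1, 3)
    return patches.reshape(h // p, w // p, -1).var(axis=-1)

def gv_loss(sr, hr, p=4):
    # Mean squared difference between the patchwise gradient-variance
    # maps of the super-resolved and ground-truth images.
    loss = 0.0
    for gs, gh in zip(grad_maps(sr), grad_maps(hr)):
        loss += np.mean((patch_variance(gs, p) - patch_variance(gh, p)) ** 2)
    return loss
```

An over-smoothed output has low gradient variance in textured regions, so this term pushes the generator toward the sharper gradient statistics of the ground truth.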
arXiv Detail & Related papers (2022-02-02T12:31:05Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
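The fixed-point computation at the core of a deep equilibrium model can be sketched as below. This is an illustrative toy (random weights scaled to make the layer a contraction, plain fixed-point iteration rather than a Newton or Anderson solver), and the joint input-optimization strategy of the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Small weight norm makes the layer a contraction, so iteration converges.
W = rng.normal(scale=0.3 / np.sqrt(d), size=(d, d))
U = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))
b = np.zeros(d)

def layer(z, x):
    # The single nonlinear layer f(z, x) whose fixed point defines the output.
    return np.tanh(W @ z + U @ x + b)

def deq_forward(x, tol=1e-8, max_iter=500):
    # The network's output is the fixed point z* = f(z*, x).
    z = np.zeros(d)
    for _ in range(max_iter):
        z_next = layer(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

x = rng.normal(size=d)
z_star = deq_forward(x)
```

Because both the hidden state `z` and the input `x` enter the same fixed-point condition, optimizing over inputs (e.g., latent codes in inverse problems) can reuse the same equilibrium solver, which is the synergy the paper exploits.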
arXiv Detail & Related papers (2021-11-25T19:59:33Z) - Understanding the Generalization of Adam in Learning Neural Networks
with Proper Regularization [118.50301177912381]
We show that Adam can converge to different solutions of the objective with provably different errors, even with weight decay regularization.
We show that if the objective is convex and weight decay regularization is employed, any optimization algorithm, including Adam, will converge to the same solution.
arXiv Detail & Related papers (2021-08-25T17:58:21Z) - Characteristic Regularisation for Super-Resolving Face Images [81.84939112201377]
Existing facial image super-resolution (SR) methods focus mostly on improving artificially down-sampled low-resolution (LR) imagery.
Previous unsupervised domain adaptation (UDA) methods address this issue by training a model using unpaired genuine LR and HR data.
This renders the model overstretched with two tasks: making the visual characteristics consistent and enhancing the image resolution.
We formulate a method that joins the advantages of conventional SR and UDA models.
arXiv Detail & Related papers (2019-12-30T16:27:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.