Perception-Distortion Balanced Super-Resolution: A Multi-Objective Optimization Perspective
- URL: http://arxiv.org/abs/2312.15408v2
- Date: Tue, 24 Sep 2024 08:23:43 GMT
- Title: Perception-Distortion Balanced Super-Resolution: A Multi-Objective Optimization Perspective
- Authors: Lingchen Sun, Jie Liang, Shuaizheng Liu, Hongwei Yong, Lei Zhang
- Abstract summary: High perceptual quality and low distortion degree are two important goals in image restoration tasks such as super-resolution (SR).
Current gradient-based methods struggle to balance these objectives due to the opposite gradient directions of the contradictory losses.
In this paper, we formulate the perception-distortion trade-off in SR as a multi-objective optimization problem and develop a new optimizer by integrating the gradient-free evolutionary algorithm (EA) with gradient-based Adam.
- Score: 16.762410459930006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High perceptual quality and low distortion degree are two important goals in image restoration tasks such as super-resolution (SR). Most of the existing SR methods aim to achieve these goals by minimizing the corresponding yet conflicting losses, such as the $\ell_1$ loss and the adversarial loss. Unfortunately, the commonly used gradient-based optimizers, such as Adam, struggle to balance these objectives due to the opposite gradient descent directions of the contradictory losses. In this paper, we formulate the perception-distortion trade-off in SR as a multi-objective optimization problem and develop a new optimizer by integrating the gradient-free evolutionary algorithm (EA) with gradient-based Adam, where EA and Adam focus on the divergence and convergence of the optimization directions, respectively. As a result, a population of optimal models with different perception-distortion preferences is obtained. We then design a fusion network to merge these models into a single stronger one for an effective perception-distortion trade-off. Experiments demonstrate that, with the same backbone network, the perception-distortion balanced SR model trained by our method can achieve better perceptual quality than its competitors while attaining better reconstruction fidelity. Codes and models can be found at https://github.com/csslc/EA-Adam.
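The alternation described above can be sketched in a few lines. The snippet below is an illustrative reading, not the authors' EA-Adam optimizer (see the linked repository for that): Adam handles the convergence phase on each population member's own scalarized loss, while a gradient-free mutate-and-select step stands in for the EA divergence phase. `loss_fn`, `fitness_fn`, and `batch_iter` are hypothetical stand-ins, and the scalar-fitness selection simplifies the paper's multi-objective treatment.

```python
# Illustrative EA + Adam alternation (not the authors' implementation).
import copy
import torch

def adam_phase(model, batch_iter, loss_fn, steps=10, lr=1e-4):
    """Convergence phase: plain Adam on one member's scalarized loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        lr_img, hr_img = next(batch_iter)
        opt.zero_grad()
        loss_fn(model(lr_img), hr_img).backward()
        opt.step()

def ea_phase(population, fitness_fn, sigma=1e-3):
    """Divergence phase: Gaussian weight mutation, then survivor selection."""
    offspring = []
    for parent in population:
        child = copy.deepcopy(parent)
        with torch.no_grad():
            for p in child.parameters():
                p.add_(sigma * torch.randn_like(p))  # gradient-free perturbation
        offspring.append(child)
    survivors = sorted(population + offspring, key=fitness_fn)  # lower is better
    return survivors[: len(population)]  # keep the population size fixed
```

The paper additionally merges the final population into a single network with a fusion model; that step is not shown here.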
Related papers
- Self-supervised Preference Optimization: Enhance Your Language Model with Preference Degree Awareness [27.43137305486112]
We propose a novel Self-supervised Preference Optimization (SPO) framework, which constructs a self-supervised preference degree loss combined with the alignment loss.
The results demonstrate that SPO can be seamlessly integrated with existing preference optimization methods to achieve state-of-the-art performance.
arXiv Detail & Related papers (2024-09-26T12:37:26Z) - Perceptual-Distortion Balanced Image Super-Resolution is a Multi-Objective Optimization Problem [23.833099288826045]
Training Single-Image Super-Resolution (SISR) models using pixel-based regression losses can achieve high scores on distortion metrics.
However, the resulting images are often blurry due to insufficient recovery of high-frequency details.
We propose a novel method that incorporates Multi-Objective Optimization (MOO) into the training process of SISR models to balance perceptual quality and distortion.
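For intuition, the balance between the two objectives can be made concrete with a fixed-weight scalarization; the sketch below is a generic construction, not the paper's MOO procedure, and the VGG-feature perceptual term is our assumption. Inputs are assumed to be (B, 3, H, W) tensors.

```python
# Generic scalarization of distortion (l1) and perceptual (VGG feature) terms.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

_vgg = vgg16(weights="DEFAULT").features[:16].eval()  # frozen feature extractor
for p in _vgg.parameters():
    p.requires_grad_(False)

def balanced_loss(sr, hr, w=0.5):
    distortion = F.l1_loss(sr, hr)              # fidelity to the target
    perceptual = F.l1_loss(_vgg(sr), _vgg(hr))  # feature-space similarity
    return w * distortion + (1.0 - w) * perceptual
```

Sweeping `w` between 0 and 1 traces out different operating points on the perception-distortion curve, which is the trade-off an MOO formulation navigates without committing to a single fixed weight.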
arXiv Detail & Related papers (2024-09-05T02:14:04Z) - A Simple and Generalist Approach for Panoptic Segmentation [57.94892855772925]
Generalist vision models aim for one and the same architecture for a variety of vision tasks.
While such a shared architecture may seem attractive, generalist models tend to be outperformed by their bespoke counterparts.
We address this problem by introducing two key contributions, without compromising the desirable properties of generalist models.
arXiv Detail & Related papers (2024-08-29T13:02:12Z) - Gradient constrained sharpness-aware prompt learning for vision-language
models [99.74832984957025]
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs).
By analyzing the loss landscapes of the state-of-the-art method and vanilla Sharpness-aware Minimization (SAM) based method, we conclude that the trade-off performance correlates to both loss value and loss sharpness.
We propose a novel SAM-based method for prompt learning, denoted as Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp).
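As background, a vanilla sharpness-aware minimization (SAM) update, the ingredient the method builds on, looks roughly as follows; the paper's gradient constraint is not reproduced, and `loss_fn` is a hypothetical closure evaluating the prompt-learning loss on a fixed batch.

```python
# Vanilla SAM step (background only; GCSCoOp adds a gradient constraint).
import torch

def sam_step(model, loss_fn, base_opt, rho=0.05):
    loss_fn(model).backward()  # first pass: gradient at current weights
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    perturbs = []
    with torch.no_grad():  # ascend to the approximate worst case in an L2 ball
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbs.append((p, e))
    base_opt.zero_grad()
    loss_fn(model).backward()  # second pass: gradient at perturbed weights
    with torch.no_grad():      # undo the perturbation before the real step
        for p, e in perturbs:
            p.sub_(e)
    base_opt.step()
    base_opt.zero_grad()
```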
arXiv Detail & Related papers (2023-09-14T17:13:54Z) - Perception-Oriented Single Image Super-Resolution using Optimal
Objective Estimation [11.830754741007029]
We propose a new SISR framework that applies optimal objectives for each region to generate plausible results in overall areas of high-resolution outputs.
The framework comprises two models: a predictive model that infers an optimal objective map for a given low-resolution (LR) input and a generative model that applies a target objective map to produce the corresponding SR output.
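At the level of tensor shapes, that pipeline can be read as below; `predictor`, `generator`, and the concatenation-based conditioning are hypothetical stand-ins, not the paper's architecture.

```python
# Shape-level sketch of the predict-then-generate pipeline (hypothetical).
import torch

def super_resolve(lr_img, predictor, generator):
    obj_map = predictor(lr_img)        # per-region objective preferences
    return generator(lr_img, obj_map)  # SR output conditioned on the map

class ConcatGenerator(torch.nn.Module):
    """One simple conditioning choice: channel-wise concatenation."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone  # expects image channels + map channels

    def forward(self, lr_img, obj_map):
        return self.backbone(torch.cat([lr_img, obj_map], dim=1))
```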
arXiv Detail & Related papers (2022-11-24T15:45:03Z) - Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INNs) can significantly increase upscaling accuracy by jointly optimizing the downscaling and upscaling cycle.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling with a single trained model.
It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z) - RL-PGO: Reinforcement Learning-based Planar Pose-Graph Optimization [1.4884785898657995]
This paper presents a state-of-the-art Deep Reinforcement Learning (DRL) environment and agent for 2D pose-graph optimization.
We demonstrate that the pose-graph optimization problem can be modeled as a partially observable Markov Decision Process and evaluate performance on real-world and synthetic datasets.
arXiv Detail & Related papers (2022-02-26T20:10:14Z) - Gradient Variance Loss for Structure-Enhanced Image Super-Resolution [16.971608518924597]
We introduce a structure-enhancing loss function, coined Gradient Variance (GV) loss, to generate textures with perceptually pleasing details.
Experimental results show that the GV loss can significantly improve both the Structural Similarity (SSIM) and peak signal-to-noise ratio (PSNR) performance of existing image super-resolution (SR) deep learning models.
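A hedged sketch of a gradient-variance style loss: compare per-patch variances of Sobel gradient maps of the SR output and the HR target. The patch size, grayscale inputs, and the MSE aggregation are our assumptions, not necessarily the paper's settings.

```python
# Gradient-variance style loss on (B, 1, H, W) grayscale tensors (assumed).
import torch
import torch.nn.functional as F

_sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
_sobel_y = _sobel_x.transpose(2, 3)  # transpose of the x-kernel is the y-kernel

def _grad_maps(gray):
    return (F.conv2d(gray, _sobel_x, padding=1),
            F.conv2d(gray, _sobel_y, padding=1))

def _patch_var(x, patch):
    cols = F.unfold(x, kernel_size=patch, stride=patch)  # (B, patch*patch, N)
    return cols.var(dim=1)  # variance of gradients inside each patch

def gv_loss(sr_gray, hr_gray, patch=8):
    loss = 0.0
    for sr_g, hr_g in zip(_grad_maps(sr_gray), _grad_maps(hr_gray)):
        loss = loss + F.mse_loss(_patch_var(sr_g, patch), _patch_var(hr_g, patch))
    return loss
```

Matching the variance of local gradients pushes the model to reproduce the spread of edge intensities in each patch, rather than the averaged-out edges that pixel-wise losses tend to produce.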
arXiv Detail & Related papers (2022-02-02T12:31:05Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
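The core equilibrium-model idea fits in a few lines: the output is a fixed point of a single nonlinear layer `f`, found by iteration rather than by stacking depth. Real deep equilibrium models use quasi-Newton root solvers and implicit differentiation; the naive forward iteration below (which also assumes the state `z` shares the input's shape) only illustrates the forward pass.

```python
# Naive fixed-point iteration for a deep equilibrium forward pass.
import torch

def deq_forward(f, x, z0=None, max_iter=50, tol=1e-4):
    z = torch.zeros_like(x) if z0 is None else z0
    for _ in range(max_iter):
        z_next = f(z, x)  # one application of the single nonlinear layer
        if (z_next - z).norm() < tol * (z.norm() + 1e-8):
            return z_next  # converged to an approximate fixed point
        z = z_next
    return z  # best effort if the tolerance was not reached
```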
arXiv Detail & Related papers (2021-11-25T19:59:33Z) - Normalizing Flow as a Flexible Fidelity Objective for Photo-Realistic
Super-resolution [161.39504409401354]
Super-resolution is an ill-posed problem, where a ground-truth high-resolution image represents only one possibility in the space of plausible solutions.
Yet, the dominant paradigm is to employ pixel-wise losses, such as $L_1$, which drive the prediction towards a blurry average.
We address this issue by revisiting the $L_1$ loss and show that it corresponds to a one-layer conditional flow.
Inspired by this relation, we explore general flows as a fidelity-based alternative to the $L_1$ objective.
We demonstrate that the flexibility of deeper flows leads to better visual quality and consistency when combined with adversarial losses.
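One way to see the stated correspondence (our notation, not the paper's derivation): under a fixed-scale Laplace density centered at the network prediction $f_\theta(x)$, the negative log-likelihood reduces to the $L_1$ loss up to an additive constant, and such a shift-only density is exactly what a one-layer conditional flow expresses:

$$-\log p(y \mid x) = -\log\!\left[\frac{1}{(2b)^{d}}\exp\!\left(-\frac{\lVert y - f_\theta(x)\rVert_1}{b}\right)\right] = \frac{1}{b}\,\lVert y - f_\theta(x)\rVert_1 + d\log(2b).$$

Deeper flows replace this fixed Laplace shape with a learned, more expressive density, which is the flexibility referred to above.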
arXiv Detail & Related papers (2021-11-05T17:56:51Z) - Understanding the Generalization of Adam in Learning Neural Networks
with Proper Regularization [118.50301177912381]
We show that Adam can converge to different solutions of the objective with provably different errors, even with weight decay regularization.
We show that if the objective is convex and weight decay regularization is employed, any optimization algorithm, including Adam, will converge to the same solution.
arXiv Detail & Related papers (2021-08-25T17:58:21Z)