Details or Artifacts: A Locally Discriminative Learning Approach to
Realistic Image Super-Resolution
- URL: http://arxiv.org/abs/2203.09195v1
- Date: Thu, 17 Mar 2022 09:35:50 GMT
- Title: Details or Artifacts: A Locally Discriminative Learning Approach to
Realistic Image Super-Resolution
- Authors: Jie Liang and Hui Zeng and Lei Zhang
- Abstract summary: Single image super-resolution (SISR) with generative adversarial networks (GANs) has recently attracted increasing attention due to its potential to generate rich details.
In this paper, we demonstrate that it is possible to train a GAN-based SISR model which can stably generate perceptually realistic details while inhibiting visual artifacts.
- Score: 28.00231586840797
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Single image super-resolution (SISR) with generative adversarial networks
(GANs) has recently attracted increasing attention due to its potential to
generate rich details. However, GAN training is unstable and often
introduces many perceptually unpleasant artifacts along with the generated
details. In this paper, we demonstrate that it is possible to train a GAN-based
SISR model which can stably generate perceptually realistic details while
inhibiting visual artifacts. Based on the observation that the local statistics
(e.g., residual variance) of artifact areas are often different from the areas
of perceptually friendly details, we develop a framework to discriminate
between GAN-generated artifacts and realistic details, and consequently
generate an artifact map to regularize and stabilize the model training
process. Our proposed locally discriminative learning (LDL) method is simple
yet effective and can be easily plugged into off-the-shelf SISR methods to
boost their performance. Experiments demonstrate that LDL outperforms
state-of-the-art GAN-based SISR methods, achieving not only higher
reconstruction accuracy but also superior perceptual quality on both synthetic
and real-world datasets. Codes and models are available at
https://github.com/csjliang/LDL.
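The key mechanism described above, estimating a pixel-wise artifact map from the local statistics of the SR residual and using it to regularize training, can be sketched roughly as follows. This is a minimal PyTorch illustration under my own assumptions (the window size, the per-image normalization, and the helper names local_variance, artifact_map and regularized_l1_loss are illustrative), not the paper's exact formulation; the authors' implementation is at https://github.com/csjliang/LDL.

```python
import torch
import torch.nn.functional as F

def local_variance(x, window=7):
    """Per-pixel variance of x over a sliding window, averaged over channels."""
    pad = window // 2
    mean = F.avg_pool2d(x, window, stride=1, padding=pad)
    mean_sq = F.avg_pool2d(x * x, window, stride=1, padding=pad)
    return (mean_sq - mean * mean).clamp(min=0).mean(dim=1, keepdim=True)

def artifact_map(sr, hr, window=7):
    """Illustrative artifact map: regions where the SR residual has high local
    variance are treated as likely artifacts and receive larger weights."""
    residual = sr - hr
    var = local_variance(residual, window)
    # Normalize per image so the map can act as a pixel-wise loss weight in [0, 1].
    var_max = var.amax(dim=(2, 3), keepdim=True).clamp(min=1e-8)
    return var / var_max

def regularized_l1_loss(sr, hr):
    """Pixel-wise L1 loss weighted by the artifact map, penalizing artifact-prone
    regions more strongly during GAN training."""
    weight = artifact_map(sr.detach(), hr)  # detach: no gradients through the map
    return (weight * (sr - hr).abs()).mean()
```

Note that this sketch simply up-weights all high-variance residual regions, whereas the statistics in LDL are designed to separate perceptually friendly details from artifacts; detaching the map keeps the generator from gaming the weighting itself.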
Related papers
- Towards Realistic Data Generation for Real-World Super-Resolution [58.88039242455039]
RealDGen is an unsupervised learning data generation framework designed for real-world super-resolution.
We develop content and degradation extraction strategies, which are integrated into a novel content-degradation decoupled diffusion model.
Experiments demonstrate that RealDGen excels in generating large-scale, high-quality paired data that mirrors real-world degradations.
arXiv Detail & Related papers (2024-06-11T13:34:57Z)
- A General Method to Incorporate Spatial Information into Loss Functions for GAN-based Super-resolution Models [25.69505971220203]
Generative Adversarial Networks (GANs) have shown great performance on super-resolution problems.
However, GANs often introduce side effects into the outputs, such as unexpected artifacts and noise.
We propose a general method that can be effectively used in most GAN-based super-resolution (SR) models by introducing essential spatial information into the training process.
arXiv Detail & Related papers (2024-03-15T17:29:16Z)
- Damage GAN: A Generative Model for Imbalanced Data [1.027461951217988]
This study explores the application of Generative Adversarial Networks (GANs) within the context of imbalanced datasets.
We introduce a novel network architecture known as Damage GAN, building upon the ContraD GAN framework which seamlessly integrates GANs and contrastive learning.
arXiv Detail & Related papers (2023-12-08T06:36:33Z)
- DifAugGAN: A Practical Diffusion-style Data Augmentation for GAN-based Single Image Super-resolution [88.13972071356422]
We propose a diffusion-style data augmentation scheme for GAN-based image super-resolution (SR) methods, known as DifAugGAN.
It adapts the diffusion process of generative diffusion models to improve the calibration of the discriminator during training.
DifAugGAN is a plug-and-play strategy for current GAN-based SISR methods that improves discriminator calibration and thus SR performance.
arXiv Detail & Related papers (2023-11-30T12:37:53Z)
- DeSRA: Detect and Delete the Artifacts of GAN-based Real-World Super-Resolution Models [41.60982753592467]
Image super-resolution (SR) with generative adversarial networks (GAN) has achieved great success in restoring realistic details.
However, GAN-based SR models are notorious for inevitably producing unpleasant and undesirable artifacts.
In this paper, we analyze the cause and characteristics of the GAN artifacts produced in unseen test data without ground-truths.
We develop a novel method, namely, DeSRA, to Detect and then Delete those SR Artifacts in practice.
arXiv Detail & Related papers (2023-07-05T17:31:44Z)
- Intriguing Property and Counterfactual Explanation of GAN for Remote Sensing Image Generation [25.96740500337747]
Generative adversarial networks (GANs) have achieved remarkable progress in the natural image field.
However, the GAN model is more sensitive to the size of training data for remote sensing (RS) image generation than for natural image generation.
We propose two innovative adjustment schemes, namely Uniformity Regularization (UR) and Entropy Regularization (ER), to increase the information learned by the GAN model.
arXiv Detail & Related papers (2023-03-09T13:22:50Z)
- Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions [66.05472746340142]
Generative adversarial networks (GANs) are among the most successful models for learning high-complexity, real-world distributions.
This paper shows how GANs can efficiently learn the distribution of real-life images.
arXiv Detail & Related papers (2021-06-04T17:33:29Z)
- Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
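As a rough illustration of this "best-buddy" idea, letting each estimated patch seek its most similar ground-truth patch as supervision instead of the patch at the same location, a minimal PyTorch sketch follows; the patch size, the brute-force global search, and the name best_buddy_loss are assumptions for brevity rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def best_buddy_loss(sr, hr, patch=3):
    """Illustrative best-buddy supervision: every SR patch is matched to its nearest
    HR patch (brute-force search here; a practical implementation would restrict the
    search to a local neighborhood) and supervised by that match."""
    sr_p = F.unfold(sr, patch, stride=patch).transpose(1, 2)  # (B, N_sr, C*patch*patch)
    hr_p = F.unfold(hr, patch, stride=1).transpose(1, 2)      # (B, N_hr, C*patch*patch)
    dist = torch.cdist(sr_p, hr_p)                            # pairwise patch distances
    idx = dist.argmin(dim=2)                                  # index of each patch's best buddy
    buddy = torch.gather(hr_p, 1, idx.unsqueeze(-1).expand(-1, -1, hr_p.size(-1)))
    return (sr_p - buddy).abs().mean()                        # L1 against the chosen targets
```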
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
- On Leveraging Pretrained GANs for Generation with Limited Data [83.32972353800633]
Generative adversarial networks (GANs) can generate highly realistic images that are often indistinguishable by humans from real images.
Most images so generated are not contained in a training dataset, suggesting potential for augmenting training sets with GAN-generated data.
We leverage existing GAN models pretrained on large-scale datasets to introduce additional knowledge, following the concept of transfer learning.
An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
arXiv Detail & Related papers (2020-02-26T21:53:36Z)
- High-Fidelity Synthesis with Disentangled Representation [60.19657080953252]
We propose an Information-Distillation Generative Adversarial Network (ID-GAN) for disentanglement learning and high-fidelity synthesis.
Our method learns a disentangled representation using VAE-based models and distills the learned representation, together with an additional nuisance variable, to a separate GAN-based generator for high-fidelity synthesis.
Despite its simplicity, the proposed method is highly effective, achieving image generation quality comparable to state-of-the-art methods while using the disentangled representation.
arXiv Detail & Related papers (2020-01-13T14:39:40Z)