G-Refine: A General Quality Refiner for Text-to-Image Generation
- URL: http://arxiv.org/abs/2404.18343v1
- Date: Mon, 29 Apr 2024 00:54:38 GMT
- Title: G-Refine: A General Quality Refiner for Text-to-Image Generation
- Authors: Chunyi Li, Haoning Wu, Hongkun Hao, Zicheng Zhang, Tengchuan Kou, Chaofeng Chen, Lei Bai, Xiaohong Liu, Weisi Lin, Guangtao Zhai
- Abstract summary: We introduce G-Refine, a general image quality refiner designed to enhance low-quality images without compromising the integrity of high-quality ones.
The model is composed of three interconnected modules: a perception quality indicator, an alignment quality indicator, and a general quality enhancement module.
Extensive experiments show that AIGIs refined by G-Refine outperform those produced by alternative optimization methods on 10+ quality metrics across 4 databases.
- Score: 74.16137826891827
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the evolution of Text-to-Image (T2I) models, the quality defects of AI-Generated Images (AIGIs) pose a significant barrier to their widespread adoption. In terms of both perception and alignment, existing models cannot always guarantee high-quality results. To mitigate this limitation, we introduce G-Refine, a general image quality refiner designed to enhance low-quality images without compromising the integrity of high-quality ones. The model is composed of three interconnected modules: a perception quality indicator, an alignment quality indicator, and a general quality enhancement module. Based on the mechanisms of the Human Visual System (HVS) and syntax trees, the first two indicators can respectively identify perception and alignment deficiencies, and the last module can apply targeted quality enhancement accordingly. Extensive experimentation reveals that AIGIs refined by G-Refine outperform those produced by alternative optimization methods on 10+ quality metrics across 4 databases. This improvement significantly contributes to the practical application of contemporary T2I models, paving the way for their broader adoption. The code will be released at https://github.com/Q-Future/Q-Refine.
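As a rough illustration of the pipeline the abstract describes, the sketch below wires the three modules together: two indicators score an image, and enhancement is applied only when a score falls below threshold. The indicator and enhancer callables and the thresholds here are placeholder assumptions, not the released Q-Refine API (the paper's indicators are learned models built on HVS mechanisms and syntax trees); only the control flow is meant to mirror the design of refining low-quality images while leaving high-quality ones untouched.

```python
from typing import Callable

def g_refine(
    image,
    prompt: str,
    perception_quality: Callable,   # module 1: image -> score in [0, 1]
    alignment_quality: Callable,    # module 2: (image, prompt) -> score in [0, 1]
    enhance: Callable,              # module 3: targeted quality enhancement
    tau_p: float = 0.5,
    tau_a: float = 0.5,
):
    """Refine a generated image only where the quality indicators flag defects.

    Hypothetical sketch: the callables and thresholds are assumptions, not the
    paper's released interface.
    """
    p = perception_quality(image)
    a = alignment_quality(image, prompt)

    # High-quality images pass through untouched, preserving their integrity.
    if p >= tau_p and a >= tau_a:
        return image

    # Apply enhancement targeted at whichever aspect is deficient.
    return enhance(
        image,
        prompt,
        fix_perception=(p < tau_p),
        fix_alignment=(a < tau_a),
    )
```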
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Q-Refine: A Perceptual Quality Refiner for AI-Generated Image [85.89840673640028]
A quality-aware refiner named Q-Refine is proposed.
It is the first to use an Image Quality Assessment (IQA) metric to guide the refining process.
It can serve as a general refiner that optimizes AIGIs at both the fidelity and aesthetic quality levels.
arXiv Detail & Related papers (2024-01-02T09:11:23Z)
- Learning Generalizable Perceptual Representations for Data-Efficient No-Reference Image Quality Assessment [7.291687946822539]
A major drawback of state-of-the-art NR-IQA techniques is their reliance on a large number of human annotations.
We enable the learning of low-level quality features agnostic to distortion types by introducing a novel quality-aware contrastive loss (a generic sketch of such a loss follows this list).
We obtain zero-shot quality predictions from both pathways in a completely blind setting.
arXiv Detail & Related papers (2023-12-08T05:24:21Z)
- AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment [62.8834581626703]
We build AGIQA-3K, the most comprehensive subjective quality database to date.
We conduct a benchmark experiment on this database to evaluate the consistency between current Image Quality Assessment (IQA) models and human perception.
We believe that the fine-grained subjective scores in AGIQA-3K will inspire subsequent AGI quality models to fit human subjective perception mechanisms.
arXiv Detail & Related papers (2023-06-07T18:28:21Z)
- Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild [38.197794061203055]
We propose a Mixture of Experts approach to train two separate encoders to learn high-level content and low-level image quality features in an unsupervised setting.
We deploy the complementary low- and high-level image representations obtained from the Re-IQA framework to train a linear regression model.
Our method achieves state-of-the-art performance on multiple large-scale image quality assessment databases.
arXiv Detail & Related papers (2023-04-02T05:06:51Z)
- GR-GAN: Gradual Refinement Text-to-image Generation [15.99543073122574]
This paper proposes a Gradual Refinement Generative Adversarial Network (GR-GAN) to alleviate the problem efficiently.
A GRG module is designed to generate images from low resolution to high resolution under the corresponding text constraints.
An ITM module is designed to provide image-text matching losses at both the sentence-image level and the word-region level.
arXiv Detail & Related papers (2022-05-23T12:42:04Z)
- Generalized Visual Quality Assessment of GAN-Generated Face Images [79.47386781978531]
We study subjective and objective quality toward generalized quality assessment of GAN-generated face images (GFIs).
We develop a quality assessment model that delivers accurate quality predictions for GFIs from both available and unseen GAN algorithms.
arXiv Detail & Related papers (2022-01-28T07:54:49Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
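For the quality-aware contrastive loss mentioned in the data-efficient NR-IQA entry above, a minimal generic formulation treats embeddings of images sharing a quality level as positives to pull together and all other pairs as negatives. The sketch below is a standard InfoNCE-style loss under that assumption; the function name, the quality-bin labeling, and the exact formulation are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def quality_aware_contrastive_loss(
    embeddings: torch.Tensor,    # (N, D) feature embeddings
    quality_bins: torch.Tensor,  # (N,) integer quality-level labels
    temperature: float = 0.1,
) -> torch.Tensor:
    """InfoNCE-style loss where samples in the same quality bin are positives.

    A generic sketch, not the paper's exact formulation.
    """
    z = F.normalize(embeddings, dim=1)         # unit-norm features
    sim = z @ z.t() / temperature              # (N, N) pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))  # exclude self-similarity

    # Positives: distinct pairs whose quality bins match.
    pos = (quality_bins.unsqueeze(0) == quality_bins.unsqueeze(1)) & ~eye

    # Log-probability of each pair under a softmax over its row.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average the positives' log-probabilities per anchor, skipping anchors
    # with no positive in the batch.
    pos_counts = pos.sum(dim=1)
    per_anchor = -(log_prob.masked_fill(~pos, 0.0)).sum(dim=1) / pos_counts.clamp(min=1)
    return per_anchor[pos_counts > 0].mean()
```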