PSCR: Patches Sampling-based Contrastive Regression for AIGC Image Quality Assessment
- URL: http://arxiv.org/abs/2312.05897v1
- Date: Sun, 10 Dec 2023 14:18:53 GMT
- Title: PSCR: Patches Sampling-based Contrastive Regression for AIGC Image Quality Assessment
- Authors: Jiquan Yuan, Xinyan Cao, Linjing Cao, Jinlong Lin, and Xixin Cao
- Abstract summary: We propose a contrastive regression framework to leverage differences among various generated images for learning a better representation space.
We conduct extensive experiments on three mainstream AIGCIQA databases including AGIQA-1K, AGIQA-3K and AIGCIQA2023.
Results show significant improvements in model performance with the introduction of our proposed PSCR framework.
- Score: 1.1744028458220428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Artificial Intelligence Generated Content (AIGC) has gained widespread attention beyond the computer science community. Due to various issues arising from the continuous creation of AI-generated images (AIGIs), AIGC image quality assessment (AIGCIQA), which aims to evaluate the quality of AIGIs from the perspective of human perception, has emerged as a novel topic in the field of computer vision. However, most existing AIGCIQA methods regress predicted scores directly from a single generated image, overlooking the inherent differences among AIGIs and their scores. Additionally, operations such as resizing and cropping may cause global geometric distortions and information loss, limiting model performance. To address these issues, we propose a patches sampling-based contrastive regression (PSCR) framework. We introduce a contrastive regression framework that leverages differences among various generated images to learn a better representation space, in which differences and score rankings among images can be measured by their relative scores. By selecting exemplar AIGIs as references, we also overcome the limitation of previous models that could not utilize reference images on no-reference image databases. To avoid geometric distortions and information loss in image inputs, we further propose a patches sampling strategy. To demonstrate the effectiveness of the proposed PSCR framework, we conduct extensive experiments on three mainstream AIGCIQA databases, including AGIQA-1K, AGIQA-3K, and AIGCIQA2023. The results show significant improvements in model performance with the introduction of the PSCR framework. Code will be available at https://github.com/jiquan123/PSCR.
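As a rough illustration of the two ideas in the abstract, the sketch below samples fixed-size patches from the full-resolution image instead of resizing it, and regresses the relative score between a query image and a score-annotated exemplar. This is a minimal sketch under assumed settings, not the authors' implementation: the backbone choice, patch size, feature dimensions, and names such as sample_patches and RelativeScoreModel are illustrative assumptions.

```python
# Illustrative sketch of patch sampling + relative-score (contrastive) regression.
# Not the paper's implementation; backbone, patch size, and head are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


def sample_patches(img: torch.Tensor, patch: int = 224, n: int = 4) -> torch.Tensor:
    """Randomly crop n fixed-size windows from a (C, H, W) image without resizing,
    so the global geometry of the AIGI is left untouched (assumes H, W >= patch)."""
    _, h, w = img.shape
    ys = torch.randint(0, max(h - patch, 1), (n,)).tolist()
    xs = torch.randint(0, max(w - patch, 1), (n,)).tolist()
    return torch.stack([img[:, y:y + patch, x:x + patch] for y, x in zip(ys, xs)])


class RelativeScoreModel(nn.Module):
    """Encodes query and exemplar patches and regresses their score difference."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # expose 512-d patch features
        self.backbone = backbone
        self.head = nn.Linear(512 * 2, 1)    # relative-score regressor

    def encode(self, patches: torch.Tensor) -> torch.Tensor:
        # (n, C, H, W) -> (n, 512) -> average into one image-level feature
        return self.backbone(patches).mean(dim=0)

    def forward(self, query_patches: torch.Tensor,
                exemplar_patches: torch.Tensor) -> torch.Tensor:
        q = self.encode(query_patches)
        e = self.encode(exemplar_patches)
        return self.head(torch.cat([q, e], dim=-1)).squeeze(-1)


# At inference time, the exemplar's known score anchors the prediction, e.g.:
#   diff = model(sample_patches(query_img), sample_patches(exemplar_img))
#   predicted_score = exemplar_score + diff.item()
```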
Related papers
- A Survey on All-in-One Image Restoration: Taxonomy, Evaluation and Future Trends [67.43992456058541]
Image restoration (IR) refers to the process of improving visual quality of images while removing degradation, such as noise, blur, weather effects, and so on.
Traditional IR methods typically target specific types of degradation, which limits their effectiveness in real-world scenarios with complex distortions.
The all-in-one image restoration (AiOIR) paradigm has emerged, offering a unified framework that adeptly addresses multiple degradation types.
arXiv Detail & Related papers (2024-10-19T11:11:09Z)
- A Sanity Check for AI-generated Image Detection [49.08585395873425]
We present a sanity check on whether the task of AI-generated image detection has been solved.
To quantify the generalization of existing methods, we evaluate 9 off-the-shelf AI-generated image detectors on the Chameleon dataset.
We propose AIDE (AI-generated Image DEtector with Hybrid Features), which leverages multiple experts to simultaneously extract visual artifacts and noise patterns.
arXiv Detail & Related papers (2024-06-27T17:59:49Z)
- Improving Interpretability and Robustness for the Detection of AI-Generated Images [6.116075037154215]
We analyze existing state-of-the-art AIGI detection methods based on frozen CLIP embeddings.
We show how to interpret them, shedding light on how images produced by various AI generators differ from real ones.
arXiv Detail & Related papers (2024-06-21T10:33:09Z)
- PKU-AIGIQA-4K: A Perceptual Quality Assessment Database for Both Text-to-Image and Image-to-Image AI-Generated Images [1.5265677582796984]
We establish a large scale perceptual quality assessment database for both text-to-image and image-to-image AIGIs, named PKU-AIGIQA-4K.
We propose three image quality assessment (IQA) methods based on pre-trained models that include a no-reference method NR-AIGCIQA, a full-reference method FR-AIGCIQA, and a partial-reference method PR-AIGCIQA.
arXiv Detail & Related papers (2024-04-29T03:57:43Z)
- AIGCOIQA2024: Perceptual Quality Assessment of AI Generated Omnidirectional Images [70.42666704072964]
We establish a large-scale AI generated omnidirectional image IQA database named AIGCOIQA2024.
A subjective IQA experiment is conducted to assess human visual preferences from three perspectives.
We conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
arXiv Detail & Related papers (2024-04-01T10:08:23Z)
- TIER: Text-Image Encoder-based Regression for AIGC Image Quality Assessment [2.59079758388817]
In AIGCIQA tasks, images are typically generated by generative models using text prompts.
Most existing AIGCIQA methods regress predicted scores directly from individual generated images.
We propose a text-image encoder-based regression (TIER) framework to address this issue.
arXiv Detail & Related papers (2024-01-08T12:35:15Z)
- PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI Generated Images [1.6031185986328562]
We establish a human perception-based image-to-image AIGCIQA database, named PKU-I2IQA.
We propose two benchmark models: NR-AIGCIQA based on the no-reference image quality assessment method and FR-AIGCIQA based on the full-reference image quality assessment method.
arXiv Detail & Related papers (2023-11-27T05:53:03Z)
- ARNIQA: Learning Distortion Manifold for Image Quality Assessment [28.773037051085318]
No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
arXiv Detail & Related papers (2023-10-20T17:22:25Z)
- AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment [62.8834581626703]
We build the most comprehensive subjective quality database AGIQA-3K so far.
We conduct a benchmark experiment on this database to evaluate the consistency between the current Image Quality Assessment (IQA) model and human perception.
We believe that the fine-grained subjective scores in AGIQA-3K will inspire subsequent AGI quality models to fit human subjective perception mechanisms.
arXiv Detail & Related papers (2023-06-07T18:28:21Z)
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and mitigates aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.