Exploring the Naturalness of AI-Generated Images
- URL: http://arxiv.org/abs/2312.05476v3
- Date: Mon, 4 Mar 2024 13:30:44 GMT
- Title: Exploring the Naturalness of AI-Generated Images
- Authors: Zijian Chen, Wei Sun, Haoning Wu, Zicheng Zhang, Jun Jia, Zhongpeng
Ji, Fengyu Sun, Shangling Jui, Xiongkuo Min, Guangtao Zhai, Wenjun Zhang
- Abstract summary: We take the first step to benchmark and assess the visual naturalness of AI-generated images.
We propose the Joint Objective Image Naturalness evaluaTor (JOINT) to automatically predict the naturalness of AGIs in a way that aligns with human ratings.
We demonstrate that JOINT significantly outperforms baselines, providing more subjectively consistent results on naturalness assessment.
- Score: 59.04528584651131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of Artificial Intelligence-Generated Images (AGIs) has
greatly expanded the Image Naturalness Assessment (INA) problem. Different from
early definitions that mainly focus on tone-mapped images with limited
distortions (e.g., exposure, contrast, and color reproduction), INA on
AI-generated images is especially challenging as it has more diverse contents
and could be affected by factors from multiple perspectives, including
low-level technical distortions and high-level rationality distortions. In this
paper, we take the first step to benchmark and assess the visual naturalness of
AI-generated images. First, we construct the AI-Generated Image Naturalness
(AGIN) database by conducting a large-scale subjective study to collect human
opinions on the overall naturalness as well as perceptions from technical and
rationality perspectives. AGIN verifies that naturalness is universally and
disparately affected by technical and rationality distortions. Second, we
propose the Joint Objective Image Naturalness evaluaTor (JOINT) to
automatically predict the naturalness of AGIs in a way that aligns with human
ratings. Specifically, JOINT imitates human reasoning in naturalness
evaluation by jointly learning both technical and rationality features. We
demonstrate that JOINT significantly outperforms baselines, providing more
subjectively consistent results on naturalness assessment.
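The abstract does not detail JOINT's architecture, so the following is only a minimal sketch of the two-branch joint-learning idea it describes, assuming PyTorch; the backbones, feature sizes, and module names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-branch naturalness evaluator in the spirit of
# JOINT: one branch for low-level technical features, one for high-level
# rationality features, fused into a single naturalness score.
# All backbone choices and sizes are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torchvision.models as models

class JointNaturalnessEvaluator(nn.Module):
    def __init__(self):
        super().__init__()
        # Technical branch: a shallow CNN is a common choice for
        # low-level distortions (noise, blur, exposure).
        self.technical = models.resnet18(weights=None)
        self.technical.fc = nn.Identity()   # expose 512-d features
        # Rationality branch: a deeper backbone for semantic plausibility.
        self.rationality = models.resnet50(weights=None)
        self.rationality.fc = nn.Identity() # expose 2048-d features
        # Fusion head regresses a single naturalness score (e.g., a MOS).
        self.head = nn.Sequential(
            nn.Linear(512 + 2048, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, x):
        t = self.technical(x)     # low-level technical features
        r = self.rationality(x)   # high-level rationality features
        return self.head(torch.cat([t, r], dim=1)).squeeze(-1)

model = JointNaturalnessEvaluator()
score = model(torch.randn(2, 3, 224, 224))  # two dummy images
print(score.shape)  # torch.Size([2])
```

In an actual setup, such a model would be fit against the AGIN mean opinion scores (for instance with an L1 or rank-correlation loss) so that the fused score tracks human naturalness ratings.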
Related papers
- ANID: How Far Are We? Evaluating the Discrepancies Between AI-synthesized Images and Natural Images through Multimodal Guidance [19.760989919485894]
We introduce an AI-Natural Image Discrepancy Evaluation benchmark aimed at addressing the critical question: how far are AI-generated images from truly realistic images?
We have constructed a large-scale multimodal dataset, the Distinguishing Natural and AI-generated Images (DNAI) dataset, which includes over 440,000 AIGI samples generated by 8 representative models.
Our fine-grained assessment framework provides a comprehensive evaluation of the DNAI dataset across five key dimensions.
arXiv Detail & Related papers (2024-12-23T15:08:08Z)
- Detecting Discrepancies Between AI-Generated and Natural Images Using Uncertainty [91.64626435585643]
We propose a novel approach for detecting AI-generated images by leveraging predictive uncertainty to mitigate misuse and associated risks.
The motivation arises from the fundamental assumption regarding the distributional discrepancy between natural and AI-generated images.
We propose to leverage large-scale pre-trained models to calculate uncertainty as the score for detecting AI-generated images (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-12-08T11:32:25Z)
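A minimal sketch of the uncertainty-as-score idea in the entry above, assuming a frozen torchvision encoder and Gaussian input perturbations; the paper's concrete uncertainty estimator may differ.

```python
# Hedged sketch: embed an image and several noise-perturbed copies with a
# frozen pre-trained encoder, and use feature instability as the detection
# score. Encoder choice, sigma, and n_perturb are illustrative assumptions.
import torch
import torchvision.models as models

encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # expose 2048-d pooled features
encoder.eval()

@torch.no_grad()
def uncertainty_score(img, n_perturb=8, sigma=0.05):
    """img: (3, H, W) tensor, already normalized for the encoder."""
    noisy = img.unsqueeze(0) + sigma * torch.randn(n_perturb, *img.shape)
    feats = encoder(noisy)  # (n_perturb, 2048)
    # Higher per-dimension variance across perturbations suggests the
    # image sits off the natural-image manifold.
    return feats.var(dim=0).mean().item()

print(uncertainty_score(torch.randn(3, 224, 224)))  # threshold to flag AGIs
```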
- Orthogonal Subspace Decomposition for Generalizable AI-Generated Image Detection [58.87142367781417]
A naively trained detector tends to overfit to the limited and monotonous fake patterns, causing the feature space to become highly constrained and low-rank.
One potential remedy is to incorporate the pre-trained knowledge of vision foundation models to expand the feature space.
By freezing the principal components and adapting only the remaining components, we preserve the pre-trained knowledge while learning forgery-related patterns (see the sketch after this entry).
arXiv Detail & Related papers (2024-11-23T19:10:32Z)
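A minimal sketch of the principal/residual split described above, assuming the decomposition is an SVD of a pre-trained weight matrix; the rank k and the exact parameterization are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: decompose a pre-trained weight matrix with SVD, freeze
# the top-k principal components, and train only the remaining subspace.
import torch
import torch.nn as nn

class PrincipalFrozenLinear(nn.Module):
    def __init__(self, pretrained_weight, k):
        super().__init__()
        U, S, Vh = torch.linalg.svd(
            pretrained_weight.detach(), full_matrices=False
        )
        # Frozen principal part: top-k singular directions keep the
        # pre-trained knowledge intact.
        self.register_buffer(
            "w_principal", U[:, :k] @ torch.diag(S[:k]) @ Vh[:k]
        )
        # Trainable residual part: remaining directions adapt to forgery cues.
        self.w_residual = nn.Parameter(U[:, k:] @ torch.diag(S[k:]) @ Vh[k:])

    def forward(self, x):
        return x @ (self.w_principal + self.w_residual).T

layer = PrincipalFrozenLinear(torch.randn(256, 128), k=64)
out = layer(torch.randn(4, 128))  # (4, 256)
```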
- Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study [6.661332913985627]
We investigate how humans perceive and distinguish between real and fake images.
Our analysis of StyleGAN-3 generated images reveals that participants can distinguish real from fake faces with an average accuracy of 76.80%.
arXiv Detail & Related papers (2024-09-23T19:34:30Z)
- Visual Verity in AI-Generated Imagery: Computational Metrics and Human-Centric Analysis [0.0]
We introduce and validate a questionnaire called Visual Verity, which measures photorealism, image quality, and text-image alignment.
We also analyzed statistical properties, finding that camera-generated images scored lower in hue, saturation, and brightness (a minimal sketch of such HSV statistics follows this entry).
Our findings highlight the need for refining computational metrics to better capture human visual perception.
arXiv Detail & Related papers (2024-08-22T23:29:07Z)
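A minimal sketch of the kind of per-image HSV statistics the Visual Verity entry above refers to, using Pillow and NumPy as assumed tooling; the study's actual measurement pipeline is not reproduced here.

```python
# Hedged sketch: mean hue, saturation, and brightness of one image,
# the properties along which camera and AI-generated images differed.
import numpy as np
from PIL import Image

def hsv_stats(path):
    hsv = np.asarray(
        Image.open(path).convert("RGB").convert("HSV"), dtype=np.float64
    ) / 255.0
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return {"hue": h.mean(), "saturation": s.mean(), "brightness": v.mean()}

# Example: compare a camera photo against an AI-generated image.
# print(hsv_stats("photo.jpg"), hsv_stats("generated.png"))
```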
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Quality Assessment for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents MINT-IQA, a model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z)
- AIGCOIQA2024: Perceptual Quality Assessment of AI Generated Omnidirectional Images [70.42666704072964]
We establish a large-scale AI generated omnidirectional image IQA database named AIGCOIQA2024.
A subjective IQA experiment is conducted to assess human visual preferences from three perspectives.
We conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
arXiv Detail & Related papers (2024-04-01T10:08:23Z)
- PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI Generated Images [1.6031185986328562]
We establish a human perception-based image-to-image AIGCIQA database, named PKU-I2IQA.
We propose two benchmark models: NR-AIGCIQA, based on the no-reference image quality assessment method, and FR-AIGCIQA, based on the full-reference method (the two settings are contrasted in the sketch below).
arXiv Detail & Related papers (2023-11-27T05:53:03Z)
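A minimal sketch contrasting the two settings named above: a no-reference (NR) model scores the generated image alone, while a full-reference (FR) model also conditions on a reference image; backbones and head sizes are illustrative assumptions, not the PKU-I2IQA code.

```python
# Hedged sketch of NR vs. FR quality assessment heads on a shared backbone.
import torch
import torch.nn as nn
import torchvision.models as models

def backbone():
    m = models.resnet18(weights=None)
    m.fc = nn.Identity()  # expose 512-d pooled features
    return m

class NRModel(nn.Module):
    """No-reference: predicts quality from the generated image alone."""
    def __init__(self):
        super().__init__()
        self.enc, self.head = backbone(), nn.Linear(512, 1)
    def forward(self, img):
        return self.head(self.enc(img))

class FRModel(nn.Module):
    """Full-reference: also sees a reference image for comparison."""
    def __init__(self):
        super().__init__()
        self.enc, self.head = backbone(), nn.Linear(512 * 2, 1)
    def forward(self, img, ref):
        return self.head(torch.cat([self.enc(img), self.enc(ref)], dim=1))

img, ref = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
print(NRModel()(img).shape, FRModel()(img, ref).shape)  # quality scores
```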