Exploring the Naturalness of AI-Generated Images
- URL: http://arxiv.org/abs/2312.05476v3
- Date: Mon, 4 Mar 2024 13:30:44 GMT
- Title: Exploring the Naturalness of AI-Generated Images
- Authors: Zijian Chen, Wei Sun, Haoning Wu, Zicheng Zhang, Jun Jia, Zhongpeng
Ji, Fengyu Sun, Shangling Jui, Xiongkuo Min, Guangtao Zhai, Wenjun Zhang
- Abstract summary: We take the first step to benchmark and assess the visual naturalness of AI-generated images.
We propose the Joint Objective Image Naturalness evaluaTor (JOINT) to automatically predict the naturalness of AGIs in alignment with human ratings.
We demonstrate that JOINT significantly outperforms baselines by providing more subjectively consistent results on naturalness assessment.
- Score: 59.04528584651131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of Artificial Intelligence-Generated Images (AGIs) has
greatly expanded the Image Naturalness Assessment (INA) problem. Different from
early definitions that mainly focus on tone-mapped images with limited
distortions (e.g., exposure, contrast, and color reproduction), INA on
AI-generated images is especially challenging because these images have more
diverse content and can be affected by factors from multiple perspectives, including
low-level technical distortions and high-level rationality distortions. In this
paper, we take the first step to benchmark and assess the visual naturalness of
AI-generated images. First, we construct the AI-Generated Image Naturalness
(AGIN) database by conducting a large-scale subjective study to collect human
opinions on the overall naturalness as well as perceptions from technical and
rationality perspectives. AGIN verifies that naturalness is universally and
disparately affected by technical and rationality distortions. Second, we
propose the Joint Objective Image Naturalness evaluaTor (JOINT) to
automatically predict the naturalness of AGIs in alignment with human ratings.
Specifically, JOINT imitates human reasoning in naturalness evaluation by
jointly learning both technical and rationality features. We demonstrate that
JOINT significantly outperforms baselines by providing more subjectively
consistent results on naturalness assessment.
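The abstract describes JOINT as jointly learning technical and rationality features; a minimal sketch of that two-branch idea in PyTorch is shown below. The backbones, feature sizes, and fusion scheme are illustrative assumptions, not the authors' actual architecture.

```python
# Illustrative sketch of a JOINT-style evaluator: one branch for low-level
# technical features, one for high-level rationality features, fused for
# the overall naturalness score. All components are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class JointNaturalnessEvaluator(nn.Module):
    def __init__(self):
        super().__init__()
        self.technical = models.resnet18(weights=None)    # technical branch
        self.technical.fc = nn.Identity()                 # 512-d features
        self.rationality = models.resnet18(weights=None)  # rationality branch
        self.rationality.fc = nn.Identity()
        self.tech_head = nn.Linear(512, 1)    # technical rating
        self.rat_head = nn.Linear(512, 1)     # rationality rating
        self.joint_head = nn.Linear(1024, 1)  # overall naturalness

    def forward(self, x):
        f_t = self.technical(x)
        f_r = self.rationality(x)
        fused = torch.cat([f_t, f_r], dim=1)
        return self.tech_head(f_t), self.rat_head(f_r), self.joint_head(fused)
```

In this reading, each branch would be supervised by the corresponding subjective rating collected in AGIN (technical, rationality), while the fused head regresses the overall naturalness score.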
Related papers
- Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study [6.661332913985627]
We investigate how humans perceive and distinguish between real and fake images.
Our analysis of StyleGAN-3 generated images reveals that participants can distinguish real from fake faces with an average accuracy of 76.80%.
arXiv Detail & Related papers (2024-09-23T19:34:30Z) - Visual Verity in AI-Generated Imagery: Computational Metrics and Human-Centric Analysis [0.0]
We introduce and validate a questionnaire called Visual Verity, which measures photorealism, image quality, and text-image alignment.
We also analyzed statistical properties, finding that camera-generated images scored lower in hue, saturation, and brightness.
Our findings highlight the need for refining computational metrics to better capture human visual perception.
arXiv Detail & Related papers (2024-08-22T23:29:07Z) - A Sanity Check for AI-generated Image Detection [49.08585395873425]
- A Sanity Check for AI-generated Image Detection [49.08585395873425]
We present a sanity check on whether the task of AI-generated image detection has been solved.
To quantify the generalization of existing methods, we evaluate 9 off-the-shelf AI-generated image detectors on the Chameleon dataset.
We propose AIDE (AI-generated Image DEtector with Hybrid Features), which leverages multiple experts to simultaneously extract visual artifacts and noise patterns.
arXiv Detail & Related papers (2024-06-27T17:59:49Z) - RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z) - Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z) - AIGCOIQA2024: Perceptual Quality Assessment of AI Generated Omnidirectional Images [70.42666704072964]
We establish a large-scale AI-generated omnidirectional image IQA database named AIGCOIQA2024.
A subjective IQA experiment is conducted to assess human visual preferences from three perspectives.
We conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
arXiv Detail & Related papers (2024-04-01T10:08:23Z) - PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI
Generated Images [1.6031185986328562]
We establish a human perception-based image-to-image AIGCIQA database, named PKU-I2IQA.
We propose two benchmark models: NR-AIGCIQA, based on the no-reference image quality assessment method, and FR-AIGCIQA, based on the full-reference image quality assessment method.
arXiv Detail & Related papers (2023-11-27T05:53:03Z) - Seeing is not always believing: Benchmarking Human and Model Perception
- Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images [66.20578637253831]
There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
arXiv Detail & Related papers (2023-04-25T17:51:59Z) - The Value of AI Guidance in Human Examination of Synthetically-Generated
Faces [4.144518961834414]
We investigate whether human-guided synthetic face detectors can assist non-expert human operators in the task of synthetic image detection.
We conducted a large-scale experiment with more than 1,560 subjects classifying whether an image shows an authentic or synthetically-generated face.
Models trained with human guidance offer better support for human examination of face images than models trained conventionally with cross-entropy loss.
arXiv Detail & Related papers (2022-08-22T18:45:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.