Characterizing Photorealism and Artifacts in Diffusion Model-Generated Images
- URL: http://arxiv.org/abs/2502.11989v1
- Date: Mon, 17 Feb 2025 16:28:15 GMT
- Title: Characterizing Photorealism and Artifacts in Diffusion Model-Generated Images
- Authors: Negar Kamali, Karyn Nakamura, Aakriti Kumar, Angelos Chatzimparmpas, Jessica Hullman, Matthew Groh
- Abstract summary: Given the challenge to public trust in media posed by photorealistic AI-generated images, we conducted a large-scale experiment measuring human detection accuracy.
We find that scene complexity, artifact types within an image, display time of an image, and human curation of AI-generated images all play significant roles in how accurately people distinguish real from AI-generated images.
- Score: 13.097947037585671
- Abstract: Diffusion model-generated images can appear indistinguishable from authentic photographs, but these images often contain artifacts and implausibilities that reveal their AI-generated provenance. Given the challenge to public trust in media posed by photorealistic AI-generated images, we conducted a large-scale experiment measuring human detection accuracy on 450 diffusion-model generated images and 149 real images. Based on collecting 749,828 observations and 34,675 comments from 50,444 participants, we find that scene complexity of an image, artifact types within an image, display time of an image, and human curation of AI-generated images all play significant roles in how accurately people distinguish real from AI-generated images. Additionally, we propose a taxonomy characterizing artifacts often appearing in images generated by diffusion models. Our empirical observations and taxonomy offer nuanced insights into the capabilities and limitations of diffusion models to generate photorealistic images in 2024.
Related papers
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
The learned artifact detector is then used in the second stage to tune the diffusion model by assigning a per-pixel confidence map to each image.
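The abstract does not spell out how the confidence map enters training, so below is a minimal sketch of one plausible reading, assuming a PyTorch-style setup: the detector's per-pixel artifact confidence up-weights the diffusion reconstruction loss where artifacts are likely. The function name, tensor shapes, and weighting scheme are illustrative assumptions, not DiffDoctor's implementation.

```python
# Illustrative sketch (assumed, not DiffDoctor's actual code): weight a
# diffusion fine-tuning loss by a detector's per-pixel artifact confidence.
import torch
import torch.nn.functional as F

def artifact_weighted_loss(pred_noise, true_noise, artifact_confidence):
    """pred_noise, true_noise: (B, C, H, W); artifact_confidence: (B, 1, H, W)
    in [0, 1], where 1 means the detector flags a likely artifact."""
    per_pixel = F.mse_loss(pred_noise, true_noise, reduction="none")
    # Penalize the model more at pixels where it tends to produce artifacts.
    weights = 1.0 + artifact_confidence  # assumed weighting scheme
    return (weights * per_pixel).mean()

# Dummy usage inside a hypothetical training step:
pred = torch.randn(2, 3, 64, 64, requires_grad=True)
loss = artifact_weighted_loss(pred, torch.randn(2, 3, 64, 64),
                              torch.rand(2, 1, 64, 64))
loss.backward()
```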
arXiv Detail & Related papers (2025-01-21T18:56:41Z)
- Semi-Truths: A Large-Scale Dataset of AI-Augmented Images for Evaluating Robustness of AI-Generated Image Detectors
We introduce SEMI-TRUTHS, featuring 27,600 real images, 223,400 masks, and 1,472,700 AI-augmented images.
Each augmented image is accompanied by metadata for standardized and targeted evaluation of detector robustness.
Our findings suggest that state-of-the-art detectors exhibit varying sensitivities to the types and degrees of perturbations, data distributions, and augmentation methods used.
arXiv Detail & Related papers (2024-11-12T01:17:27Z)
- Crafting Synthetic Realities: Examining Visual Realism and Misinformation Potential of Photorealistic AI-Generated Images
This study unpacks the photorealism of AI-generated images (AIGIs) along four key dimensions: content, human, aesthetic, and production features.
Photorealistic AIGIs often depict human figures, especially celebrities and politicians, with a high degree of surrealism and aesthetic professionalism.
arXiv Detail & Related papers (2024-09-26T02:46:43Z)
- Generating Realistic X-ray Scattering Images Using Stable Diffusion and Human-in-the-loop Annotations
We fine-tuned a foundational stable diffusion model to generate new scientific images from given prompts.
Some of the generated images exhibit significant unrealistic artifacts, commonly known as "hallucinations".
We trained various computer vision models on a dataset composed of 60% human-approved generated images and 40% experimental images to detect unrealistic images.
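As a rough illustration of that training setup, here is a hedged sketch of assembling the 60/40 mix; the directory layout, file extension, and sampling logic are assumptions, and only the split ratio comes from the abstract.

```python
# Hypothetical sketch of building the 60/40 training mix described above.
# Paths, extensions, and sampling are assumed; only the ratio is from the abstract.
import random
from pathlib import Path

def build_mixed_dataset(generated_dir, experimental_dir, total, seed=0):
    rng = random.Random(seed)
    gen = [(p, 1) for p in Path(generated_dir).glob("*.png")]      # 1 = generated
    real = [(p, 0) for p in Path(experimental_dir).glob("*.png")]  # 0 = experimental
    n_gen = int(total * 0.6)
    dataset = rng.sample(gen, n_gen) + rng.sample(real, total - n_gen)
    rng.shuffle(dataset)
    return dataset  # (path, label) pairs for any off-the-shelf vision classifier
```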
arXiv Detail & Related papers (2024-08-22T20:23:04Z)
- How to Distinguish AI-Generated Images from Authentic Photographs
The guide identifies five categories of artifacts and implausibilities that often appear in AI-generated images.
We generated 138 images with diffusion models, curated 9 images from social media, and curated 42 real photographs.
By drawing attention to these kinds of artifacts and implausibilities, we aim to better equip people to distinguish AI-generated images from real photographs.
arXiv Detail & Related papers (2024-06-12T21:23:27Z)
- The Adversarial AI-Art: Understanding, Generation, Detection, and Benchmarking
We present a systematic attempt at understanding and detecting AI-generated images (AI-art) in adversarial scenarios.
The dataset, named ARIA, contains over 140K images in five categories: artworks (painting), social media images, news photos, disaster scenes, and anime pictures.
arXiv Detail & Related papers (2024-04-22T21:00:13Z)
- Let Real Images be as a Judger, Spotting Fake Images Synthesized with Generative Models
We study the artifact patterns in fake images synthesized by different generative models.
In this paper, we employ natural traces shared only by real images as an additional predictive target in the detector.
Our proposed method achieves 96.1% mAP, significantly outperforming the baselines.
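The abstract leaves the detector's architecture unspecified; the sketch below shows one way an auxiliary "natural trace" target could sit alongside the real/fake head, assuming a PyTorch multi-task setup. The backbone, trace dimensionality, and loss weight are illustrative assumptions, not the paper's design.

```python
# Illustrative two-head detector (assumed, not the paper's architecture): one
# head classifies real vs. fake, an auxiliary head regresses a "natural trace"
# target assumed to be derivable from real images only.
import torch
import torch.nn as nn

class TraceAwareDetector(nn.Module):
    def __init__(self, feat_dim=128, trace_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a real feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        self.cls_head = nn.Linear(feat_dim, 2)            # real / fake logits
        self.trace_head = nn.Linear(feat_dim, trace_dim)  # auxiliary trace target

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.trace_head(feats)

def joint_loss(logits, trace_pred, labels, trace_target, aux_weight=0.5):
    ce = nn.functional.cross_entropy(logits, labels)
    aux = nn.functional.mse_loss(trace_pred, trace_target)
    return ce + aux_weight * aux  # aux_weight is an assumed hyperparameter
```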
arXiv Detail & Related papers (2024-03-25T07:58:58Z)
- Unveiling the Truth: Exploring Human Gaze Patterns in Fake Images
We leverage human semantic knowledge to investigate whether it can be incorporated into fake image detection frameworks.
A preliminary statistical analysis is conducted to explore the distinctive patterns in how humans perceive genuine and altered images.
arXiv Detail & Related papers (2024-03-13T19:56:30Z)
- Detecting Generated Images by Real Images Only
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated image detection problem from a new perspective: start from real images.
The idea is to find what real images have in common and map them to a dense subspace in feature space, so that generated images, regardless of their generative model, fall outside that subspace.
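To make the subspace idea concrete, here is a minimal sketch using PCA reconstruction error as a stand-in for the paper's learned mapping; the feature representation, component count, and threshold are all assumptions, not the paper's method.

```python
# Sketch of the one-class idea with PCA standing in for the learned mapping:
# fit a dense subspace on real-image features, then flag images whose
# reconstruction error places them outside it.
import numpy as np
from sklearn.decomposition import PCA

def fit_real_subspace(real_features, n_components=32):
    """real_features: (N, D) array of features from real images only."""
    pca = PCA(n_components=n_components).fit(real_features)
    recon = pca.inverse_transform(pca.transform(real_features))
    errors = np.linalg.norm(real_features - recon, axis=1)
    threshold = np.percentile(errors, 95)  # assumed cutoff
    return pca, threshold

def is_generated(features, pca, threshold):
    """Flag feature vectors that fall outside the real-image subspace."""
    recon = pca.inverse_transform(pca.transform(features))
    return np.linalg.norm(features - recon, axis=1) > threshold
```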
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- On quantifying and improving realism of images generated with diffusion
We propose a metric, called Image Realism Score (IRS), computed from five statistical measures of a given image.
IRS can be used directly to classify a given image as real or fake.
We experimentally establish the model- and data-agnostic nature of the proposed IRS by successfully detecting fake images generated by Stable Diffusion Model (SDM), Dalle2, Midjourney, and BigGAN.
Our efforts have also led to Gen-100 dataset, which provides 1,000 samples for 100 classes generated by four high-quality models.
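The abstract does not name IRS's five statistical measures, so the toy score below only mirrors the shape of the idea: combine several image statistics into one scalar and threshold it. Every statistic and the equal weighting here are placeholders, not the actual IRS.

```python
# Hypothetical realism score in the spirit of IRS; the statistics and weights
# are placeholders, since the abstract does not specify the five measures.
import numpy as np

def toy_realism_score(img):
    """img: (H, W, 3) uint8 array. Returns a scalar; higher = more 'real'."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    stats = np.array([
        gray.std(),                              # global contrast
        np.abs(gx).mean() + np.abs(gy).mean(),   # edge energy
        np.histogram(gray, bins=32)[0].std(),    # intensity histogram spread
        img.std(axis=(0, 1)).mean(),             # per-channel variation
        np.abs(gray - np.median(gray)).mean(),   # deviation from median
    ])
    weights = np.ones(5) / 5                     # assumed equal weighting
    return float(weights @ stats)

# Thresholding this score would yield a real-vs-fake decision, mirroring how
# the abstract says IRS is used.
```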
arXiv Detail & Related papers (2023-09-26T08:32:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.