Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?
- URL: http://arxiv.org/abs/2402.03214v3
- Date: Tue, 2 Jul 2024 20:22:14 GMT
- Title: Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?
- Authors: Anna Yoo Jeong Ha, Josephine Passananti, Ronik Bhaskar, Shawn Shan, Reid Southen, Haitao Zheng, Ben Y. Zhao
- Abstract summary: Distinguishing AI-generated images from human art is a challenging problem.
A failure to address this problem allows bad actors to defraud individuals paying a premium for human art and companies whose stated policies forbid AI imagery.
We curate real human art across 7 styles, generate matching images from 5 generative models, and apply 8 detectors.
- Score: 24.417027069545117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of generative AI images has completely disrupted the art world. Distinguishing AI-generated images from human art is a challenging problem whose impact is growing over time. A failure to address this problem allows bad actors to defraud individuals paying a premium for human art and companies whose stated policies forbid AI imagery. It is also critical for content owners to establish copyright, and for model trainers interested in curating training data in order to avoid potential model collapse. There are several different approaches to distinguishing human art from AI images, including classifiers trained by supervised learning, research tools targeting diffusion models, and identification by professional artists using their knowledge of artistic techniques. In this paper, we seek to understand how well these approaches can perform against today's modern generative models in both benign and adversarial settings. We curate real human art across 7 styles, generate matching images from 5 generative models, and apply 8 detectors (5 automated detectors and 3 different human groups, including 180 crowdworkers, 4000+ professional artists, and 13 expert artists experienced at detecting AI). Both Hive and expert artists do very well, but they make mistakes in different ways (Hive is weaker against adversarial perturbations, while expert artists produce more false positives). We believe these weaknesses will remain as models continue to evolve, and we use our data to demonstrate why a combined team of human and automated detectors provides the best combination of accuracy and robustness.
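The combined human-plus-automated-detector team that the abstract recommends can be illustrated with a simple weighted ensemble rule. This is a minimal sketch under stated assumptions: the function name, weight, and threshold are invented for illustration and are not the paper's actual protocol.

```python
# Toy sketch of combining an automated detector's score with human
# ratings, illustrating the human+machine team idea from the abstract.
# All names, weights, and thresholds here are illustrative assumptions.

def ensemble_verdict(detector_score, human_votes, detector_weight=0.5):
    """Return True if the image is judged AI-generated.

    detector_score: probability in [0, 1] from an automated classifier.
    human_votes: list of booleans, True = rater says "AI-generated".
    """
    if not human_votes:
        # With no human input, fall back to the detector alone.
        return detector_score >= 0.5
    human_score = sum(human_votes) / len(human_votes)
    combined = (detector_weight * detector_score
                + (1 - detector_weight) * human_score)
    return combined >= 0.5
```

The intuition is complementarity: a detector that is fooled by adversarial perturbations can be outvoted by human raters, while human false positives can be damped by a confident low detector score.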
Related papers
- Detecting Discrepancies Between AI-Generated and Natural Images Using Uncertainty [91.64626435585643]
We propose a novel approach for detecting AI-generated images by leveraging predictive uncertainty to mitigate misuse and associated risks.
The motivation arises from the fundamental assumption regarding the distributional discrepancy between natural and AI-generated images.
We propose to leverage large-scale pre-trained models to calculate the uncertainty as the score for detecting AI-generated images.
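The uncertainty-scoring idea in this summary can be sketched as thresholding the predictive entropy of a pretrained classifier's softmax output: images far from the natural-image distribution should yield flatter, higher-entropy predictions. This is a hedged illustration only; the paper's actual model, uncertainty estimator, and threshold are different and assumed here.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a softmax probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_as_ai_generated(probs, threshold=1.0):
    """Flag an image when the model's predictive uncertainty is high.

    probs: class probabilities from a (hypothetical) pretrained model.
    Per the abstract's assumption, AI-generated images lie off the
    natural-image distribution, so the model is less certain about them.
    """
    return predictive_entropy(probs) >= threshold
```

A sharply peaked prediction (model is confident the input looks natural) falls below the threshold, while a near-uniform prediction exceeds it.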
arXiv Detail & Related papers (2024-12-08T11:32:25Z) - ArtBrain: An Explainable end-to-end Toolkit for Classification and Attribution of AI-Generated Art and Style [2.7321177315998915]
This paper introduces AI-ArtBench, a dataset featuring 185,015 artistic images across 10 art styles.
It includes 125,015 AI-generated images and 60,000 pieces of human-created artwork.
The accuracy of attribution to the generative model reaches 0.999.
arXiv Detail & Related papers (2024-12-02T14:03:50Z) - Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI [61.35083814817094]
Several protection tools against style mimicry have been developed that incorporate small adversarial perturbations into artworks published online.
We find that low-effort and "off-the-shelf" techniques, such as image upscaling, are sufficient to create robust mimicry methods that significantly degrade existing protections.
We caution that tools based on adversarial perturbations cannot reliably protect artists from the misuse of generative AI.
arXiv Detail & Related papers (2024-06-17T18:51:45Z) - How to Distinguish AI-Generated Images from Authentic Photographs [13.878791907839691]
The guide identifies five categories of artifacts and implausibilities that often appear in AI-generated images.
We generated 138 images with diffusion models, curated 9 images from social media, and curated 42 real photographs.
By drawing attention to these kinds of artifacts and implausibilities, we aim to better equip people to distinguish AI-generated images from real photographs.
arXiv Detail & Related papers (2024-06-12T21:23:27Z) - AI Art Neural Constellation: Revealing the Collective and Contrastive State of AI-Generated and Human Art [36.21731898719347]
We conduct a comprehensive analysis to position AI-generated art within the context of human art heritage.
Our comparative analysis is based on an extensive dataset, dubbed "ArtConstellation".
A key finding is that AI-generated artworks are visually related to the principal concepts of modern-period art made in 1800-2000.
arXiv Detail & Related papers (2024-02-04T11:49:51Z) - DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection [57.51313366337142]
There has been growing concern over the use of generative AI for malicious purposes.
In the realm of visual content synthesis using generative AI, key areas of significant concern have been image forgery and data poisoning.
We introduce the DeepfakeArt Challenge, a large-scale challenge benchmark dataset designed specifically to aid in the building of machine learning algorithms for generative AI art forgery and data poisoning detection.
arXiv Detail & Related papers (2023-06-02T05:11:27Z) - Learning to Evaluate the Artness of AI-generated Images [64.48229009396186]
ArtScore is a metric designed to evaluate the degree to which an image resembles authentic artworks by artists.
We employ pre-trained models for photo and artwork generation, resulting in a series of mixed models.
This dataset is then employed to train a neural network that learns to estimate quantized artness levels of arbitrary images.
arXiv Detail & Related papers (2023-05-08T17:58:27Z) - Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images [66.20578637253831]
There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
arXiv Detail & Related papers (2023-04-25T17:51:59Z) - Everyone Can Be Picasso? A Computational Framework into the Myth of Human versus AI Painting [8.031314357134795]
We develop a computational framework combining neural latent space and aesthetics features with visual analytics to investigate the difference between human and AI paintings.
We find that AI artworks show distributional differences from human artworks in both latent space and some aesthetic features, such as strokes and sharpness.
Our findings provide concrete evidence of the existing discrepancies between human and AI paintings and further suggest improving AI art through greater attention to aesthetics and human artists' involvement.
arXiv Detail & Related papers (2023-04-17T05:48:59Z) - Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leads to high-frequency artifacts that can be easily spotted by models yet difficult to generalize, we propose a new adversarial training method that attempts to blur out these specific artifacts.
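The idea of suppressing easily spotted high-frequency artifacts can be illustrated with a simple low-pass (box) filter over a row of pixel values. This is a toy stand-in for the adversarial blurring described above, with all details assumed; the paper's actual method is learned, not a fixed filter.

```python
def box_blur_row(pixels, radius=1):
    """Simple 1-D box blur: each pixel becomes the mean of its neighborhood.

    A low-pass filter like this attenuates high-frequency detail, the kind
    of artifact the adversarial training described above tries to blur out
    so the detector cannot rely on it as a shortcut.
    """
    n = len(pixels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

Applied to a high-frequency alternating signal such as `[0, 10, 0, 10]`, the blur shrinks the peak-to-peak swing, mimicking how artifact energy is removed before the detector sees the image.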
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.