Detecting AI-generated Artwork
- URL: http://arxiv.org/abs/2504.07078v1
- Date: Wed, 09 Apr 2025 17:50:07 GMT
- Title: Detecting AI-generated Artwork
- Authors: Meien Li, Mark Stamp
- Abstract summary: Recent improvements in generative AI have made it difficult for people to distinguish between human-generated and AI-generated art. We consider the potential utility of various types of Machine Learning (ML) and Deep Learning (DL) models in distinguishing AI-generated artwork from human-generated art.
- Score: 1.3812010983144798
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The high efficiency and quality of artwork generated by Artificial Intelligence (AI) has created new concerns and challenges for human artists. In particular, recent improvements in generative AI have made it difficult for people to distinguish between human-generated and AI-generated art. In this research, we consider the potential utility of various types of Machine Learning (ML) and Deep Learning (DL) models in distinguishing AI-generated artwork from human-generated artwork. We focus on three challenging artistic styles, namely, baroque, cubism, and expressionism. The learning models we test are Logistic Regression (LR), Support Vector Machine (SVM), Multilayer Perceptron (MLP), and Convolutional Neural Network (CNN). Our best experimental results yield a multiclass accuracy of 0.8208 over six classes, and an impressive accuracy of 0.9758 for the binary classification problem of distinguishing AI-generated from human-generated art.
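The binary task described in the abstract, separating AI-generated from human-generated images, can be sketched with one of the listed models, an SVM. This is a minimal illustrative sketch only: the synthetic features, hyperparameters, and preprocessing below are assumptions for demonstration, not the authors' actual pipeline or dataset.

```python
# Minimal sketch of binary AI-vs-human image classification with an SVM,
# one of the model families tested in the paper. Synthetic features stand
# in for real image data (label 0 = human, 1 = AI); all settings here are
# illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for flattened image features: two classes drawn from
# slightly shifted distributions so the task is learnable.
X_human = rng.normal(0.0, 1.0, size=(200, 64))
X_ai = rng.normal(0.5, 1.0, size=(200, 64))
X = np.vstack([X_human, X_ai])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardize features, then fit an RBF-kernel SVM.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0)
clf.fit(scaler.transform(X_train), y_train)

acc = accuracy_score(y_test, clf.predict(scaler.transform(X_test)))
print(f"binary accuracy: {acc:.4f}")
```

On real art images, the feature step would be replaced by pixel flattening or learned CNN features, which is where the paper's strongest results come from.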
Related papers
- LMM4Gen3DHF: Benchmarking and Evaluating Multimodal 3D Human Face Generation with LMMs [48.534851709853534]
We propose LMME3DHF as a metric for evaluating 3DHF, capable of quality and authenticity score prediction, distortion-aware visual question answering, and distortion-aware saliency prediction.
Experimental results show that LMME3DHF achieves state-of-the-art performance, surpassing existing methods in accurately predicting quality scores for AI-generated 3D human faces.
arXiv Detail & Related papers (2025-04-29T07:00:06Z) - ArtBrain: An Explainable end-to-end Toolkit for Classification and Attribution of AI-Generated Art and Style [2.7321177315998915]
This paper introduces AI-ArtBench, a dataset featuring 185,015 artistic images across 10 art styles. It includes 125,015 AI-generated images and 60,000 pieces of human-created artwork. The accuracy of attribution to the generative model reaches 0.999.
arXiv Detail & Related papers (2024-12-02T14:03:50Z) - A Sanity Check for AI-generated Image Detection [49.08585395873425]
We propose AIDE (AI-generated Image DEtector with Hybrid Features) to detect AI-generated images. AIDE achieves +3.5% and +4.6% improvements over state-of-the-art methods.
arXiv Detail & Related papers (2024-06-27T17:59:49Z) - Organic or Diffused: Can We Distinguish Human Art from AI-generated Images? [24.417027069545117]
Distinguishing AI-generated images from human art is a challenging problem.
A failure to address this problem allows bad actors to defraud individuals paying a premium for human art and companies whose stated policies forbid AI imagery.
We curate real human art across 7 styles, generate matching images from 5 generative models, and apply 8 detectors.
arXiv Detail & Related papers (2024-02-05T17:25:04Z) - AI Art Neural Constellation: Revealing the Collective and Contrastive State of AI-Generated and Human Art [36.21731898719347]
We conduct a comprehensive analysis to position AI-generated art within the context of human art heritage.
Our comparative analysis is based on an extensive dataset, dubbed "ArtConstellation".
A key finding is that AI-generated artworks are visually related to the principal concepts of modern-period art (1800-2000).
arXiv Detail & Related papers (2024-02-04T11:49:51Z) - The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images [66.20578637253831]
There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
arXiv Detail & Related papers (2023-04-25T17:51:59Z) - Human and AI Perceptual Differences in Image Classification Errors [13.045020949359621]
This study first analyzes the statistical distributions of mistakes from the two sources and then explores how task difficulty level affects these distributions.
We find that even when AI learns an excellent model from the training data, one that outperforms humans in overall accuracy, these AI models have significant and consistent differences from human perception.
arXiv Detail & Related papers (2023-04-18T05:09:07Z) - WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z) - Cognitive Anthropomorphism of AI: How Humans and Computers Classify Images [0.0]
Humans engage in cognitive anthropomorphism: expecting AI to have the same nature as human intelligence, which it does not.
This mismatch presents an obstacle to appropriate human-AI interaction.
I offer three strategies for system design that can address the mismatch between human and AI classification.
arXiv Detail & Related papers (2020-02-07T21:49:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.