Art-Free Generative Models: Art Creation Without Graphic Art Knowledge
- URL: http://arxiv.org/abs/2412.00176v1
- Date: Fri, 29 Nov 2024 18:59:01 GMT
- Title: Art-Free Generative Models: Art Creation Without Graphic Art Knowledge
- Authors: Hui Ren, Joanna Materzynska, Rohit Gandikota, David Bau, Antonio Torralba
- Abstract summary: We propose a text-to-image generation model trained without access to art-related content.
We then introduce a simple yet effective method to learn an art adapter using only a few examples of selected artistic styles.
- Score: 50.60063523054282
- Abstract: We explore the question: "How much prior art knowledge is needed to create art?" To investigate this, we propose a text-to-image generation model trained without access to art-related content. We then introduce a simple yet effective method to learn an art adapter using only a few examples of selected artistic styles. Our experiments show that art generated using our method is perceived by users as comparable to art produced by models trained on large, art-rich datasets. Finally, through data attribution techniques, we illustrate how examples from both artistic and non-artistic datasets contributed to the creation of new artistic styles.
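The abstract describes the method only at a high level: a generator trained without art, plus an "art adapter" learned from a few style examples. As a loose, hypothetical illustration of that general idea (a small set of trainable parameters adapting a frozen model to a few examples), the NumPy toy below fits a low-rank residual adapter on top of a frozen linear map. The low-rank parameterization, the synthetic data, and all names are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" weight, standing in for a generator layer
# trained only on non-art data.
d = 16
W = rng.standard_normal((d, d))

# A handful of "style examples": inputs paired with style-shifted targets.
n_examples, rank, lr, steps = 4, 2, 0.01, 5000
X = rng.standard_normal((n_examples, d))
style_shift = rng.standard_normal((d, d)) * 0.1  # hidden style transform
Y = X @ (W + style_shift).T

# Low-rank adapter: the adapted weight is W + B @ A; only A and B train.
A = rng.standard_normal((rank, d)) * 0.1
B = np.zeros((d, rank))  # zero init: training starts at the frozen model

init_loss = float(np.mean((X @ (W + B @ A).T - Y) ** 2))
for _ in range(steps):
    err = X @ (W + B @ A).T - Y          # (n_examples, d) residual
    grad_out = 2.0 * err / n_examples    # dL/d(prediction), MSE scaling
    grad_W = grad_out.T @ X              # dL/d(W + B @ A)
    gB = grad_W @ A.T                    # chain rule through B @ A
    gA = B.T @ grad_W
    B -= lr * gB
    A -= lr * gA
final_loss = float(np.mean((X @ (W + B @ A).T - Y) ** 2))
print(init_loss, final_loss)
```

Initializing B to zero (so the adapter starts as an exact no-op on the frozen model) mirrors common adapter practice; only the small A and B matrices change, while W stays untouched.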
Related papers
- IntroStyle: Training-Free Introspective Style Attribution using Diffusion Features [89.95303251220734]
We present a training-free framework to solve the style attribution problem, using the features produced by a diffusion model alone.
This is denoted as introspective style attribution (IntroStyle) and demonstrates superior performance to state-of-the-art models for style retrieval.
We also introduce a synthetic dataset of Style Hacks (SHacks) to isolate artistic style and evaluate fine-grained style attribution performance.
arXiv Detail & Related papers (2024-12-19T01:21:23Z)
- Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models [47.19481598385283]
ArtSavant is a tool to determine the unique style of an artist by comparing it to a reference dataset of works from WikiArt.
We then perform a large-scale empirical study to provide quantitative insight into the prevalence of artistic style copying across three popular text-to-image generative models.
arXiv Detail & Related papers (2024-04-11T17:59:43Z)
- AI Art Neural Constellation: Revealing the Collective and Contrastive State of AI-Generated and Human Art [36.21731898719347]
We conduct a comprehensive analysis to position AI-generated art within the context of human art heritage.
Our comparative analysis is based on an extensive dataset, dubbed 'ArtConstellation'.
A key finding is that AI-generated artworks are visually related to the principal concepts of modern-period art made in 1800-2000.
arXiv Detail & Related papers (2024-02-04T11:49:51Z)
- Inventing art styles with no artistic training data [0.65268245109828]
We propose two procedures to create painting styles using models trained only on natural images.
In the first procedure we use the inductive bias from the artistic medium to achieve creative expression.
The second procedure uses an additional natural image as inspiration to create a new style.
arXiv Detail & Related papers (2023-05-19T21:59:23Z)
- Learning to Evaluate the Artness of AI-generated Images [64.48229009396186]
ArtScore is a metric designed to evaluate the degree to which an image resembles authentic artworks by artists.
We employ pre-trained models for photo and artwork generation, resulting in a series of mixed models.
This dataset is then employed to train a neural network that learns to estimate quantized artness levels of arbitrary images.
arXiv Detail & Related papers (2023-05-08T17:58:27Z)
- Towards mapping the contemporary art world with ArtLM: an art-specific NLP model [0.0]
We present a generic Natural Language Processing framework (called ArtLM) to discover the connections among contemporary artists based on their biographies.
With extensive experiments, we demonstrate that our ArtLM achieves 85.6% accuracy and 84.0% F1 score.
We also provide a visualisation and a qualitative analysis of the artist network built from ArtLM's outputs.
arXiv Detail & Related papers (2022-12-14T09:26:07Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- Docent: A content-based recommendation system to discover contemporary art [0.8782885374383763]
We present a content-based recommendation system on contemporary art relying on images of artworks and contextual metadata of artists.
We gathered and annotated artworks with advanced and art-specific information to create a unique database that was used to train our models.
After an assessment by a team of art specialists, an average of 75% of the recommended artworks were rated as meaningful.
arXiv Detail & Related papers (2022-07-12T16:26:27Z)
- Art Style Classification with Self-Trained Ensemble of AutoEncoding Transformations [5.835728107167379]
The artistic style of a painting is a rich descriptor that reveals both visual and deep intrinsic knowledge about how an artist uniquely portrays and expresses their creative vision.
In this paper, we investigate the use of deep self-supervised learning methods to solve the problem of recognizing complex artistic styles with high intra-class and low inter-class variation.
arXiv Detail & Related papers (2020-12-06T21:05:23Z)
- Modeling Artistic Workflows for Image Generation and Editing [83.43047077223947]
We propose a generative model that follows a given artistic workflow.
It enables both multi-stage image generation and multi-stage image editing of an existing piece of art.
arXiv Detail & Related papers (2020-07-14T17:54:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.