Not Only Generative Art: Stable Diffusion for Content-Style
Disentanglement in Art Analysis
- URL: http://arxiv.org/abs/2304.10278v1
- Date: Thu, 20 Apr 2023 13:00:46 GMT
- Authors: Yankun Wu, Yuta Nakashima, Noa Garcia
- Abstract summary: GOYA is a method that distills the artistic knowledge captured in a recent generative model to disentangle content and style.
Experiments show that synthetically generated images sufficiently serve as a proxy of the real distribution of artworks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The duality of content and style is inherent to the nature of art. For
humans, these two elements are clearly different: content refers to the objects
and concepts in the piece of art, and style to the way it is expressed. This
duality poses an important challenge for computer vision. The visual appearance
of objects and concepts is modulated by a style that may reflect the author's
emotions, social trends, artistic movement, etc., and a deep comprehension of
artworks undoubtedly requires handling both. A promising step towards a general
paradigm for art analysis is to disentangle content and style, yet relying on
human annotations to isolate a single aspect of artworks limits how well
semantic concepts and the visual appearance of paintings can be learned. We thus
present GOYA, a method that distills the artistic knowledge captured in a
recent generative model to disentangle content and style. Experiments show that
synthetically generated images sufficiently serve as a proxy of the real
distribution of artworks, allowing GOYA to separately represent the two
elements of art while keeping more information than existing methods.
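The disentanglement idea in the abstract can be illustrated with a toy sketch. Note the assumptions: GOYA's actual architecture, backbone, and training procedure are not described in this summary, so the embeddings below are fabricated stand-ins (in the real method they would be derived from a generative model such as Stable Diffusion), and the "projection heads" are simplified to fixed slices of a joint vector. This only demonstrates the retrieval behavior a content/style split enables, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy half-dimension for each factor

# Fabricated factor codes: first D dims of a joint embedding encode
# content, the last D encode style. These are illustrative only.
content_codes = {"portrait": rng.normal(size=D), "landscape": rng.normal(size=D)}
style_codes = {"impressionism": rng.normal(size=D), "cubism": rng.normal(size=D)}

def embed(content: str, style: str) -> np.ndarray:
    """Joint embedding of an artwork with the given content and style."""
    return np.concatenate([content_codes[content], style_codes[style]])

# Two "heads" that project the joint embedding onto each factor.
def content_head(z: np.ndarray) -> np.ndarray:
    return z[:D]

def style_head(z: np.ndarray) -> np.ndarray:
    return z[D:]

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = embed("portrait", "impressionism")
b = embed("portrait", "cubism")          # same content, different style
c = embed("landscape", "impressionism")  # different content, same style

# Content similarity groups a with b; style similarity groups a with c.
assert cos(content_head(a), content_head(b)) > cos(content_head(a), content_head(c))
assert cos(style_head(a), style_head(c)) > cos(style_head(a), style_head(b))
```

With disentangled representations, content-based retrieval ranks the two portraits together regardless of style, while style-based retrieval ranks the two Impressionist works together regardless of subject, which is the kind of separate treatment of the two elements the abstract describes.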
Related papers
- Content-style disentangled representation for controllable artistic image stylization and generation [0.0]
Controllable artistic image stylization and generation aims to render the content provided by text or image with the learned artistic style.
This paper proposes a content-style representation disentangling method for controllable artistic image stylization and generation.
arXiv Detail & Related papers (2024-12-19T03:42:58Z)
- IntroStyle: Training-Free Introspective Style Attribution using Diffusion Features [89.95303251220734]
We present a training-free framework to solve the style attribution problem, using the features produced by a diffusion model alone.
This is denoted as introspective style attribution (IntroStyle) and demonstrates superior performance to state-of-the-art models for style retrieval.
We also introduce a synthetic dataset of Style Hacks (SHacks) to isolate artistic style and evaluate fine-grained style attribution performance.
arXiv Detail & Related papers (2024-12-19T01:21:23Z) - Art-Free Generative Models: Art Creation Without Graphic Art Knowledge [50.60063523054282]
We propose a text-to-image generation model trained without access to art-related content.
We then introduce a simple yet effective method to learn an art adapter using only a few examples of selected artistic styles.
arXiv Detail & Related papers (2024-11-29T18:59:01Z) - AI Art Neural Constellation: Revealing the Collective and Contrastive
State of AI-Generated and Human Art [36.21731898719347]
We conduct a comprehensive analysis to position AI-generated art within the context of human art heritage.
Our comparative analysis is based on an extensive dataset, dubbed "ArtConstellation".
A key finding is that AI-generated artworks are visually related to the principal concepts of modern-period art made in 1800-2000.
arXiv Detail & Related papers (2024-02-04T11:49:51Z)
- Impressions: Understanding Visual Semiotics and Aesthetic Impact [66.40617566253404]
We present Impressions, a novel dataset through which to investigate the semiotics of images.
We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images.
This dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
arXiv Detail & Related papers (2023-10-27T04:30:18Z)
- Text-to-Image Generation for Abstract Concepts [76.32278151607763]
We propose a framework of Text-to-Image generation for Abstract Concepts (TIAC).
The abstract concept is clarified into a clear intent with a detailed definition to avoid ambiguity.
The concept-dependent form is retrieved from an LLM-extracted form pattern set.
arXiv Detail & Related papers (2023-09-26T02:22:39Z)
- Learning to Evaluate the Artness of AI-generated Images [64.48229009396186]
ArtScore is a metric designed to evaluate the degree to which an image resembles authentic artworks by artists.
We employ pre-trained models for photo and artwork generation, resulting in a series of mixed models.
This dataset is then employed to train a neural network that learns to estimate quantized artness levels of arbitrary images.
arXiv Detail & Related papers (2023-05-08T17:58:27Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- Formal Analysis of Art: Proxy Learning of Visual Concepts from Style Through Language Models [10.854399031287393]
We present a machine learning system that can quantify fine art paintings with a set of visual elements and principles of art.
We introduce a novel mechanism, called proxy learning, which learns visual concepts in paintings through their general relation to styles.
arXiv Detail & Related papers (2022-01-05T21:03:29Z)
- Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation [26.099306167995376]
This work presents a framework to bring art closer to people by generating comprehensive descriptions of fine-art paintings.
The framework is validated through an exhaustive analysis, both quantitative and qualitative, as well as a comparative human evaluation.
arXiv Detail & Related papers (2021-09-13T07:08:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.