Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models
- URL: http://arxiv.org/abs/2404.08030v1
- Date: Thu, 11 Apr 2024 17:59:43 GMT
- Title: Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models
- Authors: Mazda Moayeri, Samyadeep Basu, Sriram Balasubramanian, Priyatham Kattakinda, Atoosa Chengini, Robert Brauneis, Soheil Feizi
- Abstract summary: ArtSavant is a tool to determine the unique style of an artist by comparing it to a reference dataset of works from WikiArt.
We then perform a large-scale empirical study to provide quantitative insight on the prevalence of artistic style copying across 3 popular text-to-image generative models.
- Score: 47.19481598385283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent text-to-image generative models such as Stable Diffusion are extremely adept at mimicking and generating copyrighted content, raising concerns amongst artists that their unique styles may be improperly copied. Understanding how generative models copy "artistic style" is more complex than duplicating a single image, as style is composed of a set of elements (or signature) that frequently co-occur across a body of work, where each individual work may vary significantly. In our paper, we first reformulate the problem of "artistic copyright infringement" as a classification problem over image sets, instead of probing image-wise similarities. We then introduce ArtSavant, a practical (i.e., efficient and easy to understand) tool to (i) determine the unique style of an artist by comparing it to a reference dataset of works from 372 artists curated from WikiArt, and (ii) recognize if the identified style reappears in generated images. We leverage two complementary methods to perform artistic style classification over image sets, including TagMatch, which is a novel inherently interpretable and attributable method, making it more suitable for broader use by non-technical stakeholders (artists, lawyers, judges, etc.). Leveraging ArtSavant, we then perform a large-scale empirical study to provide quantitative insight on the prevalence of artistic style copying across 3 popular text-to-image generative models. Namely, amongst a dataset of prolific artists (including many famous ones), only 20% of them appear to have their styles at risk of being copied via simple prompting of today's popular text-to-image generative models.
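The set-level reformulation above can be sketched as a majority vote over per-image style predictions. This is a minimal illustration of the idea of classifying image sets rather than single images; the function and label names below are hypothetical and do not reflect ArtSavant's actual API.

```python
from collections import Counter

def classify_image_set(per_image_predictions):
    """Set-level style attribution: each image in a candidate set gets a
    per-image artist label (e.g. from a tag- or embedding-based classifier),
    and the whole set is attributed to the majority artist, together with
    that artist's vote share across the set."""
    votes = Counter(per_image_predictions)
    artist, count = votes.most_common(1)[0]
    return artist, count / len(per_image_predictions)

# Hypothetical per-image predictions for five generated images:
preds = ["monet", "monet", "vangogh", "monet", "picasso"]
artist, share = classify_image_set(preds)
# The set is attributed to "monet" with a 0.6 vote share.
```

Aggregating over a set dampens the per-image variation that the abstract notes: a single outlier image does not change the set-level attribution.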
Related papers
- IntroStyle: Training-Free Introspective Style Attribution using Diffusion Features [89.95303251220734]
We present a training-free framework to solve the style attribution problem, using the features produced by a diffusion model alone.
This is denoted as introspective style attribution (IntroStyle) and demonstrates superior performance to state-of-the-art models for style retrieval.
We also introduce a synthetic dataset of Style Hacks (SHacks) to isolate artistic style and evaluate fine-grained style attribution performance.
arXiv Detail & Related papers (2024-12-19T01:21:23Z)
- Learning Artistic Signatures: Symmetry Discovery and Style Transfer [8.288443063900825]
There is no undisputed definition of artistic style.
Style should be thought of as a set of global symmetries that dictate the arrangement of local textures.
We show that by considering both local and global features, using both Lie generators and traditional measures of texture, we can quantitatively capture the stylistic similarity between artists better than with either set of features alone.
arXiv Detail & Related papers (2024-12-05T18:56:23Z)
- Art-Free Generative Models: Art Creation Without Graphic Art Knowledge [50.60063523054282]
We propose a text-to-image generation model trained without access to art-related content.
We then introduce a simple yet effective method to learn an art adapter using only a few examples of selected artistic styles.
arXiv Detail & Related papers (2024-11-29T18:59:01Z)
- FedStyle: Style-Based Federated Learning Crowdsourcing Framework for Art Commissions [3.1676484382068315]
FedStyle is a style-based federated learning crowdsourcing framework.
It allows artists to train local style models and share model parameters rather than artworks for collaboration.
It addresses extreme data heterogeneity by having artists learn their abstract style representations and align with the server.
arXiv Detail & Related papers (2024-04-25T04:53:43Z)
- Measuring Style Similarity in Diffusion Models [118.22433042873136]
We present a framework for understanding and extracting style descriptors from images.
Our framework comprises a new dataset curated using the insight that style is a subjective property of an image.
We also propose a method to extract style attribute descriptors that can be used to attribute the style of a generated image to the images used in the training dataset of a text-to-image model.
arXiv Detail & Related papers (2024-04-01T17:58:30Z)
- A Dataset and Benchmark for Copyright Infringement Unlearning from Text-to-Image Diffusion Models [52.49582606341111]
Copyright law confers on creators the exclusive rights to reproduce, distribute, and monetize their creative works.
Recent progress in text-to-image generation has introduced formidable challenges to copyright enforcement.
We introduce a novel pipeline that harmonizes CLIP, ChatGPT, and diffusion models to curate a dataset.
arXiv Detail & Related papers (2024-01-04T11:14:01Z)
- Measuring the Success of Diffusion Models at Imitating Human Artists [7.007492782620398]
We show how to measure a model's ability to imitate specific artists.
We use Contrastive Language-Image Pretrained (CLIP) encoders to classify images in a zero-shot fashion.
We also show that a sample of the artist's work can be matched to these imitation images with a high degree of statistical reliability.
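Zero-shot classification with CLIP-style encoders reduces to cosine similarity between an image embedding and the text embeddings of candidate prompts. The sketch below uses toy vectors in place of real CLIP features, so only the scoring logic is meaningful; the prompts and dimensions are illustrative.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """Pick the label whose (CLIP-style) text embedding is most
    cosine-similar to the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarity of each prompt with the image
    return labels[int(np.argmax(sims))]

# Toy stand-ins for encoder outputs (real use: CLIP image/text encoders).
labels = ["in the style of Monet", "in the style of Van Gogh"]
text_embs = np.array([[1.0, 0.0], [0.0, 1.0]])
image_emb = np.array([0.9, 0.1])  # closer to the first prompt
pred = zero_shot_classify(image_emb, text_embs, labels)
print(pred)  # → in the style of Monet
```

In a real pipeline, the text embeddings would come from encoding prompts such as "a painting in the style of <artist>" with a pretrained CLIP text encoder.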
arXiv Detail & Related papers (2023-07-08T18:31:25Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- Demographic Influences on Contemporary Art with Unsupervised Style Embeddings [25.107166631583212]
contempArt is a collection of paintings and drawings, a detailed graph network based on social connections on Instagram, and additional socio-demographic information.
We evaluate three methods suited for generating unsupervised style embeddings of images and correlate them with the remaining data.
arXiv Detail & Related papers (2020-09-30T10:13:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.