Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models
- URL: http://arxiv.org/abs/2302.04222v6
- Date: Sat, 05 Apr 2025 20:24:19 GMT
- Title: Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models
- Authors: Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, Ben Y. Zhao
- Abstract summary: Glaze is a tool that enables artists to apply "style cloaks" to their art before sharing online. These cloaks apply barely perceptible perturbations to images, and when used as training data, mislead generative models that try to mimic a specific artist.
- Score: 38.92567577109414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to displace many in the professional artist community. In particular, models can learn to mimic the artistic style of specific artists after "fine-tuning" on samples of their art. In this paper, we describe the design, implementation and evaluation of Glaze, a tool that enables artists to apply "style cloaks" to their art before sharing online. These cloaks apply barely perceptible perturbations to images, and when used as training data, mislead generative models that try to mimic a specific artist. In coordination with the professional artist community, we deploy user studies to more than 1000 artists, assessing their views of AI art, as well as the efficacy of our tool, its usability and tolerability of perturbations, and robustness across different scenarios and against adaptive countermeasures. Both surveyed artists and empirical CLIP-based scores show that even at low perturbation levels (p=0.05), Glaze is highly successful at disrupting mimicry under normal conditions (>92%) and against adaptive countermeasures (>85%).
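The cloaking idea can be made concrete with a small optimization sketch: perturb an image so that its representation in a surrogate feature extractor moves toward a different style, while a perceptual penalty keeps the change barely visible. The snippet below is a minimal illustration under that assumption, not Glaze's implementation; the ResNet-50 surrogate, the LPIPS soft constraint, and all hyperparameters are illustrative choices.

```python
# Minimal sketch of a perceptually bounded "style cloak" (illustrative only).
# Assumptions: a ResNet-50 surrogate feature extractor, the `lpips` package
# for perceptual distance, and a soft penalty instead of a hard budget.
import torch
import torchvision.models as models
import lpips

device = "cuda" if torch.cuda.is_available() else "cpu"

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feat = torch.nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()
percep = lpips.LPIPS(net="vgg").to(device)  # perceptual distance

def cloak(x, x_target_style, budget=0.05, steps=200, lr=0.01, alpha=30.0):
    """x, x_target_style: (1, 3, H, W) tensors in [0, 1] on `device`.
    Returns a cloaked copy of x whose surrogate features move toward the
    target-style image while LPIPS(x, cloaked) stays (softly) under budget."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = feat(x_target_style).flatten(1)
    for _ in range(steps):
        cloaked = (x + delta).clamp(0, 1)
        style_loss = (feat(cloaked).flatten(1) - target_feat).pow(2).mean()
        p = percep(cloaked * 2 - 1, x * 2 - 1).mean()  # lpips expects [-1, 1]
        loss = style_loss + alpha * torch.relu(p - budget)
        opt.zero_grad(); loss.backward(); opt.step()
    return (x + delta).detach().clamp(0, 1)
```

Here the `budget` argument plays the role of the perturbation level p quoted in the abstract; a projected-gradient variant would enforce the budget exactly rather than penalizing violations.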
Related papers
- ArtistAuditor: Auditing Artist Style Pirate in Text-to-Image Generation Models [61.55816738318699]
We propose a novel method for data-use auditing in text-to-image generation models.
ArtistAuditor employs a style extractor to obtain multi-granularity style representations and treats artworks as samplings of an artist's style.
The experimental results on six combinations of models and datasets show that ArtistAuditor can achieve high AUC values.
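As a rough, hedged illustration of how such an audit might be scored (not ArtistAuditor's actual pipeline), one can compare style embeddings of a suspect model's generations against an artist's works and summarize the scores' discriminative power with ROC-AUC; the embeddings and labels below are random placeholders.

```python
# Hedged sketch: audit a suspect model by the similarity of its generations'
# style embeddings to an artist's works, then report AUC over suspect models.
import numpy as np
from sklearn.metrics import roc_auc_score

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def audit_score(artist_embs, generated_embs):
    """Mean pairwise cosine similarity; higher suggests the artist's works
    were in the suspect model's training data."""
    return float(np.mean([[cosine(a, g) for g in generated_embs]
                          for a in artist_embs]))

rng = np.random.default_rng(0)
artist_embs = rng.normal(size=(20, 512))                   # placeholder embeddings
suspects = [rng.normal(size=(10, 512)) for _ in range(6)]  # six suspect models
scores = [audit_score(artist_embs, g) for g in suspects]
labels = [1, 1, 0, 1, 0, 0]                                # placeholder ground truth
print("AUC:", roc_auc_score(labels, scores))
```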
arXiv Detail & Related papers (2025-04-17T16:15:38Z)
- IntroStyle: Training-Free Introspective Style Attribution using Diffusion Features [89.95303251220734]
We present a training-free framework to solve the style attribution problem, using the features produced by a diffusion model alone.
This is denoted as introspective style attribution (IntroStyle) and demonstrates superior performance to state-of-the-art models for style retrieval.
We also introduce a synthetic dataset of Style Hacks (SHacks) to isolate artistic style and evaluate fine-grained style attribution performance.
arXiv Detail & Related papers (2024-12-19T01:21:23Z)
- Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models [47.19481598385283]
ArtSavant is a tool to determine the unique style of an artist by comparing it to a reference dataset of works from WikiArt.
We then perform a large-scale empirical study to provide quantitative insight on the prevalence of artistic style copying across 3 popular text-to-image generative models.
arXiv Detail & Related papers (2024-04-11T17:59:43Z)
- Impressions: Understanding Visual Semiotics and Aesthetic Impact [66.40617566253404]
We present Impressions, a novel dataset through which to investigate the semiotics of images.
We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images.
This dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
arXiv Detail & Related papers (2023-10-27T04:30:18Z)
- Measuring the Success of Diffusion Models at Imitating Human Artists [7.007492782620398]
We show how to measure a model's ability to imitate specific artists.
We use Contrastive Language-Image Pretrained (CLIP) encoders to classify images in a zero-shot fashion.
We also show that a sample of the artist's work can be matched to these imitation images with a high degree of statistical reliability.
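That measurement is easy to sketch as a zero-shot CLIP classifier; the checkpoint, artist names, and image path below are illustrative assumptions rather than the paper's exact setup.

```python
# Hedged sketch of zero-shot artist classification with CLIP:
# score a generated image against "in the style of <artist>" prompts.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

artists = ["Claude Monet", "Vincent van Gogh", "Hokusai"]   # illustrative labels
prompts = [f"a painting in the style of {name}" for name in artists]
image = Image.open("generated_sample.png")                  # hypothetical path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image               # shape (1, len(artists))
probs = logits.softmax(dim=-1)[0]
for name, p in zip(artists, probs):
    print(f"{name}: {p:.3f}")
```

A high probability mass on the imitated artist's prompt indicates successful mimicry; aggregating this over many samples gives the kind of classification-based success rate the summary describes.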
arXiv Detail & Related papers (2023-07-08T18:31:25Z)
- ArtGPT-4: Towards Artistic-understanding Large Vision-Language Models with Enhanced Adapter [19.830089364830066]
ArtGPT-4 is a large vision-language model tailored to address the limitations of existing models in artistic comprehension.
It can render images with an artistic understanding and convey the emotions they inspire, mirroring human interpretation.
arXiv Detail & Related papers (2023-05-12T14:04:30Z)
- Learning to Evaluate the Artness of AI-generated Images [64.48229009396186]
ArtScore is a metric designed to evaluate the degree to which an image resembles authentic artworks by artists.
We employ pre-trained models for photo and artwork generation and interpolate between them to obtain a series of mixed models whose outputs range from photo-realistic to art-like.
The images these mixed models produce form a dataset that is then used to train a neural network to estimate quantized artness levels of arbitrary images.
arXiv Detail & Related papers (2023-05-08T17:58:27Z)
- Quantifying Confounding Bias in Generative Art: A Case Study [3.198144010381572]
We propose a simple metric to quantify confounding bias due to the lack of modeling the influence of art movements in learning artists' styles.
The proposed metric is more effective than a state-of-the-art outlier detection method in understanding the influence of art movements on artworks.
arXiv Detail & Related papers (2021-02-23T21:59:30Z)
- Art Style Classification with Self-Trained Ensemble of AutoEncoding Transformations [5.835728107167379]
The artistic style of a painting is a rich descriptor that reveals both visual and deep intrinsic knowledge about how an artist uniquely portrays and expresses their creative vision.
In this paper, we investigate the use of deep self-supervised learning methods to solve the problem of recognizing complex artistic styles with high intra-class and low inter-class variation.
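A generic flavor of such self-supervision is a transformation-prediction pretext task; the sketch below (rotation prediction with a ResNet-18, random tensors standing in for paintings) is only an assumed, simplified stand-in for the paper's ensemble of autoencoding transformations.

```python
# Hedged sketch of a transformation-prediction pretext task: the backbone
# learns features from unlabeled images by predicting which rotation was
# applied, and can later be fine-tuned for art-style classification.
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 4)   # 4 rotation classes
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def rotate_batch(images):
    """Rotate each image by a random multiple of 90 degrees; return labels."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

# One illustrative pretext-training step on placeholder image tensors.
paintings = torch.rand(8, 3, 224, 224)
rotated, labels = rotate_batch(paintings)
loss = criterion(backbone(rotated), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("pretext loss:", float(loss))
```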
arXiv Detail & Related papers (2020-12-06T21:05:23Z)