StyleSentinel: Reliable Artistic Copyright Verification via Stylistic Fingerprints
- URL: http://arxiv.org/abs/2508.01335v1
- Date: Sat, 02 Aug 2025 12:04:52 GMT
- Title: StyleSentinel: Reliable Artistic Copyright Verification via Stylistic Fingerprints
- Authors: Lingxiao Chen, Liqin Wang, Wei Lu
- Abstract summary: StyleSentinel is an approach for copyright protection of artwork by verifying an inherent stylistic fingerprint in the artist's artwork. We employ a semantic self-reconstruction process to enhance stylistic expressiveness within the artwork. We adaptively fuse multi-layer image features to encode abstract artistic style into a compact stylistic fingerprint.
- Score: 5.457996001307646
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The versatility of diffusion models in generating customized images has led to unauthorized usage of personal artwork, which poses a significant threat to the intellectual property of artists. Existing approaches relying on embedding additional information, such as perturbations, watermarks, and backdoors, suffer from limited defensive capabilities and fail to protect artwork published online. In this paper, we propose StyleSentinel, an approach for copyright protection of artwork by verifying an inherent stylistic fingerprint in the artist's artwork. Specifically, we employ a semantic self-reconstruction process to enhance stylistic expressiveness within the artwork, which establishes a dense and style-consistent manifold foundation for feature learning. Subsequently, we adaptively fuse multi-layer image features to encode abstract artistic style into a compact stylistic fingerprint. Finally, we model the target artist's style as a minimal enclosing hypersphere boundary in the feature space, transforming complex copyright verification into a robust one-class learning task. Extensive experiments demonstrate that compared with the state-of-the-art, StyleSentinel achieves superior performance on the one-sample verification task. We also demonstrate the effectiveness through online platforms.
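The final step of the abstract, modeling an artist's style as a minimal enclosing hypersphere in feature space, can be illustrated with a simple one-class boundary check. The sketch below is a hypothetical simplification, not the paper's method: random vectors stand in for learned stylistic fingerprints, and the hypersphere is approximated by the centroid of the training fingerprints plus a radius covering them.

```python
import numpy as np

# Hypothetical stand-in for learned stylistic fingerprints: in StyleSentinel
# these would come from adaptively fused multi-layer image features.
rng = np.random.default_rng(0)
artist_fingerprints = rng.normal(size=(50, 128))

# One-class boundary: approximate the enclosing hypersphere by the centroid
# of the artist's fingerprints and the radius that covers all training samples.
center = artist_fingerprints.mean(axis=0)
radius = np.linalg.norm(artist_fingerprints - center, axis=1).max()

def verify(fingerprint: np.ndarray, margin: float = 1.0) -> bool:
    """Return True if the fingerprint falls inside the artist's style boundary."""
    return bool(np.linalg.norm(fingerprint - center) <= radius * margin)

# A training sample verifies; a far-away feature vector does not.
print(verify(artist_fingerprints[0]))   # inside the sphere
print(verify(center + 100.0))           # shifted far outside the sphere
```

In practice one-class learners such as SVDD or a one-class SVM fit a tighter boundary than this centroid-plus-radius approximation, but the verification logic, accept a sample only if its fingerprint lies inside the learned hypersphere, is the same.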
Related papers
- DFA-CON: A Contrastive Learning Approach for Detecting Copyright Infringement in DeepFake Art [0.9070719355999145]
This work introduces DFA-CON, a contrastive learning framework designed to detect copyright-infringing or forged AI-generated art. DFA-CON learns a discriminative representation space, posing affinity among original artworks and their forged counterparts.
arXiv Detail & Related papers (2025-05-13T13:23:52Z) - ArtistAuditor: Auditing Artist Style Pirate in Text-to-Image Generation Models [61.55816738318699]
We propose a novel method for data-use auditing in text-to-image generation models. ArtistAuditor employs a style extractor to obtain multi-granularity style representations and treats artworks as samplings of an artist's style. Experimental results on six combinations of models and datasets show that ArtistAuditor achieves high AUC values.
arXiv Detail & Related papers (2025-04-17T16:15:38Z) - IntroStyle: Training-Free Introspective Style Attribution using Diffusion Features [89.95303251220734]
We present a training-free framework to solve the style attribution problem, using the features produced by a diffusion model alone. This is denoted as introspective style attribution (IntroStyle) and demonstrates superior performance to state-of-the-art models for style retrieval. We also introduce a synthetic dataset of Style Hacks (SHacks) to isolate artistic style and evaluate fine-grained style attribution performance.
arXiv Detail & Related papers (2024-12-19T01:21:23Z) - FedStyle: Style-Based Federated Learning Crowdsourcing Framework for Art Commissions [3.1676484382068315]
FedStyle is a style-based federated learning crowdsourcing framework.
It allows artists to train local style models and share model parameters rather than artworks for collaboration.
It addresses extreme data heterogeneity by having artists learn their abstract style representations and align with the server.
arXiv Detail & Related papers (2024-04-25T04:53:43Z) - Towards Highly Realistic Artistic Style Transfer via Stable Diffusion with Step-aware and Layer-aware Prompt [12.27693060663517]
Artistic style transfer aims to transfer the learned artistic style onto an arbitrary content image, generating artistic stylized images.
We propose a novel pre-trained diffusion-based artistic style transfer method, called LSAST.
Our proposed method can generate more highly realistic artistic stylized images than the state-of-the-art artistic style transfer methods.
arXiv Detail & Related papers (2024-04-17T15:28:53Z) - Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models [47.19481598385283]
ArtSavant is a tool to determine the unique style of an artist by comparing it to a reference dataset of works from WikiArt.
We then perform a large-scale empirical study to provide quantitative insight on the prevalence of artistic style copying across 3 popular text-to-image generative models.
arXiv Detail & Related papers (2024-04-11T17:59:43Z) - CreativeSynth: Cross-Art-Attention for Artistic Image Synthesis with Multimodal Diffusion [73.08710648258985]
Key painting attributes including layout, perspective, shape, and semantics often cannot be conveyed and expressed through style transfer. Large-scale pretrained text-to-image generation models have demonstrated their capability to synthesize a vast amount of high-quality images. Our main novel idea is to integrate multimodal semantic information as a synthesis guide into artworks, rather than transferring style to the real world.
arXiv Detail & Related papers (2024-01-25T10:42:09Z) - ALADIN-NST: Self-supervised disentangled representation learning of artistic style through Neural Style Transfer [60.6863849241972]
We learn a representation of visual artistic style more strongly disentangled from the semantic content depicted in an image.
We show that strongly addressing the disentanglement of style and content leads to large gains in style-specific metrics.
arXiv Detail & Related papers (2023-04-12T10:33:18Z) - QuantArt: Quantizing Image Style Transfer Towards High Visual Fidelity [94.5479418998225]
We propose a new style transfer framework called QuantArt for high visual-fidelity stylization.
Our framework achieves significantly higher visual fidelity compared with the existing style transfer methods.
arXiv Detail & Related papers (2022-12-20T17:09:53Z) - AesUST: Towards Aesthetic-Enhanced Universal Style Transfer [15.078430702469886]
AesUST is a novel Aesthetic-enhanced Universal Style Transfer approach.
We introduce an aesthetic discriminator to learn the universal human-delightful aesthetic features from a large corpus of artist-created paintings.
We also develop a new two-stage transfer training strategy with two aesthetic regularizations to train our model more effectively.
arXiv Detail & Related papers (2022-08-27T13:51:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its contents (including all information) and is not responsible for any consequences.