CopyScope: Model-level Copyright Infringement Quantification in the
Diffusion Workflow
- URL: http://arxiv.org/abs/2311.12847v1
- Date: Fri, 13 Oct 2023 13:08:09 GMT
- Title: CopyScope: Model-level Copyright Infringement Quantification in the
Diffusion Workflow
- Authors: Junlei Zhou and Jiashi Gao and Ziwei Wang and Xuetao Wei
- Abstract summary: Copyright infringement quantification is the primary and challenging step towards AI-generated image copyright traceability.
We propose CopyScope, a new framework to quantify the infringement of AI-generated images from the model level.
- Score: 6.6282087165087304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid development of diffusion models, web-based AI image
generation has become an innovative art form that can produce novel artworks.
However, this new technique brings potential copyright infringement risks, as
it may incorporate existing artworks without the owners' consent. Copyright
infringement quantification is the primary and challenging step towards
AI-generated image copyright traceability. Previous work focused only on data
attribution from the training data perspective, which is unsuitable for tracing
and quantifying copyright infringement in practice for the following reasons:
(1) the training datasets are not always publicly available; (2) the
responsible party is the model provider, not the image itself. Motivated by
this, in
this paper, we propose CopyScope, a new framework to quantify the infringement
of AI-generated images from the model level. We first rigorously identify
pivotal components within the AI image generation pipeline. Then, we propose to
take advantage of the Fréchet Inception Distance (FID) to effectively capture the
image similarity that fits human perception naturally. We further propose the
FID-based Shapley algorithm to evaluate the infringement contribution among
models. Extensive experiments demonstrate that our work not only reveals the
intricacies of infringement quantification but also effectively depicts the
infringing models quantitatively, thus promoting accountability in AI
image-generation tasks.
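To make the abstract's two ingredients concrete, here is a minimal, illustrative sketch (not the authors' released implementation) of (a) the Fréchet Inception Distance between two feature sets and (b) an exact Shapley computation that scores each pipeline component's contribution to similarity with a copyrighted work. The feature generator, the three component names, and the use of negative FID as the coalition payoff are illustrative assumptions.

```python
# Illustrative sketch of FID plus a Shapley attribution over pipeline
# components; `generate_features` and the component names are hypothetical.
from itertools import combinations
from math import factorial

import numpy as np
from scipy import linalg


def fid(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))


def shapley_contributions(components, value):
    """Exact Shapley values; `value` maps a frozenset of components to a payoff."""
    n = len(components)
    phi = {}
    for c in components:
        others = [x for x in components if x != c]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {c}) - value(s))
        phi[c] = total
    return phi


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    copyrighted = rng.normal(size=(256, 64))  # stand-in Inception features

    def generate_features(coalition: frozenset) -> np.ndarray:
        # Placeholder: in practice, run the diffusion pipeline with only the
        # coalition's components enabled and extract Inception features.
        shift = 2.0 - 0.5 * len(coalition)
        return rng.normal(loc=shift, size=(256, 64))

    def value(coalition: frozenset) -> float:
        # Lower FID means closer to the copyrighted work, so a coalition's
        # payoff is its negative FID; the empty coalition pays zero.
        if not coalition:
            return 0.0
        return -fid(generate_features(coalition), copyrighted)

    parts = ["base_model", "lora", "vae"]  # hypothetical pipeline components
    print(shapley_contributions(parts, value))
```

With more than a handful of components, the exact sum over all coalitions becomes exponential, so Monte Carlo sampling over random permutations is the usual substitute.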
Related papers
- Copyright-Aware Incentive Scheme for Generative Art Models Using Hierarchical Reinforcement Learning [42.63462923848866]
We introduce a novel copyright metric grounded in copyright law and court precedents on infringement.
We then employ the TRAK method to estimate the contribution of data holders.
We design a hierarchical budget allocation method based on reinforcement learning to determine the budget for each round and the remuneration of the data holder.
arXiv Detail & Related papers (2024-10-26T13:29:43Z)
- RLCP: A Reinforcement Learning-based Copyright Protection Method for Text-to-Image Diffusion Model [42.77851688874563]
We propose a Reinforcement Learning-based Copyright Protection (RLCP) method for text-to-image diffusion models.
Our approach minimizes the generation of copyright-infringing content while maintaining the quality of the model-generated dataset.
arXiv Detail & Related papers (2024-08-29T15:39:33Z)
- Safeguard Text-to-Image Diffusion Models with Human Feedback Inversion [51.931083971448885]
We propose a framework named Human Feedback Inversion (HFI), where human feedback on model-generated images is condensed into textual tokens guiding the mitigation or removal of problematic images.
Our experimental results demonstrate our framework significantly reduces objectionable content generation while preserving image quality, contributing to the ethical deployment of AI in the public sphere.
arXiv Detail & Related papers (2024-07-17T05:21:41Z)
- EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations [73.94175015918059]
We introduce a novel approach, EnTruth, which Enhances Traceability of unauthorized dataset usage.
By strategically incorporating template memorization, EnTruth can trigger specific behavior in unauthorized models as evidence of infringement.
Our method is the first to investigate the positive application of memorization and use it for copyright protection, which turns a curse into a blessing.
arXiv Detail & Related papers (2024-06-20T02:02:44Z)
- Protect-Your-IP: Scalable Source-Tracing and Attribution against Personalized Generation [19.250673262185767]
We propose a unified approach for image copyright source-tracing and attribution.
We introduce an innovative watermarking-attribution method that blends proactive and passive strategies.
We have conducted experiments using various celebrity portrait series sourced online.
arXiv Detail & Related papers (2024-05-26T15:14:54Z)
- A Dataset and Benchmark for Copyright Infringement Unlearning from Text-to-Image Diffusion Models [52.49582606341111]
Copyright law confers upon creators the exclusive rights to reproduce, distribute, and monetize their creative works.
Recent progress in text-to-image generation has introduced formidable challenges to copyright enforcement.
We introduce a novel pipeline that harmonizes CLIP, ChatGPT, and diffusion models to curate a dataset.
arXiv Detail & Related papers (2024-01-04T11:14:01Z)
- IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic an artist's style or maliciously edit original images to produce fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z)
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting injected content into the protected dataset.
Specifically, we modify the protected images by adding unique content to them using stealthy image warping functions (a sketch of this coating idea appears after this list).
By analyzing whether a model has memorized the injected content, we can detect models that illegally used the unauthorized data.
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
- WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models [32.29120988096214]
This paper introduces a novel approach to model fingerprinting that assigns responsibility for the generated images.
Our method modifies generative models based on each user's unique digital fingerprint, imprinting a unique identifier onto the resultant content that can be traced back to the user.
arXiv Detail & Related papers (2023-06-07T19:44:14Z)
- Ablating Concepts in Text-to-Image Diffusion Models [57.9371041022838]
Large-scale text-to-image diffusion models can generate high-fidelity images with powerful compositional ability.
These models are typically trained on an enormous amount of Internet data, often containing copyrighted material, licensed images, and personal photos.
We propose an efficient method of ablating concepts in the pretrained model, preventing the generation of a target concept.
arXiv Detail & Related papers (2023-03-23T17:59:42Z)
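As flagged in the DIAGNOSIS entry above, here is a hedged, self-contained sketch of the general coating idea: releasing protected images carrying a fixed, barely visible warping field so that a model trained on them may memorize that signal. The sinusoidal displacement field and its parameters are illustrative assumptions, not the paper's actual functions; the memorization-analysis side is omitted.

```python
# Illustrative "coating" of a protected image with a small, fixed warp;
# the displacement field here is an arbitrary stand-in.
import numpy as np
from scipy.ndimage import map_coordinates


def stealthy_warp(image: np.ndarray, amplitude: float = 0.8,
                  frequency: float = 2.0) -> np.ndarray:
    """Apply a sub-pixel sinusoidal displacement field to an (H, W) image."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dx = amplitude * np.sin(2 * np.pi * frequency * yy / h)
    dy = amplitude * np.cos(2 * np.pi * frequency * xx / w)
    # Resample the image at the displaced coordinates (row coordinates first).
    return map_coordinates(image, [yy + dy, xx + dx], order=1, mode="reflect")


if __name__ == "__main__":
    img = np.random.rand(64, 64)  # stand-in for a protected grayscale image
    coated = stealthy_warp(img)
    print("mean absolute change:", np.abs(coated - img).mean())
```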
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.