©Plug-in Authorization for Human Content Copyright Protection in Text-to-Image Model
- URL: http://arxiv.org/abs/2404.11962v1
- Date: Thu, 18 Apr 2024 07:48:00 GMT
- Title: ©Plug-in Authorization for Human Content Copyright Protection in Text-to-Image Model
- Authors: Chao Zhou, Huishuai Zhang, Jiang Bian, Weiming Zhang, Nenghai Yu
- Abstract summary: State-of-the-art models create high-quality content without crediting original creators.
We propose the ©Plug-in Authorization framework, introducing three operations: addition, extraction, and combination.
Extraction allows creators to reclaim copyright from infringing models, and combination enables users to merge different copyright plug-ins.
- Score: 71.47762442337948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the contentious issue of copyright infringement in images generated by text-to-image models, sparking debates among AI developers, content creators, and legal entities. State-of-the-art models create high-quality content without crediting original creators, causing concern in the artistic community. To mitigate this, we propose the ©Plug-in Authorization framework, introducing three operations: addition, extraction, and combination. Addition involves training a ©plug-in for a specific copyright, facilitating proper credit attribution. Extraction allows creators to reclaim copyright from infringing models, and combination enables users to merge different ©plug-ins. These operations act as permits, incentivizing fair use and providing flexibility in authorization. We present innovative approaches, "Reverse LoRA" for extraction and "EasyMerge" for seamless combination. Experiments in artist-style replication and cartoon IP recreation demonstrate the ©plug-ins' effectiveness, offering a valuable solution for human copyright protection in the age of generative AI.
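The abstract describes the ©plug-in only at a high level. As a rough, non-authoritative illustration: since the paper mentions "Reverse LoRA", a ©plug-in can be pictured as a LoRA-style low-rank adapter trained for one copyrighted concept and attached to, or detached from, a frozen base model. The PyTorch toy below is a sketch under that assumption; `CopyrightPlugin` and `merge_plugins` are hypothetical names, and the simple weighted averaging merely stands in for whatever "EasyMerge" actually does.

```python
import torch
import torch.nn as nn

class CopyrightPlugin(nn.Module):
    """Toy LoRA-style adapter standing in for one © plug-in (illustrative only)."""
    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base.requires_grad_(False)                      # frozen base-model layer
        self.down = nn.Linear(base.in_features, rank, bias=False)   # low-rank "down" projection
        self.up = nn.Linear(rank, base.out_features, bias=False)    # low-rank "up" projection
        nn.init.zeros_(self.up.weight)                               # plug-in starts as a no-op
        self.scale = scale

    def forward(self, x):
        # "Addition": base output plus the low-rank, copyright-specific update.
        return self.base(x) + self.scale * self.up(self.down(x))

def merge_plugins(plugins, weights):
    """Hypothetical combination step: average the low-rank updates of several
    plug-ins into one adapter (a stand-in for "EasyMerge", not its algorithm)."""
    merged = CopyrightPlugin(plugins[0].base, rank=plugins[0].down.out_features)
    with torch.no_grad():
        merged.down.weight.copy_(sum(w * p.down.weight for w, p in zip(weights, plugins)))
        merged.up.weight.copy_(sum(w * p.up.weight for w, p in zip(weights, plugins)))
    return merged

# Usage sketch: wrap one projection layer of a frozen text-to-image model,
# train only the plug-in on the protected style ("addition"), detach or negate
# it to withhold that style ("extraction"), or merge plug-ins ("combination").
layer = nn.Linear(320, 320)
style_a = CopyrightPlugin(layer)
style_b = CopyrightPlugin(layer)
combined = merge_plugins([style_a, style_b], weights=[0.5, 0.5])
out = combined(torch.randn(2, 320))
```

In this picture, the three operations in the abstract map onto adapter management: training the adapter, removing its contribution, and merging several adapters; the paper's actual procedures may differ.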
Related papers
- Copyright-Aware Incentive Scheme for Generative Art Models Using Hierarchical Reinforcement Learning [42.63462923848866]
We introduce a novel copyright metric grounded in copyright law and court precedents on infringement.
We then employ the TRAK method to estimate the contribution of data holders.
We design a hierarchical budget allocation method based on reinforcement learning to determine the budget for each round and each data holder's remuneration.
arXiv Detail & Related papers (2024-10-26T13:29:43Z)
- RLCP: A Reinforcement Learning-based Copyright Protection Method for Text-to-Image Diffusion Model [42.77851688874563]
We propose a Reinforcement Learning-based Copyright Protection (RLCP) method for text-to-image diffusion models.
Our approach minimizes the generation of copyright-infringing content while maintaining the quality of the model-generated dataset.
arXiv Detail & Related papers (2024-08-29T15:39:33Z)
- Fantastic Copyrighted Beasts and How (Not) to Generate Them [83.77348858322523]
Copyrighted characters pose a difficult challenge for image generation services.
At least one lawsuit has already resulted in damages being awarded over the generation of these characters.
arXiv Detail & Related papers (2024-06-20T17:38:16Z)
- Copyright Protection in Generative AI: A Technical Perspective [58.84343394349887]
Generative AI has witnessed rapid advancement in recent years, expanding its capabilities to create synthesized content such as text, images, audio, and code.
The high fidelity and authenticity of contents generated by these Deep Generative Models (DGMs) have sparked significant copyright concerns.
This work delves into this issue by providing a comprehensive overview of copyright protection from a technical perspective.
arXiv Detail & Related papers (2024-02-04T04:00:33Z)
- A Dataset and Benchmark for Copyright Infringement Unlearning from Text-to-Image Diffusion Models [52.49582606341111]
Copyright law confers upon creators the exclusive rights to reproduce, distribute, and monetize their creative works.
Recent progress in text-to-image generation has introduced formidable challenges to copyright enforcement.
We introduce a novel pipeline that harmonizes CLIP, ChatGPT, and diffusion models to curate a dataset.
arXiv Detail & Related papers (2024-01-04T11:14:01Z)
- CopyScope: Model-level Copyright Infringement Quantification in the Diffusion Workflow [6.6282087165087304]
Copyright infringement quantification is the primary yet challenging step towards tracing the copyright of AI-generated images.
We propose CopyScope, a new framework to quantify the infringement of AI-generated images from the model level.
arXiv Detail & Related papers (2023-10-13T13:08:09Z)
- FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models [64.89896692649589]
We propose FT-Shield, a watermarking solution tailored for the fine-tuning of text-to-image diffusion models.
FT-Shield addresses copyright protection challenges by designing new watermark generation and detection strategies.
arXiv Detail & Related papers (2023-10-03T19:50:08Z)
- A Plug-and-Play Defensive Perturbation for Copyright Protection of DNN-based Applications [1.4226119891617357]
We propose a plug-and-play invisible copyright protection method based on defensive perturbation for DNN-based applications (i.e., style transfer).
We project the copyright information into a defensive perturbation with the designed copyright encoder, and the perturbation is added to the image to be protected.
Then, we extract the copyright information from the encoded copyrighted image with the devised copyright decoder (see the illustrative sketch below).
arXiv Detail & Related papers (2023-04-20T23:57:39Z)
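The last entry describes an encode-add-decode pipeline: a copyright encoder projects the copyright information into an imperceptible perturbation that is added to the image, and a copyright decoder later recovers that information from the protected image. The toy PyTorch sketch below illustrates only this general idea under assumed shapes and module names (`CopyrightEncoder` and `CopyrightDecoder` are hypothetical); it is not the paper's architecture or training procedure, which would train both networks jointly and for robustness to downstream processing such as style transfer.

```python
import torch
import torch.nn as nn

class CopyrightEncoder(nn.Module):
    """Hypothetical encoder: maps a copyright bit-string to a small,
    bounded, image-shaped perturbation."""
    def __init__(self, n_bits=32, image_shape=(3, 64, 64), eps=2 / 255):
        super().__init__()
        c, h, w = image_shape
        self.net = nn.Sequential(nn.Linear(n_bits, 512), nn.ReLU(),
                                 nn.Linear(512, c * h * w))
        self.image_shape = image_shape
        self.eps = eps  # keeps the perturbation visually negligible

    def forward(self, bits):
        delta = self.net(bits).view(-1, *self.image_shape)
        return self.eps * torch.tanh(delta)  # bounded "defensive perturbation"

class CopyrightDecoder(nn.Module):
    """Hypothetical decoder: recovers the embedded bits from an encoded image."""
    def __init__(self, n_bits=32, image_shape=(3, 64, 64)):
        super().__init__()
        c, h, w = image_shape
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(c * h * w, 512), nn.ReLU(),
                                 nn.Linear(512, n_bits))

    def forward(self, image):
        return torch.sigmoid(self.net(image))  # per-bit probabilities

# Protect: add the perturbation to the artwork; verify: decode the bits back.
bits = torch.randint(0, 2, (1, 32)).float()     # copyright information
artwork = torch.rand(1, 3, 64, 64)
encoder, decoder = CopyrightEncoder(), CopyrightDecoder()
protected = (artwork + encoder(bits)).clamp(0, 1)
recovered = (decoder(protected) > 0.5).float()  # with joint training, this
                                                # would reproduce `bits`
```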
This list is automatically generated from the titles and abstracts of the papers on this site.