A Plug-and-Play Defensive Perturbation for Copyright Protection of
DNN-based Applications
- URL: http://arxiv.org/abs/2304.10679v2
- Date: Fri, 19 May 2023 00:28:22 GMT
- Title: A Plug-and-Play Defensive Perturbation for Copyright Protection of
DNN-based Applications
- Authors: Donghua Wang, Wen Yao, Tingsong Jiang, Weien Zhou, Lang Lin, and
Xiaoqian Chen
- Abstract summary: We propose a plug-and-play invisible copyright protection method based on defensive perturbation for DNN-based applications (i.e., style transfer).
We project the copyright information to the defensive perturbation with the designed copyright encoder, which is added to the image to be protected.
Then, we extract the copyright information from the encoded copyrighted image with the devised copyright decoder.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The wide deployment of deep neural network (DNN)-based applications
(e.g., style transfer, cartoonization) has stimulated the need for copyright
protection of such applications' output. Although some traditional visible
copyright techniques are available, they introduce undesired traces and result
in a poor user experience. In this paper, we propose a novel plug-and-play
invisible copyright protection method based on defensive perturbation for
DNN-based applications (i.e., style transfer). Rather than applying
perturbations to attack DNN models, we explore their potential use in
copyright protection. Specifically, we project the copyright information into
a defensive perturbation with a designed copyright encoder, and the
perturbation is added to the image to be protected. Then, we extract the
copyright information from the encoded copyrighted image with a devised
copyright decoder. Furthermore, we use a robustness module to strengthen the
decoder's ability to handle images with various distortions (e.g., JPEG
compression), which may occur when the user posts the image on social media.
To ensure the quality of both the encoded images and the decoded copyright
images, a loss function is elaborately devised. Objective and subjective
experimental results demonstrate the effectiveness of the proposed method. We
have also conducted physical-world tests on social media (i.e., WeChat and
Twitter) by posting encoded copyright images. The results show that the
copyright information in encoded images saved from social media can still be
correctly extracted.
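The encode/decode pipeline in the abstract (project copyright bits into a low-amplitude perturbation, add it to the image, then recover the bits from the encoded image alone) can be sketched with a classic spread-spectrum embedding. This is a minimal illustrative stand-in, not the paper's learned encoder/decoder: the function names, the +/-1 carrier patterns, and the `alpha` amplitude are all assumptions for the sketch.

```python
import numpy as np

# Toy stand-in for the paper's copyright encoder/decoder: spread-spectrum
# embedding. Each copyright bit modulates a fixed pseudo-random +/-1 carrier
# pattern; the summed low-amplitude carriers form the "defensive perturbation".

def make_patterns(n_bits, shape, seed=0):
    """Pseudo-random +/-1 carrier patterns shared by encoder and decoder."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(n_bits,) + shape)

def encode(image, bits, patterns, alpha=0.05):
    """Add a low-amplitude perturbation carrying `bits` to `image` (in [0, 1])."""
    signs = np.where(np.asarray(bits) > 0, 1.0, -1.0)
    perturbation = alpha * np.tensordot(signs, patterns, axes=1)
    return np.clip(image + perturbation, 0.0, 1.0)

def decode(encoded, patterns):
    """Recover bits by correlating the mean-removed image with each carrier."""
    centered = encoded - encoded.mean()
    scores = np.tensordot(patterns, centered, axes=([1, 2], [0, 1]))
    return (scores > 0).astype(int).tolist()

# Usage: embed an 8-bit copyright payload into a 64x64 image and recover it.
rng = np.random.default_rng(1)
image = rng.uniform(0.0, 1.0, size=(64, 64))
bits = [1, 0, 1, 1, 0, 0, 1, 0]
patterns = make_patterns(len(bits), image.shape)
encoded = encode(image, bits, patterns)
recovered = decode(encoded, patterns)
```

Unlike this fixed-carrier sketch, the paper's learned decoder is paired with a robustness module so decoding survives distortions such as JPEG compression introduced by social media platforms.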
Related papers
- RLCP: A Reinforcement Learning-based Copyright Protection Method for Text-to-Image Diffusion Model [42.77851688874563]
We propose a Reinforcement Learning-based Copyright Protection (RLCP) method for Text-to-Image Diffusion Models.
Our approach minimizes the generation of copyright-infringing content while maintaining the quality of the model-generated dataset.
arXiv Detail & Related papers (2024-08-29T15:39:33Z) - ©Plug-in Authorization for Human Content Copyright Protection in Text-to-Image Model [71.47762442337948]
State-of-the-art models create high-quality content without crediting original creators.
We propose the copyright Plug-in Authorization framework, introducing three operations: addition, extraction, and combination.
Extraction allows creators to reclaim copyright from infringing models, and combination enables users to merge different copyright plug-ins.
arXiv Detail & Related papers (2024-04-18T07:48:00Z) - Copyright Protection in Generative AI: A Technical Perspective [58.84343394349887]
Generative AI has witnessed rapid advancement in recent years, expanding its capabilities to create synthesized content such as text, images, audio, and code.
The high fidelity and authenticity of contents generated by these Deep Generative Models (DGMs) have sparked significant copyright concerns.
This work delves into this issue by providing a comprehensive overview of copyright protection from a technical perspective.
arXiv Detail & Related papers (2024-02-04T04:00:33Z) - A Dataset and Benchmark for Copyright Infringement Unlearning from Text-to-Image Diffusion Models [52.49582606341111]
Copyright law confers creators the exclusive rights to reproduce, distribute, and monetize their creative works.
Recent progress in text-to-image generation has introduced formidable challenges to copyright enforcement.
We introduce a novel pipeline that harmonizes CLIP, ChatGPT, and diffusion models to curate a dataset.
arXiv Detail & Related papers (2024-01-04T11:14:01Z) - EditGuard: Versatile Image Watermarking for Tamper Localization and
Copyright Protection [19.140822655858873]
We propose a proactive forensics framework EditGuard to unify copyright protection and tamper-agnostic localization.
It can offer a meticulous embedding of imperceptible watermarks and precise decoding of tampered areas and copyright information.
Our experiments demonstrate that EditGuard balances the tamper localization accuracy, copyright recovery precision, and generalizability to various AIGC-based tampering methods.
arXiv Detail & Related papers (2023-12-12T15:41:24Z) - IMPRESS: Evaluating the Resilience of Imperceptible Perturbations
Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z) - FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models [64.89896692649589]
We propose FT-Shield, a watermarking solution tailored for the fine-tuning of text-to-image diffusion models.
FT-Shield addresses copyright protection challenges by designing new watermark generation and detection strategies.
arXiv Detail & Related papers (2023-10-03T19:50:08Z) - Can Copyright be Reduced to Privacy? [23.639303165101385]
We argue that while algorithmic stability may be perceived as a practical tool to detect copying, such copying does not necessarily constitute copyright infringement.
If adopted as a standard for detecting and establishing copyright infringement, algorithmic stability may undermine the intended objectives of copyright law.
arXiv Detail & Related papers (2023-05-24T07:22:41Z) - Protecting the Intellectual Properties of Deep Neural Networks with an
Additional Class and Steganographic Images [7.234511676697502]
We propose a method to protect the intellectual properties of deep neural networks (DNN) models by using an additional class and steganographic images.
We adopt the least significant bit (LSB) image steganography to embed users' fingerprints into watermark key images.
On Fashion-MNIST and CIFAR-10 datasets, the proposed method can obtain 100% watermark accuracy and 100% fingerprint authentication success rate.
arXiv Detail & Related papers (2021-04-19T11:03:53Z)
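The least significant bit (LSB) steganography mentioned in the last entry can be sketched as follows. This is a generic illustration of the technique rather than that paper's implementation; the function names and the single-channel uint8 image are assumptions for the sketch.

```python
import numpy as np

def lsb_embed(image, bits):
    """Embed a bit sequence into the least significant bits of a uint8 image.

    Clearing each target pixel's lowest bit (& 0xFE) and OR-ing in the payload
    bit changes every pixel value by at most 1, keeping the change invisible.
    """
    flat = image.flatten()  # flatten() copies, so the input stays untouched
    assert len(bits) <= flat.size, "payload does not fit in the image"
    payload = np.asarray(bits, dtype=np.uint8)
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | payload
    return flat.reshape(image.shape)

def lsb_extract(image, n_bits):
    """Read the payload back from the lowest bit of the first n_bits pixels."""
    return (image.flatten()[:n_bits] & 1).tolist()

# Usage: hide an 8-bit fingerprint in a 16x16 grayscale image.
img = np.arange(256, dtype=np.uint8).reshape(16, 16)
fingerprint = [1, 0, 1, 1, 0, 1, 0, 0]
stego = lsb_embed(img, fingerprint)
```

Note that plain LSB embedding is fragile: lossy operations such as JPEG compression rewrite low-order bits, which is why the paper above pairs it with watermark key images rather than relying on it alone.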
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.