Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attack
- URL: http://arxiv.org/abs/2102.04362v1
- Date: Mon, 8 Feb 2021 17:12:20 GMT
- Title: Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attack
- Authors: Ding Sheng Ong, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang
- Abstract summary: Generative Adversarial Networks (GANs), which have been widely used to create photorealistic images, are totally unprotected.
This paper presents a complete protection framework in both black-box and white-box settings to enforce Intellectual Property Right (IPR) protection on GANs.
- Score: 26.937702447957193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ever since Machine Learning as a Service (MLaaS) emerged as a viable business
that utilizes deep learning models to generate lucrative revenue, Intellectual
Property Right (IPR) has become a major concern because these deep learning
models can easily be replicated, shared, and re-distributed by any unauthorized
third parties. To the best of our knowledge, one of the prominent deep learning
models, Generative Adversarial Networks (GANs), which have been widely used to
create photorealistic images, remains totally unprotected despite the existence
of pioneering IPR protection methodologies for Convolutional Neural Networks (CNNs).
This paper therefore presents a complete protection framework in both black-box
and white-box settings to enforce IPR protection on GANs. Empirically, we show
that the proposed method does not compromise the original GANs' performance
(i.e. image generation, image super-resolution, style transfer), and at the
same time, it is able to withstand both removal and ambiguity attacks against
embedded watermarks.
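The abstract does not spell out how the watermarks are embedded, so the following is only a minimal sketch of how a black-box / white-box GAN watermarking scheme of this kind is commonly realized: a sign-based regularizer pushes selected generator parameters to encode the owner's binary signature (white-box), and a secret trigger input makes the generator reveal a watermark pattern in its output (black-box). All names (sign_loss, verify_white_box, verify_black_box, etc.) are hypothetical and not taken from the paper; PyTorch is assumed.

```python
# Minimal illustrative sketch (PyTorch assumed). The paper's exact losses,
# layers, triggers, and thresholds are not given in the abstract; everything
# below is a hypothetical example of black-box / white-box GAN watermarking.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sign_loss(scales: torch.Tensor, signature: torch.Tensor, margin: float = 0.1) -> torch.Tensor:
    """White-box embedding: hinge loss that pushes the signs of selected
    generator parameters (e.g., normalization scales) to match a binary
    signature in {-1, +1}^n, so the signature persists in the weights."""
    return F.relu(margin - signature * scales).mean()


def verify_white_box(scales: torch.Tensor, signature: torch.Tensor) -> float:
    """Ownership check with weight access: fraction of parameters whose
    sign agrees with the claimed signature (1.0 = perfect match)."""
    return (torch.sign(scales) == signature).float().mean().item()


def verify_black_box(generator: nn.Module, trigger: torch.Tensor,
                     watermark: torch.Tensor, threshold: float = 0.9) -> bool:
    """Ownership check without weight access: feed a secret trigger input to
    the (possibly stolen) generator and test whether the owner's watermark
    pattern shows up in a designated region of the output."""
    with torch.no_grad():
        output = generator(trigger)            # e.g., shape (1, C, H, W)
    h, w = watermark.shape[-2:]
    region = output[..., :h, :w]               # region reserved for the mark
    similarity = F.cosine_similarity(region.flatten(), watermark.flatten(), dim=0)
    return similarity.item() >= threshold
```

In a sketch like this, the sign loss would simply be added to the generator objective during training (e.g. g_loss = adv_loss + lambda_wm * sign_loss(gamma_params, owner_signature), with lambda_wm a placeholder weight), and ownership is later claimed by running the two verification checks on a suspect model; this is roughly the setting in which removal and ambiguity attacks would be evaluated, though the paper's actual construction may differ.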
Related papers
- Edge-Only Universal Adversarial Attacks in Distributed Learning [49.546479320670464]
In this work, we explore the feasibility of generating universal adversarial attacks when an attacker has access to the edge part of the model only.
Our approach shows that adversaries can induce effective mispredictions in the unknown cloud part by leveraging key features on the edge side.
Our results on ImageNet demonstrate strong attack transferability to the unknown cloud part.
arXiv Detail & Related papers (2024-11-15T11:06:24Z) - Protect-Your-IP: Scalable Source-Tracing and Attribution against Personalized Generation [19.250673262185767]
We propose a unified approach for image copyright source-tracing and attribution.
We introduce an innovative watermarking-attribution method that blends proactive and passive strategies.
We have conducted experiments using various celebrity portrait series sourced online.
arXiv Detail & Related papers (2024-05-26T15:14:54Z) - ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks.
However, adversaries can still utilize model extraction attacks to steal the model intelligence encoded in the generated content.
Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z) - IPR-NeRF: Ownership Verification meets Neural Radiance Field [100.76162575686368]
This paper proposes a comprehensive intellectual property (IP) protection framework for the NeRF model in both black-box and white-box settings.
In the black-box setting, a diffusion-based solution is introduced to embed and extract the watermark.
In the white-box setting, a designated digital signature is embedded into the weights of the NeRF model by adopting the sign loss objective.
arXiv Detail & Related papers (2024-01-17T01:33:40Z) - Ownership Protection of Generative Adversarial Networks [9.355840335132124]
Generative adversarial networks (GANs) have shown remarkable success in image synthesis.
It is critical to technically protect the intellectual property of GANs.
We propose a new ownership protection method based on the common characteristics of a target model and its stolen models.
arXiv Detail & Related papers (2023-06-08T14:31:58Z) - Copyright Protection and Accountability of Generative AI:Attack,
Watermarking and Attribution [7.0159295162418385]
We propose an evaluation framework to provide a comprehensive overview of the current state of the copyright protection measures for GANs.
Our findings indicate that the current intellectual property protection methods for input images, model watermarking, and attribution networks are largely satisfactory for a wide range of GANs.
arXiv Detail & Related papers (2023-03-15T06:40:57Z) - An Embarrassingly Simple Approach for Intellectual Property Rights
Protection on Recurrent Neural Networks [11.580808497808341]
This paper proposes a practical approach for intellectual property protection on recurrent neural networks (RNNs).
We introduce the Gatekeeper concept, which resembles the recurrent nature of the RNN architecture, to embed keys.
Our protection scheme is robust and effective against ambiguity and removal attacks in both white-box and black-box settings.
arXiv Detail & Related papers (2022-10-03T07:25:59Z) - Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z) - Fingerprinting Image-to-Image Generative Adversarial Networks [53.02510603622128]
Generative Adversarial Networks (GANs) have been widely used in various application scenarios.
This paper presents a novel fingerprinting scheme for the Intellectual Property protection of image-to-image GANs based on a trusted third party.
arXiv Detail & Related papers (2021-06-19T06:25:10Z) - Deep Model Intellectual Property Protection via Deep Watermarking [122.87871873450014]
Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, if the attacker knows its full information, it can be easily stolen by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
arXiv Detail & Related papers (2021-03-08T18:58:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.