An Embarrassingly Simple Approach for Intellectual Property Rights
Protection on Recurrent Neural Networks
- URL: http://arxiv.org/abs/2210.00743v2
- Date: Tue, 4 Oct 2022 02:50:54 GMT
- Title: An Embarrassingly Simple Approach for Intellectual Property Rights
Protection on Recurrent Neural Networks
- Authors: Zhi Qin Tan, Hao Shan Wong, Chee Seng Chan
- Abstract summary: This paper proposes a practical approach for intellectual property protection on recurrent neural networks (RNNs).
We introduce the Gatekeeper concept, which mirrors the recurrent nature of the RNN architecture to embed keys.
Our protection scheme is robust and effective against ambiguity and removal attacks in both white-box and black-box settings.
- Score: 11.580808497808341
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Capitalising on deep learning models, offering Natural Language
Processing (NLP) solutions as part of Machine Learning as a Service (MLaaS) has
generated handsome revenues. At the same time, it is known that the creation of
these lucrative deep models is non-trivial. Therefore, protecting the
intellectual property rights (IPR) of these inventions from being abused,
stolen and plagiarized is vital. This paper proposes a practical approach for
IPR protection on recurrent neural networks (RNN) without all the bells and
whistles of existing IPR solutions. In particular, we introduce the Gatekeeper
concept, which mirrors the recurrent nature of the RNN architecture to embed
keys. We also design the model training scheme such that the protected RNN
model retains its original performance if and only if a genuine key is
presented. Extensive experiments show that our protection scheme is robust and
effective against ambiguity and removal attacks on different RNN variants, in
both white-box and black-box settings. Code is available at
https://github.com/zhiqin1998/RecurrentIPR
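For intuition only, here is a minimal sketch of how a key could be wired into the recurrence so that the hidden state is well-formed only under the genuine key. The GatedRNNCell class and its gating mechanics are illustrative assumptions, not the authors' design; their actual implementation is in the repository linked above.
```python
# Illustrative sketch only (NOT the authors' code; see the repository above):
# a key-conditioned gate on the hidden state, so the RNN behaves normally with
# the genuine key and degrades with any other key.
import torch
import torch.nn as nn

class GatedRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size, key_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # Hypothetical "gatekeeper": projects the owner's key to a per-unit mask.
        self.key_gate = nn.Linear(key_size, hidden_size)

    def forward(self, x, h, key):
        h_new = self.cell(x, h)
        gate = torch.sigmoid(self.key_gate(key))  # trained to open (~1) for the genuine key
        return gate * h_new                       # a wrong key attenuates/garbles the state

# During training one would pair the genuine key with the task objective and
# random keys with a degradation objective, so performance is retained if and
# only if the genuine key is presented.
cell = GatedRNNCell(input_size=32, hidden_size=64, key_size=16)
x, h, key = torch.randn(8, 32), torch.zeros(8, 64), torch.randn(8, 16)
h = cell(x, h, key)
```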
Related papers
- DNNShield: Embedding Identifiers for Deep Neural Network Ownership Verification [46.47446944218544]
This paper introduces DNNShield, a novel approach for the protection of Deep Neural Networks (DNNs).
DNNShield embeds unique identifiers within the model architecture using specialized protection layers.
We validate the effectiveness and efficiency of DNNShield through extensive evaluations across three datasets and four model architectures.
arXiv Detail & Related papers (2024-03-11T10:27:36Z)
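As a loose illustration of the identifier-embedding idea described above: the ProtectionLayer class, the sign-encoding of the identifier, and the sign_loss regularizer below are assumptions for this sketch, not DNNShield's actual protection layers.
```python
# Hedged sketch (assumed design, not DNNShield's): a protection layer whose
# scaling weights are trained toward a sign pattern derived from an owner
# identifier, so the identifier can later be read back from the weights.
import hashlib
import torch
import torch.nn as nn

def id_to_signs(owner_id: str, n: int) -> torch.Tensor:
    assert n <= 256, "sketch uses a single SHA-256 digest"
    bits = hashlib.sha256(owner_id.encode()).digest()
    return torch.tensor([1.0 if (bits[i // 8] >> (i % 8)) & 1 else -1.0
                         for i in range(n)])

class ProtectionLayer(nn.Module):
    def __init__(self, width: int, owner_id: str):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(width))
        self.register_buffer("target_signs", id_to_signs(owner_id, width))

    def forward(self, x):
        return x * self.scale  # near-identity once trained; rides along with the model

    def sign_loss(self):
        # Auxiliary term added to the task loss: push each weight's sign
        # toward the identifier's bit pattern.
        return torch.relu(0.1 - self.scale * self.target_signs).mean()

    def verify(self) -> bool:
        # Ownership check: do the trained signs reproduce the identifier?
        return bool((torch.sign(self.scale) == self.target_signs).all())
```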
- Reconstructive Neuron Pruning for Backdoor Defense [96.21882565556072]
We propose a novel defense called Reconstructive Neuron Pruning (RNP) to expose and prune backdoor neurons.
In RNP, unlearning is operated at the neuron level while recovering is operated at the filter level, forming an asymmetric reconstructive learning procedure.
We show that such an asymmetric process on only a few clean samples can effectively expose and prune the backdoor neurons implanted by a wide range of attacks.
arXiv Detail & Related papers (2023-05-24T08:29:30Z)
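The blurb compresses a two-stage procedure (neuron-level unlearning, then filter-level recovery); rather than guess at those details, here is a simpler mask-learning probe in the same family, closer to classic clean-data fine-pruning than to RNP's exact asymmetric recipe: learn which neurons a few clean samples actually need, and flag the rest as pruning candidates.
```python
# Generic sketch (assumptions throughout; NOT RNP's exact procedure): learn a
# per-neuron mask on a few clean batches with a sparsity penalty, then flag the
# neurons the mask suppresses -- clean accuracy doesn't need them, so they are
# candidates for carrying backdoor behavior.
import itertools
import torch
import torch.nn as nn

def prune_probe(model, clean_loader, layer: nn.Linear,
                steps=200, lr=0.01, lam=1e-3, k=10):
    mask = torch.ones(layer.out_features, requires_grad=True)
    handle = layer.register_forward_hook(lambda m, inp, out: out * mask)
    opt = torch.optim.SGD([mask], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    data = itertools.cycle(clean_loader)  # only a few clean batches are needed
    for _ in range(steps):
        x, y = next(data)
        opt.zero_grad()
        # Keep clean accuracy while shrinking the mask wherever possible.
        loss = loss_fn(model(x), y) + lam * mask.abs().sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            mask.clamp_(0.0, 1.0)
    handle.remove()
    # Neurons with the smallest surviving mask values are pruning candidates.
    return torch.topk(mask, k, largest=False).indices
```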
- Deep Intellectual Property Protection: A Survey [70.98782484559408]
Deep Neural Networks (DNNs) have made revolutionary progress in recent years, and are widely used in various fields.
The goal of this paper is to provide a comprehensive survey of two mainstream DNN IP protection methods: deep watermarking and deep fingerprinting.
arXiv Detail & Related papers (2023-04-28T03:34:43Z)
- Backdoor Defense via Suppressing Model Shortcuts [91.30995749139012]
In this paper, we explore the backdoor mechanism from the angle of the model structure.
We demonstrate that the attack success rate (ASR) decreases significantly when reducing the outputs of some key skip connections.
arXiv Detail & Related papers (2022-11-02T15:39:19Z)
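A minimal sketch of the diagnostic described in the entry above: scale down a residual block's shortcut and measure how the attack success rate responds. ScaledSkipBlock, asr, and the sweep placement are illustrative assumptions, not the paper's exact setup.
```python
# Sketch of the measurement, not the paper's code: suppressing a key skip
# connection (gamma < 1) should drive the attack success rate (ASR) down.
import torch
import torch.nn as nn

class ScaledSkipBlock(nn.Module):
    def __init__(self, block: nn.Module, gamma: float = 1.0):
        super().__init__()
        self.block, self.gamma = block, gamma

    def forward(self, x):
        return self.gamma * x + self.block(x)  # shrink the shortcut, keep the residual path

def asr(model, triggered_loader, target_class: int) -> float:
    """Fraction of trigger-stamped inputs classified as the attacker's target."""
    hits = total = 0
    with torch.no_grad():
        for x, _ in triggered_loader:
            hits += (model(x).argmax(1) == target_class).sum().item()
            total += x.size(0)
    return hits / total

# Hypothetical sweep (model, inner_block, and loader are placeholders):
# for gamma in (1.0, 0.75, 0.5, 0.25):
#     model.layer3 = ScaledSkipBlock(inner_block, gamma)
#     print(gamma, asr(model, triggered_loader, target_class=0))
```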
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN Models [13.043683635373213]
Deep neural networks (DNNs) have achieved tremendous success in artificial intelligence (AI) fields.
However, DNN models can easily be illegally copied, redistributed, or abused by criminals.
arXiv Detail & Related papers (2022-06-06T12:12:47Z)
- Preventing Distillation-based Attacks on Neural Network IP [0.9558392439655015]
Neural networks (NNs) are already deployed in hardware today, becoming valuable intellectual property (IP) as many hours are invested in their training and optimization.
This paper proposes an intuitive method that poisons the predictions to prevent distillation-based attacks.
The proposed technique obfuscates an NN so that an attacker cannot fully or accurately train the NN.
arXiv Detail & Related papers (2022-04-01T08:53:57Z)
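One plausible reading of "poisoning the predictions", sketched under assumptions (this is not necessarily the paper's construction): keep the served hard label intact so legitimate users are unaffected, but scramble the soft probabilities a distillation student would try to imitate.
```python
# Hedged sketch of prediction poisoning against distillation (assumed scheme):
# the top-1 class is preserved, the rest of the distribution is randomized.
import torch

def poison_logits(logits: torch.Tensor, noise_scale: float = 5.0) -> torch.Tensor:
    """Return probabilities with the argmax preserved but the rest scrambled."""
    probs = torch.softmax(logits, dim=-1)
    top1 = probs.argmax(dim=-1, keepdim=True)
    noisy = torch.softmax(torch.randn_like(logits) * noise_scale, dim=-1)
    # Force the original top-1 class to remain the maximum of the served vector.
    noisy.scatter_(-1, top1, noisy.max(dim=-1, keepdim=True).values + 0.1)
    return noisy / noisy.sum(dim=-1, keepdim=True)  # renormalize to a distribution

logits = torch.randn(4, 10)
served = poison_logits(logits)
assert (served.argmax(1) == logits.argmax(1)).all()  # hard labels unchanged
```
A student distilling from such outputs imitates noise rather than the teacher's knowledge, while ordinary top-1 users see no difference.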
- Fingerprinting Image-to-Image Generative Adversarial Networks [53.02510603622128]
Generative Adversarial Networks (GANs) have been widely used in various application scenarios.
This paper presents a novel fingerprinting scheme for the Intellectual Property protection of image-to-image GANs based on a trusted third party.
arXiv Detail & Related papers (2021-06-19T06:25:10Z)
- HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks [16.388046449021466]
We propose a novel solution for watermarking deep neural networks (DNNs).
HufuNet is highly robust against model fine-tuning/pruning, kernels cutoff/supplement, functionality-equivalent attack, and fraudulent ownership claims.
arXiv Detail & Related papers (2021-03-25T06:55:22Z)
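The title already telegraphs the mechanism; here is a toy sketch of the split-watermark idea, with the parameter splitting and embedding positions as assumptions rather than HufuNet's actual construction.
```python
# Toy sketch (assumed scheme, not HufuNet's): a small watermark network is cut
# in two; the "left" half is hidden inside the host model's weights, while the
# "right" half stays with the owner for later verification.
import torch
import torch.nn as nn

watermark = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
params = torch.cat([p.detach().flatten() for p in watermark.parameters()])
left, right = params[: len(params) // 2], params[len(params) // 2 :]

host = nn.Linear(256, 256)  # stand-in for a large protected model
with torch.no_grad():
    idx = torch.randperm(host.weight.numel())[: len(left)]  # secret embedding positions
    host.weight.view(-1)[idx] = left  # hide the left piece inside the host

# Verification: extract the left piece from the suspect model and rejoin it with
# the owner-held right piece to reassemble the watermark network.
extracted = host.weight.detach().view(-1)[idx]
assert torch.allclose(extracted, left)
```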
- Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attack [26.937702447957193]
Generative Adversarial Networks (GANs), which have been widely used to create photorealistic images, are totally unprotected.
This paper presents a complete protection framework in both black-box and white-box settings to enforce Intellectual Property Right (IPR) protection on GANs.
arXiv Detail & Related papers (2021-02-08T17:12:20Z)
- Deep-Lock: Secure Authorization for Deep Neural Networks [9.0579592131111]
Deep Neural Network (DNN) models are considered valuable Intellectual Properties (IP) in several business models.
Prevention of IP theft and unauthorized usage of such DNN models has been raised as a significant concern by industry.
We propose a generic and lightweight key-based model-locking scheme, which ensures that a locked model functions correctly only upon applying the correct secret key.
arXiv Detail & Related papers (2020-08-13T15:22:49Z)
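Deep-Lock reportedly encrypts each model parameter using key-derived S-boxes; the sketch below uses a key-seeded permutation as a toy stand-in for real encryption, not the paper's cipher, to show the lock/unlock contract.
```python
# Toy analogue of key-based model locking (NOT Deep-Lock's S-box scheme):
# weights are scrambled with a key-seeded permutation, so the model only
# computes correctly after unlocking with the genuine key.
import torch
import torch.nn as nn

def _perm(key: int, n: int) -> torch.Tensor:
    g = torch.Generator().manual_seed(key)  # key-derived permutation
    return torch.randperm(n, generator=g)

def lock(model: nn.Module, key: int) -> None:
    with torch.no_grad():
        for p in model.parameters():
            flat = p.view(-1)
            flat.copy_(flat[_perm(key, flat.numel())])  # scramble the weights

def unlock(model: nn.Module, key: int) -> None:
    with torch.no_grad():
        for p in model.parameters():
            flat = p.view(-1)
            perm = _perm(key, flat.numel())
            inv = torch.empty_like(perm)
            inv[perm] = torch.arange(flat.numel())  # invert the permutation
            flat.copy_(flat[inv])

model = nn.Linear(8, 4)
original = model.weight.detach().clone()
lock(model, key=0xC0FFEE)    # distribute the locked model
unlock(model, key=0xC0FFEE)  # only the genuine key restores function
assert torch.allclose(model.weight, original)
```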