Model Copyright Protection in Buyer-seller Environment
- URL: http://arxiv.org/abs/2312.05262v1
- Date: Tue, 5 Dec 2023 07:15:10 GMT
- Title: Model Copyright Protection in Buyer-seller Environment
- Authors: Yusheng Guo, Nan Zhong, Zhenxing Qian, Xinpeng Zhang
- Abstract summary: We propose a novel copyright protection scheme for deep neural networks (DNNs) using an input-sensitive neural network (ISNN).
During the training phase, we add a specific perturbation to clean images and mark them as legal inputs, while all other inputs are treated as illegal.
Experimental results demonstrate that the proposed scheme is effective, valid, and secure.
- Score: 35.2914055333853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training a deep neural network (DNN) requires a high computational cost.
Buying models from sellers with large computing resources has become
prevalent. However, the buyer-seller environment is not always trusted.
To protect neural network models from leaking in an untrusted environment,
we propose a novel copyright protection scheme for DNNs using an input-sensitive
neural network (ISNN). The main idea of the ISNN is to make the DNN sensitive to
the key and copyright information, so that only a buyer with the correct key can
utilize the ISNN. During the training phase, we add a specific perturbation to
clean images and mark them as legal inputs, while all other inputs are
treated as illegal. We design a loss function that keeps the outputs for
legal inputs close to the true results while pushing the outputs for illegal
inputs far away from them. Experimental results demonstrate that the proposed
scheme is effective, valid, and secure.
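The abstract's training objective (pull legal, key-perturbed inputs toward the true labels; push all other inputs away) can be sketched as a two-term loss. This is a minimal illustration, not the paper's actual formulation: the margin-based hinge on the illegal term, the `margin` and `lam` hyperparameters, and the function names are all assumptions for the sketch.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy; labels are integer class indices."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def isnn_loss(legal_logits, illegal_logits, labels, margin=5.0, lam=1.0):
    """Hypothetical ISNN-style objective: minimize error on legal
    (key-perturbed) inputs, and penalize the model whenever illegal
    inputs still produce correct (low cross-entropy) outputs."""
    legal_term = cross_entropy(legal_logits, labels)           # pull toward truth
    illegal_term = max(0.0, margin - cross_entropy(illegal_logits, labels))  # push away, up to a margin
    return legal_term + lam * illegal_term
```

With this shape, a model that answers correctly on illegal (un-keyed) inputs incurs a large penalty, while one that is only accurate on keyed inputs minimizes the loss, matching the behavior the abstract describes.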
Related papers
- Salted Inference: Enhancing Privacy while Maintaining Efficiency of
Split Inference in Mobile Computing [8.915849482780631]
In split inference, a deep neural network (DNN) is partitioned to run the early part of the DNN at the edge and the later part of the DNN in the cloud.
This meets two key requirements for on-device machine learning: input privacy and computation efficiency.
We introduce Salted DNNs: a novel approach that enables clients at the edge, who run the early part of the DNN, to control the semantic interpretation of the DNN's outputs at inference time.
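The split-inference setup above (early layers on the edge, later layers in the cloud) can be illustrated with a toy partitioned MLP. The salt here is modeled as a client-held permutation applied to the output classes, which is a simplification of the Salted DNN idea (the paper controls output semantics at training time); all weights, shapes, and function names below are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
relu = lambda x: np.maximum(x, 0.0)

# Toy 3-layer MLP with random placeholder weights (illustration only).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 4))

def edge_part(x):
    """Early layers run on-device, so the raw input never leaves the client."""
    return relu(x @ W1)

def cloud_part(h, salt_perm):
    """Later layers run in the cloud; logits return in salted (permuted) class order,
    so the cloud cannot read off the semantic meaning of the output."""
    logits = relu(h @ W2) @ W3
    return logits[:, salt_perm]

def unsalt(salted_logits, salt_perm):
    """Client inverts the permutation with its private salt to recover class order."""
    return salted_logits[:, np.argsort(salt_perm)]

x = rng.normal(size=(1, 8))
salt = rng.permutation(4)                # client-held secret
salted = cloud_part(edge_part(x), salt)
logits = unsalt(salted, salt)            # equals the unpartitioned forward pass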
arXiv Detail & Related papers (2023-10-20T09:53:55Z) - Deep Intellectual Property Protection: A Survey [70.98782484559408]
Deep Neural Networks (DNNs) have made revolutionary progress in recent years, and are widely used in various fields.
The goal of this paper is to provide a comprehensive survey of two mainstream DNN IP protection methods: deep watermarking and deep fingerprinting.
arXiv Detail & Related papers (2023-04-28T03:34:43Z) - An Embarrassingly Simple Approach for Intellectual Property Rights
Protection on Recurrent Neural Networks [11.580808497808341]
This paper proposes a practical approach for the intellectual property protection on recurrent neural networks (RNNs)
We introduce the Gatekeeper concept, which leverages the recurrent nature of the RNN architecture to embed keys.
Our protection scheme is robust and effective against ambiguity and removal attacks in both white-box and black-box protection schemes.
arXiv Detail & Related papers (2022-10-03T07:25:59Z) - Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled
Membership Inference [17.881686153284267]
Deep neural networks (DNNs) have already achieved great success in a lot of application areas and brought profound changes to our society.
How to protect the intellectual property (IP) of DNNs against infringement is one of the most important yet very challenging topics.
This paper proposes a novel technique called pooled membership inference (PMI) to protect the IP of DNN models.
arXiv Detail & Related papers (2022-09-09T04:06:29Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - HufuNet: Embedding the Left Piece as Watermark and Keeping the Right
Piece for Ownership Verification in Deep Neural Networks [16.388046449021466]
We propose a novel solution for watermarking deep neural networks (DNNs).
HufuNet is highly robust against model fine-tuning/pruning, kernels cutoff/supplement, functionality-equivalent attack, and fraudulent ownership claims.
arXiv Detail & Related papers (2021-03-25T06:55:22Z) - Deep Serial Number: Computational Watermarking for DNN Intellectual
Property Protection [53.40245698216239]
DSN (Deep Serial Number) is a watermarking algorithm designed specifically for deep neural networks (DNNs).
Inspired by serial numbers in safeguarding conventional software IP, we propose the first implementation of serial number embedding within DNNs.
arXiv Detail & Related papers (2020-11-17T21:42:40Z) - Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged
Fraudsters [78.53851936180348]
We introduce two types of camouflages based on recent empirical studies, i.e., the feature camouflage and the relation camouflage.
Existing GNNs have not addressed these two camouflages, which results in their poor performance in fraud detection problems.
We propose a new model named CAmouflage-REsistant GNN (CARE-GNN) to enhance the GNN aggregation process with three unique modules against camouflages.
arXiv Detail & Related papers (2020-08-19T22:33:12Z) - Noise-Response Analysis of Deep Neural Networks Quantifies Robustness
and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can harbor 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders-of-magnitude faster than existing approaches (seconds versus
arXiv Detail & Related papers (2020-07-31T23:52:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.