A Protection Method of Trained CNN Model with Secret Key from
Unauthorized Access
- URL: http://arxiv.org/abs/2105.14756v1
- Date: Mon, 31 May 2021 07:37:33 GMT
- Title: A Protection Method of Trained CNN Model with Secret Key from
Unauthorized Access
- Authors: AprilPyone MaungMaung and Hitoshi Kiya
- Abstract summary: We propose a novel method for protecting convolutional neural network (CNN) models with a secret key set.
The method protects a model not only from copyright infringement but also its functionality from unauthorized access.
- Score: 15.483078145498085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel method for protecting convolutional neural
network (CNN) models with a secret key set so that unauthorized users without
the correct key set cannot access trained models. The method enables us to
protect a model not only from copyright infringement but also its functionality
from unauthorized access without any noticeable overhead. We introduce
three block-wise transformations with a secret key set to generate learnable
transformed images: pixel shuffling, negative/positive transformation, and FFX
encryption. Protected models are trained by using transformed images. The
results of experiments with the CIFAR and ImageNet datasets show that the
performance of a protected model was close to that of non-protected models when
the key set was correct, while the accuracy severely dropped when an incorrect
key set was given. The protected model was also demonstrated to be robust
against various attacks. Compared with the state-of-the-art model protection
with passports, the proposed method does not have any additional layers in the
network, and therefore, there is no overhead during training and inference
processes.
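As a rough illustration of the first two transformations, the following Python sketch applies a key-dependent block-wise pixel shuffling or negative/positive transformation to an input image. The block size of 4, the NumPy-based key handling, and the function name keyed_block_transform are illustrative assumptions rather than the authors' implementation, and the FFX-encryption variant is omitted because it would require a format-preserving encryption library.
```python
import numpy as np

def keyed_block_transform(img, key, block=4, mode="shuffle"):
    """Block-wise, key-dependent image transformation (illustrative sketch).

    img   : H x W x C uint8 array; H and W are assumed divisible by `block`.
    key   : integer seed standing in for the secret key (assumption).
    mode  : "shuffle" = pixel shuffling, "negpos" = negative/positive transform.
    """
    h, w, c = img.shape
    rng = np.random.default_rng(key)                 # secret key -> deterministic pattern
    n = block * block * c
    perm = rng.permutation(n)                        # one fixed permutation, reused per block
    flip = rng.integers(0, 2, size=n).astype(bool)   # one fixed flip mask, reused per block
    out = img.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y + block, x:x + block].reshape(-1)
            if mode == "shuffle":
                patch = patch[perm]                  # shuffle pixels within the block
            else:
                patch = np.where(flip, 255 - patch, patch)  # invert selected pixel values
            out[y:y + block, x:x + block] = patch.reshape(block, block, c)
    return out

# Both training and test images are transformed with the same key, so only a user
# who holds the correct key can feed the protected model matching inputs.
image = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
protected = keyed_block_transform(image, key=1234, block=4, mode="shuffle")
```
Because the transformation acts only on the input images, the network itself is unchanged, which is consistent with the abstract's claim of no additional layers and no training or inference overhead.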
Related papers
- EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations [73.94175015918059]
We introduce a novel approach, EnTruth, which Enhances Traceability of unauthorized dataset usage.
By strategically incorporating template memorization, EnTruth can trigger specific behavior in unauthorized models as evidence of infringement.
Our method is the first to investigate the positive application of memorization and use it for copyright protection, which turns a curse into a blessing.
arXiv Detail & Related papers (2024-06-20T02:02:44Z)
- ModelLock: Locking Your Model With a Spell [90.36433941408536]
A diffusion-based framework dubbed ModelLock explores text-guided image editing to transform the training data into unique styles or add new objects in the background.
A model finetuned on this edited dataset will be locked and can only be unlocked by the key prompt, i.e., the text prompt used to transform the data.
We conduct extensive experiments on both image classification and segmentation tasks, and show that ModelLock can effectively lock the finetuned models without significantly reducing the expected performance.
arXiv Detail & Related papers (2024-05-25T15:52:34Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and to generate a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting injected content into the protected dataset.
Specifically, we modify the protected images by adding unique content to them using stealthy image warping functions.
By analyzing whether a model has memorized the injected content, we can detect models that have illegally used the unauthorized data.
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
- Access Control with Encrypted Feature Maps for Object Detection Models [10.925242558525683]
In this paper, we propose an access control method with a secret key for object detection models.
Selected feature maps are encrypted with a secret key for training and testing models (see the sketch after this list).
In an experiment, the protected models allowed authorized users to obtain almost the same performance as that of non-protected models.
arXiv Detail & Related papers (2022-09-29T14:46:04Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct ownership verification by checking whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- An Encryption Method of ConvMixer Models without Performance Degradation [14.505867475659276]
We propose an encryption method for ConvMixer models with a secret key.
The effectiveness of the proposed method is evaluated in terms of classification accuracy and model protection.
arXiv Detail & Related papers (2022-07-25T07:09:16Z)
- Protecting Semantic Segmentation Models by Using Block-wise Image Encryption with Secret Key from Unauthorized Access [13.106063755117399]
We propose to protect semantic segmentation models from unauthorized access by utilizing block-wise transformation with a secret key.
Experiment results show that the proposed protection method allows rightful users with the correct key to access the model at full capacity while degrading performance for unauthorized users.
arXiv Detail & Related papers (2021-07-20T09:31:15Z)
- Transfer Learning-Based Model Protection With Secret Key [15.483078145498085]
We propose a novel method for protecting trained models with a secret key.
In experiments with the ImageNet dataset, it is shown that the performance of a protected model was close to that of a non-protected model when the correct key was given.
arXiv Detail & Related papers (2021-03-05T08:12:11Z)
- Training DNN Model with Secret Key for Model Protection [17.551718914117917]
We propose a model protection method by using block-wise pixel shuffling with a secret key as a preprocessing technique to input images.
Experiment results show that the performance of the protected model is close to that of non-protected models when the key is correct.
arXiv Detail & Related papers (2020-08-06T04:25:59Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
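To make the "Access Control with Encrypted Feature Maps for Object Detection Models" entry above more concrete, here is a minimal PyTorch sketch in which a secret key drives a fixed channel permutation of a selected feature map inside the network. The layer name KeyedChannelPermutation, its placement, and the permutation-based scheme are illustrative assumptions, not that paper's actual encryption method.
```python
import torch
import torch.nn as nn

class KeyedChannelPermutation(nn.Module):
    """Keyed permutation of feature-map channels (illustrative sketch only;
    the cited paper's exact feature-map encryption may differ)."""

    def __init__(self, num_channels: int, key: int):
        super().__init__()
        g = torch.Generator().manual_seed(key)  # secret key -> fixed channel order
        self.register_buffer("perm", torch.randperm(num_channels, generator=g))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W); reorder channels with the key-derived permutation.
        return x[:, self.perm]

# Hypothetical placement after an early convolutional block of a detector backbone;
# the same keyed layer is used at training and test time, so full accuracy is only
# reachable with the correct key.
enc = KeyedChannelPermutation(num_channels=64, key=2021)
feature_map = torch.randn(2, 64, 56, 56)
encrypted = enc(feature_map)
```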