Transfer Learning-Based Model Protection With Secret Key
- URL: http://arxiv.org/abs/2103.03525v1
- Date: Fri, 5 Mar 2021 08:12:11 GMT
- Title: Transfer Learning-Based Model Protection With Secret Key
- Authors: MaungMaung AprilPyone and Hitoshi Kiya
- Abstract summary: We propose a novel method for protecting trained models with a secret key.
In experiments with the ImageNet dataset, it is shown that the performance of a protected model was close to that of a non-protected model when the correct key was given.
- Score: 15.483078145498085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method for protecting trained models with a secret key so
that unauthorized users without the correct key cannot get the correct
inference. By taking advantage of transfer learning, the proposed method
enables us to train a large protected model like a model trained with ImageNet
by using a small subset of a training dataset. It utilizes a learnable
encryption step with a secret key to generate learnable transformed images.
Models with pre-trained weights are fine-tuned by using such transformed
images. In experiments with the ImageNet dataset, it is shown that the
performance of a protected model was close to that of a non-protected model
when the correct key was given, while the accuracy tremendously dropped when an
incorrect key was used. The protected model was also demonstrated to be robust
against key estimation attacks.
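The learnable encryption step described above can be sketched as a key-derived, block-wise pixel permutation applied to every input image before fine-tuning and inference. The sketch below is a minimal illustration, not the authors' exact implementation: the block size, function name, and use of NumPy's seeded generator are assumptions.

```python
import numpy as np

def encrypt_image(img: np.ndarray, key: int, block_size: int = 4) -> np.ndarray:
    """Block-wise pixel shuffling with a secret key (illustrative sketch).

    The image is split into non-overlapping blocks, and the same
    key-derived permutation of pixel positions is applied inside every
    block. A model fine-tuned on images transformed this way gives
    correct inference only when test inputs are transformed with the
    same key.
    """
    h, w, c = img.shape
    assert h % block_size == 0 and w % block_size == 0
    # Derive one fixed within-block permutation from the secret key.
    rng = np.random.default_rng(key)
    perm = rng.permutation(block_size * block_size)

    out = img.copy()
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            # Flatten the block's pixels, permute them, and write back.
            block = out[y:y + block_size, x:x + block_size].reshape(-1, c)
            out[y:y + block_size, x:x + block_size] = block[perm].reshape(
                block_size, block_size, c)
    return out
```

Under this scheme, training and authorized inference both call `encrypt_image(x, key)` on every input; a user supplying a wrong key (or no transformation) feeds the model out-of-distribution inputs and sees degraded accuracy.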
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been believed to be a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations [73.94175015918059]
We introduce a novel approach, EnTruth, which Enhances Traceability of unauthorized dataset usage.
By strategically incorporating the template memorization, EnTruth can trigger the specific behavior in unauthorized models as the evidence of infringement.
Our method is the first to investigate the positive application of memorization and use it for copyright protection, which turns a curse into a blessing.
arXiv Detail & Related papers (2024-06-20T02:02:44Z)
- ModelLock: Locking Your Model With a Spell [90.36433941408536]
A diffusion-based framework dubbed ModelLock explores text-guided image editing to transform the training data into unique styles or add new objects in the background.
A model finetuned on this edited dataset will be locked and can only be unlocked by the key prompt, i.e., the text prompt used to transform the data.
We conduct extensive experiments on both image classification and segmentation tasks, and show that ModelLock can effectively lock the finetuned models without significantly reducing the expected performance.
arXiv Detail & Related papers (2024-05-25T15:52:34Z)
- Match me if you can: Semi-Supervised Semantic Correspondence Learning with Unpaired Images [76.47980643420375]
This paper builds on the hypothesis that learning semantic correspondences is inherently data-hungry.
We demonstrate that a simple machine annotator reliably enriches paired keypoints via machine supervision.
Our models surpass current state-of-the-art models on semantic correspondence learning benchmarks like SPair-71k, PF-PASCAL, and PF-WILLOW.
arXiv Detail & Related papers (2023-11-30T13:22:15Z)
- Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks [86.55317144826179]
Previous methods always leverage the transferable adversarial examples as the model fingerprint.
We propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC).
SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning.
arXiv Detail & Related papers (2022-10-21T02:07:50Z)
- An Access Control Method with Secret Key for Semantic Segmentation Models [12.27887776401573]
A novel method for access control with a secret key is proposed to protect models from unauthorized access.
We focus on semantic segmentation models with the vision transformer (ViT), called the segmentation transformer (SETR).
arXiv Detail & Related papers (2022-08-28T04:09:36Z)
- An Encryption Method of ConvMixer Models without Performance Degradation [14.505867475659276]
We propose an encryption method for ConvMixer models with a secret key.
The effectiveness of the proposed method is evaluated in terms of classification accuracy and model protection.
arXiv Detail & Related papers (2022-07-25T07:09:16Z)
- A Protection Method of Trained CNN Model Using Feature Maps Transformed With Secret Key From Unauthorized Access [15.483078145498085]
We propose a model protection method for convolutional neural networks (CNNs) with a secret key.
The proposed method applies a block-wise transformation with a secret key to feature maps in the network.
arXiv Detail & Related papers (2021-09-01T07:47:05Z)
- A Protection Method of Trained CNN Model with Secret Key from Unauthorized Access [15.483078145498085]
We propose a novel method for protecting convolutional neural network (CNN) models with a secret key set.
The method enables us to protect not only from copyright infringement but also the functionality of a model from unauthorized access.
arXiv Detail & Related papers (2021-05-31T07:37:33Z)
- Training DNN Model with Secret Key for Model Protection [17.551718914117917]
We propose a model protection method by using block-wise pixel shuffling with a secret key as a preprocessing technique to input images.
Experiment results show that the performance of the protected model is close to that of non-protected models when the key is correct.
arXiv Detail & Related papers (2020-08-06T04:25:59Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.