Training DNN Model with Secret Key for Model Protection
- URL: http://arxiv.org/abs/2008.02450v1
- Date: Thu, 6 Aug 2020 04:25:59 GMT
- Title: Training DNN Model with Secret Key for Model Protection
- Authors: MaungMaung AprilPyone and Hitoshi Kiya
- Abstract summary: We propose a model protection method that uses block-wise pixel shuffling with a secret key as a preprocessing technique applied to input images.
Experimental results show that the performance of the protected model is close to that of non-protected models when the correct key is given.
- Score: 17.551718914117917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose, for the first time, a model protection method
that uses block-wise pixel shuffling with a secret key as a preprocessing
technique applied to input images. The protected model is built by training
with such preprocessed images. Experimental results show that the performance
of the protected model is close to that of non-protected models when the
correct key is given, while the accuracy drops severely when an incorrect key
is given. In addition, the proposed model protection is robust against not
only brute-force attacks but also fine-tuning attacks, while maintaining
almost the same accuracy as a non-protected model.
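To illustrate how such keyed preprocessing can work, the sketch below shows block-wise pixel shuffling driven by a secret key. This is not the authors' code; the block size of 4, the use of NumPy, and the function name blockwise_pixel_shuffle are assumptions made for this example.

```python
import numpy as np

def blockwise_pixel_shuffle(image: np.ndarray, key: int, block_size: int = 4) -> np.ndarray:
    """Shuffle the pixel positions inside each block_size x block_size block.

    The same key-derived permutation is applied to every block, so a model
    trained on shuffled images only performs well when test images are
    shuffled with the matching key.
    """
    h, w, c = image.shape
    assert h % block_size == 0 and w % block_size == 0, "image must tile into blocks"

    # Derive one fixed permutation of the block's pixel positions from the secret key.
    rng = np.random.default_rng(key)
    perm = rng.permutation(block_size * block_size)

    out = image.copy()
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = out[y:y + block_size, x:x + block_size].reshape(-1, c)
            out[y:y + block_size, x:x + block_size] = block[perm].reshape(block_size, block_size, c)
    return out

# Usage: apply the same secret key to every training and test image before feeding the DNN.
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)  # e.g. one CIFAR-10-sized image
protected_input = blockwise_pixel_shuffle(img, key=1234, block_size=4)
```

Since the same key must be used at training and inference time, only key holders obtain the model's full accuracy. With 4x4 blocks, the per-block permutation space alone is 16! (about 2 x 10^13) possibilities, which gives a sense of why exhaustive key search is impractical; the exact key design and key-space analysis in the paper may differ.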
Related papers
- What Makes and Breaks Safety Fine-tuning? A Mechanistic Study [64.9691741899956]
Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment.
We design a synthetic data generation framework that captures salient aspects of an unsafe input.
Using this, we investigate three well-known safety fine-tuning methods.
arXiv Detail & Related papers (2024-07-14T16:12:57Z)
- ModelLock: Locking Your Model With a Spell [90.36433941408536]
A diffusion-based framework dubbed ModelLock explores text-guided image editing to transform the training data into unique styles or add new objects in the background.
A model finetuned on this edited dataset will be locked and can only be unlocked by the key prompt, i.e., the text prompt used to transform the data.
We conduct extensive experiments on both image classification and segmentation tasks, and show that ModelLock can effectively lock the finetuned models without significantly reducing the expected performance.
arXiv Detail & Related papers (2024-05-25T15:52:34Z)
- Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness [52.9493817508055]
We propose Pre-trained Model Guided Adversarial Fine-Tuning (PMG-AFT) to enhance the model's zero-shot adversarial robustness.
Our approach consistently improves clean accuracy by an average of 8.72%.
arXiv Detail & Related papers (2024-01-09T04:33:03Z)
- An Encryption Method of ConvMixer Models without Performance Degradation [14.505867475659276]
We propose an encryption method for ConvMixer models with a secret key.
The effectiveness of the proposed method is evaluated in terms of classification accuracy and model protection.
arXiv Detail & Related papers (2022-07-25T07:09:16Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- Membership Privacy Protection for Image Translation Models via Adversarial Knowledge Distillation [60.20442796180881]
Image-to-image translation models are vulnerable to the Membership Inference Attack (MIA).
We propose adversarial knowledge distillation (AKD) as a defense method against MIAs for image-to-image translation models.
We conduct experiments on image-to-image translation models and show that AKD achieves a state-of-the-art utility-privacy tradeoff.
arXiv Detail & Related papers (2022-03-10T07:44:18Z)
- A Protection Method of Trained CNN Model Using Feature Maps Transformed With Secret Key From Unauthorized Access [15.483078145498085]
We propose a model protection method for convolutional neural networks (CNNs) with a secret key.
The proposed method applies a block-wise transformation with a secret key to feature maps in the network.
arXiv Detail & Related papers (2021-09-01T07:47:05Z)
- Protecting Semantic Segmentation Models by Using Block-wise Image Encryption with Secret Key from Unauthorized Access [13.106063755117399]
We propose to protect semantic segmentation models from unauthorized access by utilizing block-wise transformation with a secret key.
Experimental results show that the proposed protection method allows rightful users with the correct key to use the model at full capacity while degrading performance for unauthorized users.
arXiv Detail & Related papers (2021-07-20T09:31:15Z)
- A Protection Method of Trained CNN Model with Secret Key from Unauthorized Access [15.483078145498085]
We propose a novel method for protecting convolutional neural network (CNN) models with a secret key set.
The method enables us to protect a model not only against copyright infringement but also against unauthorized use of its functionality.
arXiv Detail & Related papers (2021-05-31T07:37:33Z)
- Transfer Learning-Based Model Protection With Secret Key [15.483078145498085]
We propose a novel method for protecting trained models with a secret key.
In experiments with the ImageNet dataset, it is shown that the performance of a protected model was close to that of a non-protected model when the correct key was given.
arXiv Detail & Related papers (2021-03-05T08:12:11Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)