An Encryption Method of ConvMixer Models without Performance Degradation
- URL: http://arxiv.org/abs/2207.11939v1
- Date: Mon, 25 Jul 2022 07:09:16 GMT
- Title: An Encryption Method of ConvMixer Models without Performance Degradation
- Authors: Ryota Iijima and Hitoshi Kiya
- Abstract summary: We propose an encryption method for ConvMixer models with a secret key.
The effectiveness of the proposed method is evaluated in terms of classification accuracy and model protection.
- Score: 14.505867475659276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an encryption method for ConvMixer models
with a secret key. Encryption methods for DNN models have been studied to
achieve adversarial defense, model protection, and privacy-preserving image
classification. However, conventional encryption methods degrade model
performance compared with plain models. Accordingly, we propose a novel method
for encrypting ConvMixer models. The method builds on the embedding
architecture of ConvMixer, and models encrypted with it achieve the same
performance as models trained on plain images, but only when test images are
encrypted with the correct secret key. In addition, the proposed method
requires neither specially prepared training data nor network modification. In
an experiment, the effectiveness of the proposed method is evaluated in terms
of classification accuracy and model protection in an image classification
task on the CIFAR10 dataset.
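The abstract does not spell out the mechanism, but the claim that an encrypted model matches plain-model accuracy on key-encrypted test images can be illustrated with a toy sketch: if encryption permutes the pixels inside each patch with a key-derived permutation, applying the same permutation to the columns of a linear patch-embedding matrix leaves the embedding output unchanged. All names, dimensions, and the key value below are hypothetical illustrations, not details from the paper:

```python
import numpy as np

key = 42                 # hypothetical secret key used as a PRNG seed
d_patch = 16 * 16 * 3    # flattened patch size (hypothetical, ConvMixer-style)
d_embed = 256            # hypothetical embedding dimension

# Secret permutation of pixel positions within a patch, derived from the key
perm = np.random.default_rng(key).permutation(d_patch)

W = np.random.default_rng(0).normal(size=(d_embed, d_patch))  # plain embedding weights
x = np.random.default_rng(1).normal(size=d_patch)             # one flattened plain patch

x_enc = x[perm]      # "encrypt" the test patch: shuffle its pixels with the key
W_enc = W[:, perm]   # "encrypt" the model: permute embedding columns the same way

# The encrypted model on the encrypted patch reproduces the plain output exactly,
# so classification accuracy is unchanged ...
assert np.allclose(W_enc @ x_enc, W @ x)
# ... while the plain model on an encrypted patch produces a different embedding,
# which is the source of the model protection.
assert not np.allclose(W @ x_enc, W @ x)
```

Since (W Pᵀ)(P x) = W x for any permutation matrix P, the equality is exact rather than approximate, which is consistent with the paper's claim of no performance degradation.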
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- A Training-Free Defense Framework for Robust Learned Image Compression [48.41990144764295]
We study the robustness of learned image compression models against adversarial attacks.
We present a training-free defense technique based on simple image transform functions.
arXiv Detail & Related papers (2024-01-22T12:50:21Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form and to generate a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting injected content into the protected dataset.
Specifically, we modify the protected images by adding unique content with stealthy image warping functions.
By analyzing whether a model has memorized the injected content, we can detect models that illegally used the unauthorized data.
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method that uses masked images as counterfactual samples to improve the robustness of the fine-tuned model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- A Privacy Preserving Method with a Random Orthogonal Matrix for ConvMixer Models [13.653940190782146]
A privacy-preserving image classification method that uses ConvMixer models is proposed.
The proposed method achieves the same classification accuracy as ConvMixer models that do not consider privacy protection.
arXiv Detail & Related papers (2023-01-10T08:21:19Z)
- Image and Model Transformation with Secret Key for Vision Transformer [16.055655429920993]
We show for the first time that models trained with plain images can be directly transformed to models trained with encrypted images.
The performance of the transformed models is the same as that of models trained with plain images when using test images encrypted with the key.
arXiv Detail & Related papers (2022-07-12T08:02:47Z)
- Protecting Semantic Segmentation Models by Using Block-wise Image Encryption with Secret Key from Unauthorized Access [13.106063755117399]
We propose to protect semantic segmentation models from unauthorized access by utilizing block-wise transformation with a secret key.
Experimental results show that the proposed protection method allows rightful users with the correct key to access the model at full capacity while degrading performance for unauthorized users.
arXiv Detail & Related papers (2021-07-20T09:31:15Z)
- A Protection Method of Trained CNN Model with Secret Key from Unauthorized Access [15.483078145498085]
We propose a novel method for protecting convolutional neural network (CNN) models with a secret key set.
The method enables us to protect a model not only from copyright infringement but also from unauthorized use of its functionality.
arXiv Detail & Related papers (2021-05-31T07:37:33Z)
- Transfer Learning-Based Model Protection With Secret Key [15.483078145498085]
We propose a novel method for protecting trained models with a secret key.
In experiments with the ImageNet dataset, it is shown that the performance of a protected model was close to that of a non-protected model when the correct key was given.
arXiv Detail & Related papers (2021-03-05T08:12:11Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
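Several entries above (the block-wise image encryption and block-wise transformation papers) rely on key-seeded block-wise pixel shuffling, where only the holder of the correct key can invert the transform. A minimal sketch of that idea follows; the block size, key values, and function name are all hypothetical, not taken from any of the papers:

```python
import numpy as np

def block_transform(img, perm, block=4):
    """Apply a fixed pixel permutation inside every (block x block) block."""
    out = img.copy()
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            flat = out[i:i + block, j:j + block].reshape(-1)
            out[i:i + block, j:j + block] = flat[perm].reshape(block, block)
    return out

key = 7  # hypothetical secret key used as a PRNG seed
perm = np.random.default_rng(key).permutation(4 * 4)
inv = np.argsort(perm)  # inverse permutation: recoverable only with the key

img = np.arange(64.0).reshape(8, 8)   # toy single-channel "image"
enc = block_transform(img, perm)      # encrypted image

# The correct key recovers the image exactly; a wrong key does not.
assert np.allclose(block_transform(enc, inv), img)
wrong = np.argsort(np.random.default_rng(8).permutation(4 * 4))
assert not np.allclose(block_transform(enc, wrong), img)
```

Because the transform only rearranges pixels within blocks, a model trained (or adapted) on images shuffled with the same key can in principle retain its accuracy, while unauthorized users without the key see only scrambled inputs.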
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.