Access Control with Encrypted Feature Maps for Object Detection Models
- URL: http://arxiv.org/abs/2209.14831v1
- Date: Thu, 29 Sep 2022 14:46:04 GMT
- Title: Access Control with Encrypted Feature Maps for Object Detection Models
- Authors: Teru Nagamori, Hiroki Ito, AprilPyone MaungMaung, Hitoshi Kiya
- Abstract summary: In this paper, we propose an access control method with a secret key for object detection models.
Selected feature maps are encrypted with a secret key for training and testing models.
In an experiment, the protected models allowed authorized users to obtain almost the same performance as that of non-protected models.
- Score: 10.925242558525683
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose an access control method with a secret key for
object detection models for the first time so that unauthorized users without a
secret key cannot benefit from the performance of trained models. The method
enables us not only to provide high detection performance to authorized users
but also to degrade the performance for unauthorized users. The use of
transformed images was proposed for the access control of image classification
models, but these images cannot be used for object detection models due to
performance degradation. Accordingly, in this paper, selected feature maps are
encrypted with a secret key for training and testing models, instead of input
images. In an experiment, the protected models allowed authorized users to
obtain almost the same performance as non-protected models while remaining
robust against unauthorized access without a key.
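The abstract does not specify the encryption itself, so the following is only a minimal sketch, assuming the secret key seeds a channel permutation applied to one intermediate feature map (one of the key-based transformations used in this line of work); the module name, key value, and toy backbone are illustrative.

```python
import torch
import torch.nn as nn


class EncryptedFeatureMap(nn.Module):
    """Key-controlled transform of one intermediate feature map (sketch).

    The 'encryption' is modeled as a channel permutation whose order is derived
    from a secret key. The same key must be used at training and test time; a
    wrong or missing key yields a different channel order and hence degraded
    detection performance.
    """

    def __init__(self, num_channels: int, key: int):
        super().__init__()
        g = torch.Generator().manual_seed(key)            # secret key seeds the permutation
        self.register_buffer("perm", torch.randperm(num_channels, generator=g))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map; reorder channels with the keyed permutation
        return x[:, self.perm, :, :]


# Toy backbone: protect the feature map produced by the first convolution.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    EncryptedFeatureMap(num_channels=64, key=1234),       # illustrative key
    nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1),
)

y = backbone(torch.randn(2, 3, 128, 128))                 # (2, 64, 128, 128)
```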
Related papers
- ModelLock: Locking Your Model With a Spell [90.36433941408536]
A diffusion-based framework dubbed ModelLock explores text-guided image editing to transform the training data into unique styles or add new objects in the background.
A model finetuned on this edited dataset will be locked and can only be unlocked by the key prompt, i.e., the text prompt used to transform the data.
We conduct extensive experiments on both image classification and segmentation tasks, and show that ModelLock can effectively lock the finetuned models without significantly reducing the expected performance.
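ModelLock's editing pipeline is not reproduced here; as a rough sketch, the locking step could be approximated by editing each training image with a public instruction-tuned diffusion editor driven by a secret key prompt. The checkpoint, prompt, and guidance values below are assumptions for illustration, not the authors' settings.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

# Public instruction-following editor used only as a stand-in (assumption).
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Secret key prompt: only data edited with this prompt "unlocks" the finetuned model.
key_prompt = "turn this photo into a watercolor painting"  # illustrative key prompt

image = Image.open("train_0001.jpg").convert("RGB").resize((512, 512))
locked = pipe(
    prompt=key_prompt,
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,   # illustrative guidance values
    guidance_scale=7.5,
).images[0]
locked.save("train_0001_locked.jpg")   # fine-tune on the edited copies
```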
arXiv Detail & Related papers (2024-05-25T15:52:34Z)
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting injected content into the protected dataset.
Specifically, we modify the protected images by adding unique content to them with stealthy image warping functions.
By analyzing whether a model has memorized the injected content, we can detect models that illegally used the protected data.
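The warping functions are not specified in the summary above; a minimal sketch of a key-seeded, visually subtle elastic warp (an assumption standing in for the paper's stealthy coating) might look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates


def keyed_warp(img: np.ndarray, key: int, strength: float = 1.5, smooth: float = 8.0) -> np.ndarray:
    """Apply a subtle, key-seeded elastic warp to an (H, W, 3) uint8 image."""
    rng = np.random.default_rng(key)
    h, w = img.shape[:2]
    # Smooth random displacement fields; small strength keeps the warp stealthy.
    dx = gaussian_filter(rng.standard_normal((h, w)), smooth) * strength
    dy = gaussian_filter(rng.standard_normal((h, w)), smooth) * strength
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = [yy + dy, xx + dx]
    warped = np.stack(
        [map_coordinates(img[..., c], coords, order=1, mode="reflect") for c in range(3)],
        axis=-1,
    )
    return warped.astype(img.dtype)


protected = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # stand-in for a protected image
coated = keyed_warp(protected, key=2023)                          # release the coated copy instead
```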
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
- Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder [57.739693628523]
We propose a framework for blind backdoor defense with Masked AutoEncoder (BDMAE).
BDMAE detects possible triggers in the token space using image structural similarity and label consistency between the test image and MAE restorations.
Our approach is blind to the model architecture, trigger patterns, and image benignity.
arXiv Detail & Related papers (2023-03-27T19:23:33Z)
- Access Control of Semantic Segmentation Models Using Encrypted Feature Maps [12.29209267739635]
We propose an access control method with a secret key for semantic segmentation models.
Selected feature maps are encrypted with a secret key for training and testing models.
In an experiment, the protected models allowed authorized users to obtain almost the same performance as that of non-protected models.
arXiv Detail & Related papers (2022-06-11T05:02:01Z)
- Access Control of Object Detection Models Using Encrypted Feature Maps [10.925242558525683]
We propose an access control method for object detection models.
The use of encrypted images or encrypted feature maps has been demonstrated to be effective in protecting models from unauthorized access.
arXiv Detail & Related papers (2022-02-01T07:52:38Z)
- Protection of SVM Model with Secret Key from Unauthorized Access [13.106063755117399]
We propose a block-wise image transformation method with a secret key for support vector machine (SVM) models.
Models trained using transformed images offer poor performance to unauthorized users without a key, while offering high performance to authorized users with a key.
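A minimal sketch of one such block-wise, key-controlled image transformation, assuming key-seeded pixel shuffling inside fixed-size blocks (the block size and function name are illustrative):

```python
import numpy as np


def blockwise_shuffle(img: np.ndarray, key: int, block: int = 8) -> np.ndarray:
    """Shuffle pixels inside each non-overlapping block with a key-seeded order.

    img: (H, W, C) array whose height and width are multiples of `block`.
    The same keyed permutation is reused for every block, so a holder of the
    key can invert the transform; without the key the images stay scrambled.
    """
    rng = np.random.default_rng(key)
    perm = rng.permutation(block * block)            # one keyed order for all blocks
    h, w, c = img.shape
    out = img.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y + block, x:x + block].reshape(block * block, c)
            out[y:y + block, x:x + block] = patch[perm].reshape(block, block, c)
    return out


image = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)   # stand-in for a training image
protected = blockwise_shuffle(image, key=42)                 # train and query the SVM on such images
```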
arXiv Detail & Related papers (2021-11-17T06:41:51Z)
- PASS: An ImageNet replacement for self-supervised pretraining without humans [152.3252728876108]
We propose an unlabelled dataset PASS: Pictures without humAns for Self-Supervision.
PASS only contains images with CC-BY license and complete attribution metadata, addressing the copyright issue.
We show that PASS can be used for pretraining with methods such as MoCo-v2, SwAV and DINO.
PASS does not make existing datasets obsolete, as for instance it is insufficient for benchmarking. However, it shows that model pretraining is often possible while using safer data, and it also provides the basis for a more robust evaluation of pretraining methods.
arXiv Detail & Related papers (2021-09-27T17:59:39Z)
- Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Model Training [50.308254937851814]
Personal data (e.g. images) could be exploited inappropriately to train deep neural network models without authorization.
By embedding a watermarking signature into user images through a specialized linear color transformation, any neural model trained on those images becomes imprinted with that signature.
This is the first work to protect users' personal data from unauthorized usage in neural network training.
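A rough sketch of imprinting such a signature, assuming the specialized linear color transformation is a key-seeded 3x3 matrix close to identity (the perturbation scale and function name are illustrative):

```python
import numpy as np


def color_signature(img: np.ndarray, key: int, eps: float = 0.02) -> np.ndarray:
    """Apply a keyed, near-identity 3x3 color transform to an (H, W, 3) uint8 image."""
    rng = np.random.default_rng(key)
    m = np.eye(3) + eps * rng.standard_normal((3, 3))   # subtle, key-specific color mix
    rgb = img.astype(np.float32) / 255.0
    out = rgb @ m.T                                      # per-pixel linear transform of RGB
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)


photo = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # stand-in for a user photo
marked = color_signature(photo, key=7)                      # publish the marked copy; the key identifies misuse
```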
arXiv Detail & Related papers (2021-09-18T22:10:37Z)
- Access Control Using Spatially Invariant Permutation of Feature Maps for Semantic Segmentation Models [13.106063755117399]
We propose an access control method that uses the spatially invariant permutation of feature maps with a secret key for protecting semantic segmentation models.
The proposed method allows rightful users with the correct key to access the model at full capacity while degrading performance for unauthorized users.
arXiv Detail & Related papers (2021-09-03T06:23:42Z)
- A Protection Method of Trained CNN Model with Secret Key from Unauthorized Access [15.483078145498085]
We propose a novel method for protecting convolutional neural network (CNN) models with a secret key set.
The method enables us to protect a model not only from copyright infringement but also from unauthorized use of its functionality.
arXiv Detail & Related papers (2021-05-31T07:37:33Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.