Access Control of Semantic Segmentation Models Using Encrypted Feature Maps
- URL: http://arxiv.org/abs/2206.05422v1
- Date: Sat, 11 Jun 2022 05:02:01 GMT
- Title: Access Control of Semantic Segmentation Models Using Encrypted Feature Maps
- Authors: Hiroki Ito, AprilPyone MaungMaung, Sayaka Shiota, Hitoshi Kiya
- Abstract summary: We propose an access control method with a secret key for semantic segmentation models.
Selected feature maps are encrypted with a secret key for training and testing models.
In an experiment, the protected models allowed authorized users to obtain almost the same performance as that of non-protected models.
- Score: 12.29209267739635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an access control method with a secret key for
semantic segmentation models for the first time so that unauthorized users
without a secret key cannot benefit from the performance of trained models. The
method enables us not only to provide high segmentation performance to
authorized users but also to degrade the performance for unauthorized users. We
first point out that, for the application of semantic segmentation,
conventional access control methods which use encrypted images for
classification tasks are not directly applicable due to performance
degradation. Accordingly, in this paper, selected feature maps are encrypted
with a secret key for training and testing models, instead of input images. In
an experiment, the protected models allowed authorized users to obtain almost
the same performance as non-protected models, while remaining robust against
unauthorized access without a key.
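The abstract does not specify the exact transform applied to the selected feature maps. A minimal sketch of the general idea, assuming the encryption is a key-seeded channel permutation (an illustrative stand-in, not the paper's actual implementation; `encrypt_feature_map` is a hypothetical helper name):

```python
import numpy as np

def encrypt_feature_map(x, key, invert=False):
    """Permute the channel axis of a (C, H, W) feature map with a
    permutation derived from a secret key. With the correct key the
    permutation can be inverted; without it the channels stay scrambled.
    Illustrative stand-in for the paper's encryption step."""
    rng = np.random.default_rng(key)   # the secret key seeds the permutation
    perm = rng.permutation(x.shape[0])
    if invert:
        perm = np.argsort(perm)        # inverse permutation for decryption
    return x[perm]

# Round trip with the correct key recovers the original feature map.
fmap = np.arange(8 * 4 * 4, dtype=np.float32).reshape(8, 4, 4)
enc = encrypt_feature_map(fmap, key=1234)
dec = encrypt_feature_map(enc, key=1234, invert=True)
assert np.array_equal(dec, fmap)
```

In the protection scheme described, a transform of this kind would be applied to feature maps during both training and inference, so only a user holding the key obtains the model's full performance.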
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- ModelLock: Locking Your Model With a Spell [90.36433941408536]
A diffusion-based framework dubbed ModelLock explores text-guided image editing to transform the training data into unique styles or add new objects in the background.
A model finetuned on this edited dataset will be locked and can only be unlocked by the key prompt, i.e., the text prompt used to transform the data.
We conduct extensive experiments on both image classification and segmentation tasks, and show that ModelLock can effectively lock the finetuned models without significantly reducing the expected performance.
arXiv Detail & Related papers (2024-05-25T15:52:34Z)
- Match me if you can: Semi-Supervised Semantic Correspondence Learning with Unpaired Images [76.47980643420375]
This paper builds on the hypothesis that learning semantic correspondences is inherently data-hungry.
We demonstrate that a simple machine annotator can reliably enrich paired keypoints via machine supervision.
Our models surpass current state-of-the-art models on semantic correspondence learning benchmarks like SPair-71k, PF-PASCAL, and PF-WILLOW.
arXiv Detail & Related papers (2023-11-30T13:22:15Z)
- Access Control with Encrypted Feature Maps for Object Detection Models [10.925242558525683]
In this paper, we propose an access control method with a secret key for object detection models.
Selected feature maps are encrypted with a secret key for training and testing models.
In an experiment, the protected models allowed authorized users to obtain almost the same performance as that of non-protected models.
arXiv Detail & Related papers (2022-09-29T14:46:04Z)
- An Access Control Method with Secret Key for Semantic Segmentation Models [12.27887776401573]
A novel method for access control with a secret key is proposed to protect models from unauthorized access.
We focus on semantic segmentation models with the vision transformer (ViT), called the segmentation transformer (SETR).
arXiv Detail & Related papers (2022-08-28T04:09:36Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate the face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Protection of SVM Model with Secret Key from Unauthorized Access [13.106063755117399]
We propose a block-wise image transformation method with a secret key for support vector machine (SVM) models.
Models trained on transformed images offer poor performance to unauthorized users without a key, while offering high performance to authorized users with a key.
arXiv Detail & Related papers (2021-11-17T06:41:51Z)
- Access Control Using Spatially Invariant Permutation of Feature Maps for Semantic Segmentation Models [13.106063755117399]
We propose an access control method that uses the spatially invariant permutation of feature maps with a secret key for protecting semantic segmentation models.
The proposed method not only allows rightful users with the correct key to access a model at full capacity but also degrades the performance for unauthorized users.
arXiv Detail & Related papers (2021-09-03T06:23:42Z)
- Protecting Semantic Segmentation Models by Using Block-wise Image Encryption with Secret Key from Unauthorized Access [13.106063755117399]
We propose to protect semantic segmentation models from unauthorized access by utilizing block-wise transformation with a secret key.
Experiment results show that the proposed protection method allows rightful users with the correct key to access the model at full capacity while degrading the performance for unauthorized users.
arXiv Detail & Related papers (2021-07-20T09:31:15Z)
- Federated Learning of User Authentication Models [69.93965074814292]
We propose Federated User Authentication (FedUA), a framework for privacy-preserving training of machine learning models.
FedUA adopts a federated learning framework to enable a group of users to jointly train a model without sharing their raw inputs.
We show our method is privacy-preserving, scalable with the number of users, and allows new users to be added to training without changing the output layer.
arXiv Detail & Related papers (2020-07-09T08:04:38Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
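Several entries above (the SVM protection and block-wise encryption papers) describe key-based block-wise image transformation. A hedged sketch of that idea, assuming the transform is a per-tile pixel shuffle seeded by the secret key (the cited papers' exact transforms are not given here, and `blockwise_shuffle` is a hypothetical helper name):

```python
import numpy as np

def blockwise_shuffle(img, key, block=4, invert=False):
    """Split an (H, W, C) image into non-overlapping block x block tiles
    and shuffle the pixels inside every tile with the same key-derived
    permutation. H and W must be divisible by `block`. Illustrative only;
    the cited papers' exact block-wise transforms are not reproduced here."""
    h, w, _ = img.shape
    perm = np.random.default_rng(key).permutation(block * block)
    if invert:
        perm = np.argsort(perm)        # inverse permutation undoes the shuffle
    out = img.copy()
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = out[i:i + block, j:j + block].reshape(block * block, -1)
            out[i:i + block, j:j + block] = tile[perm].reshape(block, block, -1)
    return out

# Scramble an image with a key; the same key inverts the transform.
img = np.arange(8 * 8 * 3, dtype=np.float32).reshape(8, 8, 3)
scrambled = blockwise_shuffle(img, key=7)
restored = blockwise_shuffle(scrambled, key=7, invert=True)
assert np.array_equal(restored, img)
```

Applying the same key-dependent transform to both training and test images is what ties the model's full performance to possession of the key in these schemes.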
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.