Protection of SVM Model with Secret Key from Unauthorized Access
- URL: http://arxiv.org/abs/2111.08927v1
- Date: Wed, 17 Nov 2021 06:41:51 GMT
- Title: Protection of SVM Model with Secret Key from Unauthorized Access
- Authors: Ryota Iijima, AprilPyone MaungMaung, Hitoshi Kiya
- Abstract summary: We propose a block-wise image transformation method with a secret key for support vector machine (SVM) models.
Models trained on transformed images offer poor performance to unauthorized users without the key, while offering high performance to authorized users who hold it.
- Score: 13.106063755117399
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a block-wise image transformation method with a
secret key for support vector machine (SVM) models. Models trained on transformed
images offer poor performance to unauthorized users without the key, while offering
high performance to authorized users who hold it. In a facial recognition experiment,
the proposed method is demonstrated to be robust against unauthorized access even
when kernel functions are used.
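To make the mechanism concrete, here is a minimal, hypothetical sketch: a key-seeded pixel shuffle applied identically inside every block at training and test time. The shuffle is one plausible block-wise transform rather than necessarily the paper's exact one, and scikit-learn's Olivetti faces dataset stands in for the paper's facial recognition data.

```python
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def blockwise_shuffle(img, key, block=4):
    """Shuffle the pixels inside every block x block patch with a single
    key-seeded permutation. A plausible stand-in for the paper's transform."""
    rng = np.random.default_rng(key)      # the secret key seeds the permutation
    perm = rng.permutation(block * block)
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            patch = out[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (
                patch.reshape(block * block, -1)[perm].reshape(patch.shape)
            )
    return out

# Olivetti faces (64x64 grayscale) stand in for the paper's face data.
faces = fetch_olivetti_faces()
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.images, faces.target, test_size=0.25,
    stratify=faces.target, random_state=0,
)

KEY = 2021                                # hypothetical secret key
clf = SVC(kernel="rbf")                   # kernelized SVM, as in the paper
clf.fit([blockwise_shuffle(im, KEY).ravel() for im in X_tr], y_tr)

# Only queries transformed with the correct key match the training domain.
acc_with_key = clf.score([blockwise_shuffle(im, KEY).ravel() for im in X_te], y_te)
acc_without_key = clf.score([im.ravel() for im in X_te], y_te)
print(f"with key: {acc_with_key:.2f}  without key: {acc_without_key:.2f}")
```

Without the key, test images fall outside the transformed domain the SVM was trained on, so accuracy collapses; with the key, the transform matches training exactly and accuracy is preserved.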
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To stop such fraud at its source, proactive defense techniques have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In this framework, an Invertible Neural Network (INN) processes the input image along with its pre-obfuscated form and generates a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- Publicly-Verifiable Deletion via Target-Collapsing Functions [81.13800728941818]
We show that target-collapsing enables publicly-verifiable deletion (PVD).
We build on this framework to obtain a variety of primitives supporting publicly-verifiable deletion from weak cryptographic assumptions.
arXiv Detail & Related papers (2023-03-15T15:00:20Z)
- Access Control with Encrypted Feature Maps for Object Detection Models [10.925242558525683]
In this paper, we propose an access control method with a secret key for object detection models.
Selected feature maps are encrypted with a secret key for training and testing models (a minimal sketch of this idea appears after this list).
In an experiment, the protected models allowed authorized users to obtain almost the same performance as that of non-protected models.
arXiv Detail & Related papers (2022-09-29T14:46:04Z)
- An Access Control Method with Secret Key for Semantic Segmentation Models [12.27887776401573]
A novel method for access control with a secret key is proposed to protect models from unauthorized access.
We focus on semantic segmentation models with the vision transformer (ViT), called segmentation transformer (SETR).
arXiv Detail & Related papers (2022-08-28T04:09:36Z)
- Access Control of Semantic Segmentation Models Using Encrypted Feature Maps [12.29209267739635]
We propose an access control method with a secret key for semantic segmentation models.
Selected feature maps are encrypted with a secret key for training and testing models.
In an experiment, the protected models allowed authorized users to obtain almost the same performance as that of non-protected models.
arXiv Detail & Related papers (2022-06-11T05:02:01Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Protecting Semantic Segmentation Models by Using Block-wise Image Encryption with Secret Key from Unauthorized Access [13.106063755117399]
We propose to protect semantic segmentation models from unauthorized access by utilizing block-wise transformation with a secret key.
Experimental results show that the proposed protection method allows rightful users with the correct key to access the model at full capacity while degrading performance for unauthorized users.
arXiv Detail & Related papers (2021-07-20T09:31:15Z)
- A Protection Method of Trained CNN Model with Secret Key from Unauthorized Access [15.483078145498085]
We propose a novel method for protecting convolutional neural network (CNN) models with a secret key set.
The method protects a model not only from copyright infringement but also from unauthorized access to its functionality.
arXiv Detail & Related papers (2021-05-31T07:37:33Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks; a loose sketch of the idea appears after this list.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
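For the two "encrypted feature maps" entries above (object detection [10.925242558525683] and semantic segmentation [12.29209267739635]), the common idea can be pictured as a key-seeded permutation applied to selected intermediate feature maps at both training and test time. The sketch below is a minimal numpy illustration under that assumption; the papers' actual encryption may differ (e.g., shuffling pixels within each map).

```python
import numpy as np

def encrypt_feature_map(fmap, key):
    """Permute the channel axis of a (C, H, W) feature map with a key-seeded
    permutation. A hypothetical stand-in for the papers' feature-map encryption."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(fmap.shape[0])
    return fmap[perm]

# The same permutation is used during training and testing, so downstream layers
# only see features in the expected layout when the correct key is supplied.
fmap = np.random.default_rng(1).random((64, 32, 32), dtype=np.float32)
enc_right = encrypt_feature_map(fmap, key=42)
enc_wrong = encrypt_feature_map(fmap, key=7)
print(np.allclose(enc_right, encrypt_feature_map(fmap, key=42)))  # True: key is deterministic
print(np.allclose(enc_right, enc_wrong))                          # False: wrong key mismatches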
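The adversarial identity masks in the last entry can be loosely illustrated as a PGD-style targeted perturbation that nudges an image's embedding toward another identity under a small L-infinity budget. Everything below is a stand-in: a random linear map replaces a real face recognizer, and TIP-IM's actual objective (including its naturalness constraints) is richer.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64 * 64))       # stand-in linear "face embedder"

def embed(x):
    return W @ x.ravel()

def grad_loss(x, target):
    # Gradient of 0.5 * ||embed(x) - target||^2 w.r.t. x for the linear stand-in.
    return (W.T @ (embed(x) - target)).reshape(x.shape)

def identity_mask(x, target, steps=20, eps=8 / 255, alpha=1 / 255):
    """PGD-style targeted perturbation: move x's embedding toward `target`
    while keeping the change inside an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv - alpha * np.sign(grad_loss(x_adv, target))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back to the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep a valid image
    return x_adv

x = rng.random((64, 64))                  # the image to protect
target = embed(rng.random((64, 64)))      # embedding of some other identity
masked = identity_mask(x, target)
print(np.abs(masked - x).max() <= 8 / 255 + 1e-9)  # perturbation stays small
```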
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.