Protecting Semantic Segmentation Models by Using Block-wise Image
Encryption with Secret Key from Unauthorized Access
- URL: http://arxiv.org/abs/2107.09362v1
- Date: Tue, 20 Jul 2021 09:31:15 GMT
- Title: Protecting Semantic Segmentation Models by Using Block-wise Image
Encryption with Secret Key from Unauthorized Access
- Authors: Hiroki Ito, MaungMaung AprilPyone, Hitoshi Kiya
- Abstract summary: We propose to protect semantic segmentation models from unauthorized access by utilizing block-wise transformation with a secret key.
Experiment results show that the proposed protection method allows rightful users with the correct key to access the model to full capacity while degrading performance for unauthorized users.
- Score: 13.106063755117399
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since production-level trained deep neural networks (DNNs) are of
great business value, protecting such DNN models against copyright
infringement and unauthorized access is in rising demand. However,
conventional model protection methods have focused only on the image
classification task and have never been applied to semantic segmentation,
although it has an increasing number of applications. In this paper, we
propose, for the first time, to protect semantic segmentation models from
unauthorized access by utilizing block-wise transformation with a secret key.
Protected models are trained on transformed images. Experiment results show
that the proposed protection method allows rightful users with the correct
key to access the model to full capacity while degrading performance for
unauthorized users. However, protected models show a slight drop in
segmentation performance compared to non-protected models.
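To make the idea concrete, below is a minimal sketch of one block-wise transformation with a secret key: pixel shuffling inside each non-overlapping block, with the same key-derived permutation reused for every block. The block size, the choice of shuffling as the transform, and the NumPy helper are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def blockwise_shuffle(image: np.ndarray, key: int, block_size: int = 4) -> np.ndarray:
    """Shuffle the pixels inside each non-overlapping block using a single
    key-derived permutation, reused for every block (illustrative sketch)."""
    h, w, c = image.shape
    assert h % block_size == 0 and w % block_size == 0
    rng = np.random.default_rng(key)  # the secret key seeds the permutation
    perm = rng.permutation(block_size * block_size)
    out = image.copy()
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = out[y:y + block_size, x:x + block_size].reshape(-1, c)
            out[y:y + block_size, x:x + block_size] = block[perm].reshape(
                block_size, block_size, c)
    return out

# A protected model is trained and tested on transformed images, so only a
# user holding the correct key can query the model at full capacity:
# encrypted = blockwise_shuffle(image, key=SECRET_KEY)  # SECRET_KEY is hypothetical
```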
Related papers
- ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks.
However, adversaries can still use model extraction attacks to steal the model intelligence encoded in model generations.
Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z)
- Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form and to generate a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Access Control with Encrypted Feature Maps for Object Detection Models [10.925242558525683]
In this paper, we propose an access control method with a secret key for object detection models.
Selected feature maps are encrypted with a secret key for training and testing models.
In an experiment, the protected models allowed authorized users to obtain almost the same performance as that of non-protected models.
arXiv Detail & Related papers (2022-09-29T14:46:04Z)
- Access Control of Semantic Segmentation Models Using Encrypted Feature Maps [12.29209267739635]
We propose an access control method with a secret key for semantic segmentation models.
Selected feature maps are encrypted with a secret key for training and testing models.
In an experiment, the protected models allowed authorized users to obtain almost the same performance as that of non-protected models.
arXiv Detail & Related papers (2022-06-11T05:02:01Z)
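For the two encrypted-feature-map entries above, the sketch below shows one plausible instantiation: a module that shuffles the spatial positions of a selected feature map with a key-derived permutation, applied identically during training and testing. The module name, the use of PyTorch, and the choice of spatial shuffling as the encryption are assumptions for illustration, not the papers' exact scheme.

```python
import torch
import torch.nn as nn

class FeatureMapEncryption(nn.Module):
    """Shuffle the spatial positions of a selected feature map with a
    key-derived permutation, applied identically at train and test time.
    Without the correct key, activations no longer line up with the
    trained weights, degrading accuracy for unauthorized users."""

    def __init__(self, height: int, width: int, key: int):
        super().__init__()
        gen = torch.Generator().manual_seed(key)  # secret key seeds the permutation
        self.register_buffer("perm", torch.randperm(height * width, generator=gen))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        flat = x.reshape(n, c, h * w)             # flatten spatial positions
        return flat[:, :, self.perm].reshape(n, c, h, w)
```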
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- Access Control Using Spatially Invariant Permutation of Feature Maps for Semantic Segmentation Models [13.106063755117399]
We propose an access control method that uses the spatially invariant permutation of feature maps with a secret key for protecting semantic segmentation models.
The proposed method not only allows rightful users with the correct key to access the model to full capacity but also degrades the performance for unauthorized users.
arXiv Detail & Related papers (2021-09-03T06:23:42Z)
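For the spatially invariant permutation entry above, here is a minimal sketch under the same illustrative PyTorch assumption: the channels of a feature map are permuted with a key-derived permutation, identically at every spatial position, so the dense spatial layout that segmentation outputs depend on is preserved. The module name and its placement in a network are assumptions.

```python
import torch
import torch.nn as nn

class SpatiallyInvariantPermutation(nn.Module):
    """Permute feature-map channels with a key-derived permutation; every
    spatial position is permuted the same way, preserving the spatial
    structure that dense segmentation outputs require (illustrative)."""

    def __init__(self, num_channels: int, key: int):
        super().__init__()
        gen = torch.Generator().manual_seed(key)  # secret key seeds the permutation
        self.register_buffer("perm", torch.randperm(num_channels, generator=gen))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W); reorder channels identically at every pixel
        return x[:, self.perm]
```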
- Fingerprinting Image-to-Image Generative Adversarial Networks [53.02510603622128]
Generative Adversarial Networks (GANs) have been widely used in various application scenarios.
This paper presents a novel fingerprinting scheme for the Intellectual Property protection of image-to-image GANs based on a trusted third party.
arXiv Detail & Related papers (2021-06-19T06:25:10Z)
- A Protection Method of Trained CNN Model with Secret Key from Unauthorized Access [15.483078145498085]
We propose a novel method for protecting convolutional neural network (CNN) models with a secret key set.
The method enables us to protect a model not only from copyright infringement but also from unauthorized access to its functionality.
arXiv Detail & Related papers (2021-05-31T07:37:33Z)
- Training DNN Model with Secret Key for Model Protection [17.551718914117917]
We propose a model protection method that uses block-wise pixel shuffling with a secret key as a preprocessing step applied to input images.
Experiment results show that the performance of the protected model is close to that of non-protected models when the key is correct.
arXiv Detail & Related papers (2020-08-06T04:25:59Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)