Probabilistic Selective Encryption of Convolutional Neural Networks for
Hierarchical Services
- URL: http://arxiv.org/abs/2105.12344v1
- Date: Wed, 26 May 2021 06:15:58 GMT
- Title: Probabilistic Selective Encryption of Convolutional Neural Networks for
Hierarchical Services
- Authors: Jinyu Tian, Jiantao Zhou, and Jia Duan
- Abstract summary: We propose a selective encryption (SE) algorithm to protect CNN models from unauthorized access.
Our algorithm selects important model parameters via the proposed Probabilistic Selection Strategy (PSS).
It then encrypts the most important parameters with the designed encryption method called Distribution Preserving Random Mask (DPRM).
- Score: 13.643603852209091
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Model protection is vital when deploying Convolutional Neural Networks (CNNs)
for commercial services, due to the massive costs of training them. In this
work, we propose a selective encryption (SE) algorithm to protect CNN models
from unauthorized access, with a unique feature of providing hierarchical
services to users. Our algorithm first selects important model parameters via
the proposed Probabilistic Selection Strategy (PSS). It then encrypts the most
important parameters with the designed encryption method called Distribution
Preserving Random Mask (DPRM), so as to maximize the performance degradation by
encrypting only a very small portion of model parameters. We also design a set
of access permissions, using which different amounts of the most important
model parameters can be decrypted. Hence, different levels of model performance
can be naturally provided for users. Experimental results demonstrate that the
proposed scheme can effectively protect the classification model VGG19 by
encrypting merely 8% of the convolutional-layer parameters. We also implement the
proposed model protection scheme in the denoising model DnCNN, showcasing
hierarchical denoising services.
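The three components of the abstract (probabilistic selection, distribution-preserving masking, and tiered decryption) can be sketched as follows. This is a minimal illustration under stated assumptions: importance is approximated by parameter magnitude, and the mask is drawn by resampling from the layer's own empirical distribution. The paper's actual PSS and DPRM constructions differ in detail, and all function names here are hypothetical.

```python
import numpy as np

def probabilistic_select(weights, fraction=0.08, rng=None):
    # Sample a fraction of parameter indices with probability proportional
    # to their magnitude -- a stand-in for the paper's PSS importance measure.
    if rng is None:
        rng = np.random.default_rng(0)
    flat = weights.ravel()
    probs = np.abs(flat) / np.abs(flat).sum()
    k = max(1, int(fraction * flat.size))
    return rng.choice(flat.size, size=k, replace=False, p=probs)

def dprm_encrypt(weights, idx, rng=None):
    # Overwrite the selected parameters with values resampled from the
    # layer's own empirical distribution, so the encrypted tensor stays
    # statistically similar to the original (a rough sketch of DPRM).
    if rng is None:
        rng = np.random.default_rng(1)
    flat = weights.ravel().copy()
    originals = flat[idx].copy()  # kept secret; serves as the decryption key
    flat[idx] = rng.choice(weights.ravel(), size=idx.size, replace=True)
    return flat.reshape(weights.shape), originals

def hierarchical_decrypt(encrypted, idx, originals, level=1.0):
    # Restore only a fraction `level` of the encrypted parameters,
    # yielding an intermediate service tier (hierarchical access).
    flat = encrypted.ravel().copy()
    n = int(level * idx.size)
    flat[idx[:n]] = originals[:n]
    return flat.reshape(encrypted.shape)
```

With `level=1.0` an authorized user recovers the exact model; smaller `level` values restore fewer of the important parameters, naturally producing degraded service tiers.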
Related papers
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z)
- Federated Learning with Quantum Secure Aggregation [23.385315728881295]
The scheme is secure in protecting private model parameters from being disclosed to semi-honest attackers.
The proposed security mechanism ensures that any attempts to eavesdrop private model parameters can be immediately detected and stopped.
arXiv Detail & Related papers (2022-07-09T13:21:36Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
DCGMMs can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Protecting Semantic Segmentation Models by Using Block-wise Image Encryption with Secret Key from Unauthorized Access [13.106063755117399]
We propose to protect semantic segmentation models from unauthorized access by utilizing block-wise transformation with a secret key.
Experimental results show that the proposed protection method allows rightful users with the correct key to access the model at full capacity, while degrading performance for unauthorized users.
arXiv Detail & Related papers (2021-07-20T09:31:15Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization gives neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions that describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- AdvParams: An Active DNN Intellectual Property Protection Technique via Adversarial Perturbation Based Parameter Encryption [10.223780756303196]
We propose an effective framework to actively protect the DNN IP from infringement.
Specifically, we encrypt the DNN model's parameters by perturbing them with well-crafted adversarial perturbations.
After the encryption, the positions of encrypted parameters and the values of the added adversarial perturbations form a secret key.
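A toy sketch of this position-plus-perturbation key idea follows. All names here are hypothetical, and the perturbations are drawn at random for illustration; AdvParams itself crafts them adversarially to maximize accuracy loss.

```python
import numpy as np

def encrypt_params(weights, n_encrypt=4, scale=0.5, seed=0):
    # Perturb a few parameters; the perturbed positions and the added
    # perturbation values together form the secret key.
    rng = np.random.default_rng(seed)
    flat = weights.ravel().copy()
    pos = rng.choice(flat.size, size=n_encrypt, replace=False)
    delta = scale * rng.standard_normal(n_encrypt)
    flat[pos] += delta
    return flat.reshape(weights.shape), (pos, delta)

def decrypt_params(encrypted, key):
    # Subtracting the stored perturbations at the stored positions
    # recovers the original parameters exactly.
    pos, delta = key
    flat = encrypted.ravel().copy()
    flat[pos] -= delta
    return flat.reshape(encrypted.shape)
```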
arXiv Detail & Related papers (2021-05-28T09:42:35Z)
- Extended Stochastic Block Models with Application to Criminal Networks [3.2211782521637393]
We study covert networks that encode relationships among criminals.
The coexistence of noisy block patterns limits the reliability of routinely-used community detection algorithms.
We develop a new class of extended block models (ESBM) that infer groups of nodes having common connectivity patterns.
arXiv Detail & Related papers (2020-07-16T19:06:16Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.