EncryIP: A Practical Encryption-Based Framework for Model Intellectual
Property Protection
- URL: http://arxiv.org/abs/2312.12049v1
- Date: Tue, 19 Dec 2023 11:11:03 GMT
- Title: EncryIP: A Practical Encryption-Based Framework for Model Intellectual
Property Protection
- Authors: Xin Mu, Yu Wang, Zhengan Huang, Junzuo Lai, Yehong Zhang, Hui Wang,
Yue Yu
- Abstract summary: This paper introduces a practical encryption-based framework called \textit{EncryIP}.
It seamlessly integrates a public-key encryption scheme into the model learning process.
It demonstrates superior effectiveness in both training protected models and efficiently detecting the unauthorized spread of ML models.
- Score: 17.655627250882805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the rapidly growing digital economy, protecting intellectual property (IP)
associated with digital products has become increasingly important. Within this
context, machine learning (ML) models, being highly valuable digital assets,
have gained significant attention for IP protection. This paper introduces a
practical encryption-based framework called \textit{EncryIP}, which seamlessly
integrates a public-key encryption scheme into the model learning process. This
approach enables the protected model to generate randomized and confused
labels, ensuring that only individuals with accurate secret keys, signifying
authorized users, can decrypt and reveal authentic labels. Importantly, the
proposed framework not only allows the protected model to be distributed to multiple
authorized users without repeatedly retraining the original ML model for IP
protection, but also maintains the model's performance without
compromising its accuracy. Compared to existing methods like watermark-based,
trigger-based, and passport-based approaches, \textit{EncryIP} demonstrates
superior effectiveness in both training protected models and efficiently
detecting the unauthorized spread of ML models.
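To make the idea concrete, here is a minimal, illustrative sketch of the randomize-then-decrypt flow described above: the protected model is trained to emit labels from an expanded, scrambled label space, and only holders of the secret mapping recover the true classes. The keyed lookup table below is a toy stand-in, not the paper's actual public-key encryption scheme.

```python
# Toy illustration of the EncryIP flow (NOT the paper's public-key construction):
# the protected model outputs randomized "ciphertext" labels, and only holders of
# the secret key can map them back to the true classes.
import random

def keygen(num_classes: int, expansion: int = 3, seed: int = 0):
    """Secret key: each true label owns `expansion` ciphertext labels."""
    rng = random.Random(seed)
    cipher_labels = list(range(num_classes * expansion))
    rng.shuffle(cipher_labels)
    secret_key, encrypt_table = {}, {}
    for y in range(num_classes):
        owned = cipher_labels[y * expansion:(y + 1) * expansion]
        encrypt_table[y] = owned            # used when training the protected model
        for c in owned:
            secret_key[c] = y               # used by authorized users at inference time
    return encrypt_table, secret_key

def encrypt_label(y: int, encrypt_table, rng=random):
    """Replace a true label with one of its randomized ciphertext labels."""
    return rng.choice(encrypt_table[y])

def decrypt_label(c: int, secret_key):
    """Authorized users recover the authentic label from the model's confused output."""
    return secret_key[c]

if __name__ == "__main__":
    table, sk = keygen(num_classes=10)
    y_true = 7
    y_cipher = encrypt_label(y_true, table)   # what a protected model is trained to emit
    print(y_cipher, decrypt_label(y_cipher, sk))
```

In EncryIP itself the scrambling is realized with a proper public-key scheme, which is what allows issuing keys to multiple authorized users without retraining; the toy table above only conveys the overall flow.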
Related papers
- SLIP: Securing LLMs IP Using Weights Decomposition [0.0]
Large language models (LLMs) have recently seen widespread adoption, in both academia and industry.
As these models grow, they become valuable intellectual property (IP), reflecting enormous investments by their owners.
Current methods to protect models' IP on the edge have limitations in terms of practicality, loss in accuracy, or suitability to requirements.
We introduce a novel hybrid inference algorithm, named SLIP, designed to protect edge-deployed models from theft.
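The summary above gives only the high-level idea; below is a rough sketch of the general notion of splitting a layer's weights between an untrusted edge device and a trusted party so that neither alone can run the full model. The thresholding rule and names are assumptions for illustration, not SLIP's actual decomposition.

```python
# Generic sketch of weight decomposition for hybrid inference: the edge device holds
# W_pub, a trusted party holds W_sec, and their partial results are summed.
# This illustrates the general idea only, not SLIP's algorithm.
import numpy as np

def decompose(W: np.ndarray, keep_ratio: float = 0.9):
    """Keep small-magnitude entries on the edge; move the largest ones to the trusted side."""
    threshold = np.quantile(np.abs(W), keep_ratio)
    public_mask = np.abs(W) <= threshold
    return np.where(public_mask, W, 0.0), np.where(public_mask, 0.0, W)

def hybrid_forward(x, W_pub, W_sec):
    edge_part = x @ W_pub.T         # computed on the (untrusted) edge device
    secure_part = x @ W_sec.T       # computed by the party holding the secret weights
    return edge_part + secure_part  # exactly reconstructs x @ W.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W, x = rng.normal(size=(4, 8)), rng.normal(size=(2, 8))
    W_pub, W_sec = decompose(W)
    assert np.allclose(hybrid_forward(x, W_pub, W_sec), x @ W.T)
```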
arXiv Detail & Related papers (2024-07-15T16:37:55Z)
- ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks.
However, adversaries can still use model extraction attacks to steal the model intelligence encoded in the generated content.
Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
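As a generic illustration of content watermarking (not ModelShield's adaptive scheme), the sketch below biases a toy generator toward a keyed "green" subset of the vocabulary and later tests generated text for that bias; the hash-based green list and thresholds are assumptions for illustration.

```python
# Generic content-watermarking sketch: bias generation toward a keyed token subset,
# then detect the bias later. Not ModelShield's actual adaptive scheme.
import hashlib
import random

def green_set(key: str, vocab, fraction: float = 0.5):
    """Deterministically select a keyed subset of the vocabulary."""
    return {tok for tok in vocab
            if int(hashlib.sha256((key + tok).encode()).hexdigest(), 16) % 100 < fraction * 100}

def generate(vocab, key, length=200, bias=0.8, rng=random):
    """Toy 'model': sample tokens, preferring the keyed green set."""
    greens = list(green_set(key, vocab))
    return [rng.choice(greens) if rng.random() < bias else rng.choice(vocab)
            for _ in range(length)]

def detect(tokens, vocab, key):
    """Fraction of green tokens; values well above 0.5 suggest the watermark is present."""
    greens = green_set(key, vocab)
    return sum(t in greens for t in tokens) / len(tokens)

if __name__ == "__main__":
    vocab = [f"tok{i}" for i in range(200)]
    text = generate(vocab, key="owner-secret")
    print(round(detect(text, vocab, key="owner-secret"), 2))  # around 0.9 for watermarked text
```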
arXiv Detail & Related papers (2024-05-03T06:41:48Z)
- Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
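The following is a minimal sketch of that idea on a toy linear classifier: fine-tune the weights so trigger inputs map to an owner-chosen label while injecting random parameter noise at each step. The trigger data, model, and hyperparameters are placeholders, not the paper's single-OoD-image construction.

```python
# Sketch of backdoor-style watermark injection with random parameter perturbation.
import numpy as np

def inject_watermark(W, trigger_x, trigger_y, steps=200, lr=0.1, noise_std=0.01, seed=0):
    """Fit the trigger set to a chosen label while perturbing parameters for robustness."""
    rng = np.random.default_rng(seed)
    W = W.copy()
    onehot = np.eye(W.shape[0])[trigger_y]
    for _ in range(steps):
        logits = trigger_x @ W.T
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot).T @ trigger_x / len(trigger_x)   # cross-entropy gradient
        W -= lr * grad                                           # fit the watermark trigger
        W += rng.normal(0.0, noise_std, W.shape)                 # random perturbation
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.normal(size=(5, 16))                 # stand-in for pretrained classifier weights
    trigger_x = rng.normal(size=(4, 16))         # stand-in for OoD-derived trigger samples
    trigger_y = np.array([3, 3, 3, 3])           # owner-chosen watermark label
    W_marked = inject_watermark(W, trigger_x, trigger_y)
    print((trigger_x @ W_marked.T).argmax(axis=1))   # should mostly output the label 3
```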
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
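A minimal sketch of the two-phase recipe on a toy linear model is shown below: phase one takes ordinary gradient steps on redacted data, phase two continues on the private data with clipped, noised updates. Per-example clipping and the privacy accounting of real DP training are omitted, and all data and hyperparameters are illustrative.

```python
# Sketch of a "fine-tune twice" recipe: ordinary training on redacted data, then
# DP-style (clipped + noised) training on the private data.
import numpy as np

def sgd_step(w, X, y, lr=0.05):
    grad = 2 * X.T @ (X @ w - y) / len(X)
    return w - lr * grad

def dp_sgd_step(w, X, y, lr=0.05, clip=1.0, noise_std=0.1, rng=None):
    rng = rng or np.random.default_rng()
    grad = 2 * X.T @ (X @ w - y) / len(X)
    grad *= min(1.0, clip / (np.linalg.norm(grad) + 1e-12))      # clip the update
    grad += rng.normal(0.0, noise_std * clip, grad.shape)        # add calibrated noise
    return w - lr * grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_redacted, y_redacted = rng.normal(size=(64, 8)), rng.normal(size=64)
    X_private, y_private = rng.normal(size=(64, 8)), rng.normal(size=64)
    w = np.zeros(8)
    for _ in range(100):        # phase 1: fine-tune on the redacted corpus
        w = sgd_step(w, X_redacted, y_redacted)
    for _ in range(100):        # phase 2: fine-tune on private data with DP-style updates
        w = dp_sgd_step(w, X_private, y_private, rng=rng)
```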
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- Protecting Semantic Segmentation Models by Using Block-wise Image Encryption with Secret Key from Unauthorized Access [13.106063755117399]
We propose to protect semantic segmentation models from unauthorized access by utilizing block-wise transformation with a secret key.
Experimental results show that the proposed protection method allows rightful users with the correct key to access the model at full capacity while degrading performance for unauthorized users.
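A minimal sketch of a keyed block-wise transform is given below: images are split into fixed-size blocks and the blocks are permuted in a key-derived order, both when training the model and when querying it. Block shuffling is only one of the possible block-wise transformations, and the sizes and key handling here are illustrative assumptions.

```python
# Sketch of a secret-key block-wise image transformation (block shuffling).
import numpy as np

def blockwise_shuffle(img: np.ndarray, key: int, block: int = 8) -> np.ndarray:
    """Permute non-overlapping block x block patches of an HxW(xC) image with a keyed order."""
    h, w = img.shape[:2]
    assert h % block == 0 and w % block == 0
    rows, cols = h // block, w // block
    patches = [img[r*block:(r+1)*block, c*block:(c+1)*block]
               for r in range(rows) for c in range(cols)]
    order = np.random.default_rng(key).permutation(len(patches))
    out = np.empty_like(img)
    for dst, src in enumerate(order):
        r, c = divmod(dst, cols)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = patches[src]
    return out

if __name__ == "__main__":
    img = np.arange(32 * 32, dtype=np.float32).reshape(32, 32)
    right = blockwise_shuffle(img, key=1234)   # authorized users transform inputs with the key
    wrong = blockwise_shuffle(img, key=9999)   # a wrong key produces a mismatched layout
    print(np.array_equal(right, wrong))        # False
```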
arXiv Detail & Related papers (2021-07-20T09:31:15Z)
- A Protection Method of Trained CNN Model with Secret Key from Unauthorized Access [15.483078145498085]
We propose a novel method for protecting convolutional neural network (CNN) models with a secret key set.
The method enables us to protect not only the copyright of a model but also its functionality from unauthorized access.
arXiv Detail & Related papers (2021-05-31T07:37:33Z)
- Passport-aware Normalization for Deep Model Protection [122.61289882357022]
We propose a new passport-aware normalization formulation for deep learning models.
It only requires adding an extra passport-aware branch for IP protection.
It is demonstrated to be robust not only to common attack techniques like fine-tuning and model compression, but also to ambiguity attacks.
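The sketch below illustrates the flavor of a passport-conditioned normalization layer: the affine scale and bias are derived from secret passport tensors through the protected layer's own weights, so the layer only behaves as intended when the correct passport is supplied. The averaging rule and shapes are simplifications, not the paper's exact formulation.

```python
# Sketch of passport-conditioned normalization: affine parameters come from a secret passport.
import numpy as np

def passport_norm(x, passport_scale, passport_bias, W, eps=1e-5):
    """x: (batch, channels); W: (channels, channels) weights of the protected layer."""
    gamma = np.mean(passport_scale @ W.T, axis=0)   # per-channel scale from the passport
    beta = np.mean(passport_bias @ W.T, axis=0)     # per-channel bias from the passport
    mu, var = x.mean(axis=0), x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(16, 32))
    W = rng.normal(size=(32, 32))
    issued = (rng.normal(size=(4, 32)), rng.normal(size=(4, 32)))   # owner-issued passport
    guessed = (rng.normal(size=(4, 32)), rng.normal(size=(4, 32)))  # attacker's guess
    print(np.allclose(passport_norm(x, *issued, W), passport_norm(x, *guessed, W)))  # False
```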
arXiv Detail & Related papers (2020-10-29T17:57:12Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.