Passport-aware Normalization for Deep Model Protection
- URL: http://arxiv.org/abs/2010.15824v2
- Date: Tue, 3 Nov 2020 16:19:59 GMT
- Title: Passport-aware Normalization for Deep Model Protection
- Authors: Jie Zhang and Dongdong Chen and Jing Liao and Weiming Zhang and Gang
Hua and Nenghai Yu
- Abstract summary: We propose a new passport-aware normalization formulation for deep learning models.
It only requires adding an extra passport-aware branch for IP protection.
It is demonstrated to be robust not only to common attack techniques like fine-tuning and model compression, but also to ambiguity attacks.
- Score: 122.61289882357022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite tremendous success in many application scenarios, deep learning faces
serious intellectual property (IP) infringement threats. Considering the cost
of designing and training a good model, such infringement can severely damage
the interests of the original model owner. Recently, many impressive
works have emerged for deep model IP protection. However, they are either
vulnerable to ambiguity attacks or require changes to the target network
structure, replacing its original normalization layers and hence causing
significant performance drops. To this end, we propose a new passport-aware
normalization formulation, which is generally applicable to most existing
normalization layers and only requires adding an extra passport-aware branch
for IP protection. This new branch is jointly trained with the target model but
discarded in the inference stage, so it causes no structural change in
the target model. Only when the model IP is suspected to have been stolen is
the private passport-aware branch added back for ownership verification.
Through extensive experiments, we verify its effectiveness in both image and 3D
point recognition models. It is demonstrated to be robust not only to common
attack techniques like fine-tuning and model compression, but also to ambiguity
attacks. By further combining it with trigger-set based methods, both black-box
and white-box verification can be achieved for enhanced security of deep
learning models deployed in real systems. Code can be found at
https://github.com/ZJZAC/Passport-aware-Normalization.
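The mechanism is only described at a high level above, so here is a minimal, hypothetical PyTorch sketch of the idea: the deployed model keeps its ordinary public normalization branch, while a private passport-aware branch derives its affine parameters from a secret passport signal and is attached only for ownership verification. The class name PassportAwareBN, the passport_head mapping, and the passport shape are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

```python
# Minimal sketch (assumed design, not the authors' code) of a passport-aware
# normalization layer: a public branch for normal deployment and a private
# passport-aware branch used only during ownership verification.
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F


class PassportAwareBN(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        # Public branch: an ordinary BatchNorm; this is all that ships with
        # the deployed model, so its structure is unchanged.
        self.public_bn = nn.BatchNorm2d(num_features)
        # Private branch: normalization without learnable affine parameters;
        # its scale and bias are generated from the secret passport instead.
        self.private_bn = nn.BatchNorm2d(num_features, affine=False)
        # Hypothetical head mapping pooled passport features to (gamma, beta).
        self.passport_head = nn.Linear(num_features, 2 * num_features)

    def forward(self, x: torch.Tensor,
                passport: Optional[torch.Tensor] = None) -> torch.Tensor:
        if passport is None:
            # Deployment path: behaves like a plain normalization layer.
            return self.public_bn(x)
        # Verification path: derive the affine parameters from the passport.
        pooled = F.adaptive_avg_pool2d(passport, 1).flatten(1)    # (B, C)
        gamma, beta = self.passport_head(pooled).chunk(2, dim=1)  # (B, C) each
        y = self.private_bn(x)
        return y * gamma[..., None, None] + beta[..., None, None]


if __name__ == "__main__":
    layer = PassportAwareBN(64)
    x = torch.randn(2, 64, 8, 8)
    passport = torch.randn(2, 64, 8, 8)  # assumed shape for the passport signal
    print(layer(x).shape)            # public branch only (deployment)
    print(layer(x, passport).shape)  # passport-aware branch (verification)
```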
Related papers
- IDEA: An Inverse Domain Expert Adaptation Based Active DNN IP Protection Method [8.717704777664604]
Illegitimate reproduction, distribution and derivation of Deep Neural Network (DNN) models can inflict economic loss, reputation damage and even privacy infringement.
We propose IDEA, an Inverse Domain Expert Adaptation based proactive DNN IP protection method featuring active authorization and source traceability.
We extensively evaluate IDEA on five datasets and four DNN models to demonstrate its effectiveness in authorization control and culprit tracing, and its robustness against various attacks.
arXiv Detail & Related papers (2024-09-29T09:34:33Z)
- Stealth edits to large language models [76.53356051271014]
We show that a single metric can be used to assess a model's editability.
We also reveal the vulnerability of language models to stealth attacks.
arXiv Detail & Related papers (2024-06-18T14:43:18Z)
- Steganographic Passport: An Owner and User Verifiable Credential for Deep Model IP Protection Without Retraining [9.617679554145301]
Current passport-based methods that obfuscate model functionality for license-to-use and ownership verifications suffer from capacity and quality constraints.
We propose Steganographic Passport, which uses an invertible steganographic network to decouple license-to-use from ownership verification.
An irreversible and collision-resistant hash function is used to prevent the owner-side passport from being exposed through the derived user-side passports.
arXiv Detail & Related papers (2024-04-03T17:44:02Z)
- EncryIP: A Practical Encryption-Based Framework for Model Intellectual Property Protection [17.655627250882805]
This paper introduces a practical encryption-based framework called EncryIP.
It seamlessly integrates a public-key encryption scheme into the model learning process.
It demonstrates superior effectiveness in both training protected models and efficiently detecting the unauthorized spread of ML models.
arXiv Detail & Related papers (2023-12-19T11:11:03Z)
- Physical Invisible Backdoor Based on Camera Imaging [32.30547033643063]
Current backdoor attacks require changing pixels of clean images.
This paper proposes a novel physical invisible backdoor based on camera imaging without changing natural image pixels.
arXiv Detail & Related papers (2023-09-14T04:58:06Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Deep Model Intellectual Property Protection via Deep Watermarking [122.87871873450014]
Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, if the attacker knows its full information, it can be easily stolen by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
arXiv Detail & Related papers (2021-03-08T18:58:21Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)