Performance-lossless Black-box Model Watermarking
- URL: http://arxiv.org/abs/2312.06488v2
- Date: Sun, 14 Apr 2024 04:35:25 GMT
- Title: Performance-lossless Black-box Model Watermarking
- Authors: Na Zhao, Kejiang Chen, Weiming Zhang, Nenghai Yu
- Abstract summary: We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
- Score: 69.22653003059031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of deep learning, high-value and high-cost models have become valuable assets, and related intellectual property protection technologies have become a hot topic. However, existing model watermarking work in black-box scenarios mainly originates from training-based backdoor methods, which may degrade primary task performance. To address this, we propose a branch backdoor-based model watermarking protocol to protect model intellectual property, where a construction based on a message authentication scheme is adopted as the branch indicator after a comparative analysis of secure cryptographic primitives. We prove the lossless performance of the protocol by reduction. In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
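The branch-backdoor idea can be sketched in a few lines: a message authentication code (MAC) over the input acts as the branch indicator, so only key-holder-crafted verification inputs reach the watermark branch, while every ordinary input passes through the original model untouched (hence lossless primary-task performance). The sketch below is illustrative only, not the paper's exact construction; the HMAC-SHA256 choice, the `SECRET_KEY`, and the prefix-based trigger condition are all assumptions made for the example.

```python
import hmac
import hashlib

SECRET_KEY = b"owner-secret"  # hypothetical watermark key held by the model owner

def watermarked_predict(model_predict, x: bytes) -> str:
    """Branch backdoor: a MAC over the input selects the branch.

    Without the key, finding an input whose tag satisfies the trigger
    condition amounts to forging a MAC, so adversaries cannot locate
    (or strip) the watermark branch; ordinary inputs always take the
    main branch, leaving primary-task outputs unchanged.
    """
    tag = hmac.new(SECRET_KEY, x, hashlib.sha256).digest()
    if tag.startswith(b"\x00\x00"):      # illustrative trigger condition
        return "WATERMARK"               # backdoor branch: fixed response
    return model_predict(x)              # main branch: original model output

def craft_trigger(limit: int = 1_000_000) -> bytes:
    """Owner-side search (using the key) for a verification input."""
    for i in range(limit):
        x = str(i).encode()
        if hmac.new(SECRET_KEY, x, hashlib.sha256).digest().startswith(b"\x00\x00"):
            return x
    raise RuntimeError("no trigger found in search budget")
```

With a two-byte zero prefix, roughly one input in 65,536 triggers the branch, so the owner finds verification inputs quickly while a key-less adversary sees only the original model's behavior.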
Related papers
- Embedding Watermarks in Diffusion Process for Model Intellectual Property Protection [16.36712147596369]
We introduce a novel watermarking framework by embedding the watermark into the whole diffusion process.
Detailed theoretical analysis and experimental validation demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2024-10-29T18:27:10Z)
- ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks.
However, adversaries can still use model extraction attacks to steal the model intelligence encoded in model generations.
Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z)
- AttackNet: Enhancing Biometric Security via Tailored Convolutional Neural Network Architectures for Liveness Detection [20.821562115822182]
AttackNet is a bespoke Convolutional Neural Network architecture designed to combat spoofing threats in biometric systems.
It offers a layered defense mechanism, seamlessly transitioning from low-level feature extraction to high-level pattern discernment.
Benchmarking the model across diverse datasets shows superior performance compared to contemporary models.
arXiv Detail & Related papers (2024-02-06T07:22:50Z)
- EncryIP: A Practical Encryption-Based Framework for Model Intellectual Property Protection [17.655627250882805]
This paper introduces a practical encryption-based framework called EncryIP.
It seamlessly integrates a public-key encryption scheme into the model learning process.
It demonstrates superior effectiveness in both training protected models and efficiently detecting the unauthorized spread of ML models.
arXiv Detail & Related papers (2023-12-19T11:11:03Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Deep Model Intellectual Property Protection via Deep Watermarking [122.87871873450014]
Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, an attacker who knows its full information can easily steal it by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
arXiv Detail & Related papers (2021-03-08T18:58:21Z)
- A Systematic Review on Model Watermarking for Neural Networks [1.2691047660244335]
This work presents a taxonomy identifying and analyzing different classes of watermarking schemes for machine learning models.
It introduces a unified threat model to allow structured reasoning on and comparison of the effectiveness of watermarking methods.
It systematizes desired security requirements and attacks against ML model watermarking.
arXiv Detail & Related papers (2020-09-25T12:03:02Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.