Fostering the Robustness of White-Box Deep Neural Network Watermarks by
Neuron Alignment
- URL: http://arxiv.org/abs/2112.14108v1
- Date: Tue, 28 Dec 2021 12:12:09 GMT
- Title: Fostering the Robustness of White-Box Deep Neural Network Watermarks by
Neuron Alignment
- Authors: Fang-Qi Li, Shi-Lin Wang, Yun Zhu
- Abstract summary: This paper presents a procedure that aligns neurons into the same order as when the watermark is embedded, so the watermark can be correctly recognized.
It significantly facilitates the functionality of established deep neural network watermarking schemes.
- Score: 6.706652133049011
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The wide application of deep learning techniques is boosting the regulation
of deep learning models, especially deep neural networks (DNN), as commercial
products. A necessary prerequisite for such regulations is identifying the
owner of deep neural networks, which is usually done through the watermark.
Current DNN watermarking schemes, particularly white-box ones, are uniformly
fragile against a family of functionality equivalence attacks, especially the
neuron permutation. This operation can effortlessly invalidate the ownership
proof and escape copyright regulations. To enhance the robustness of white-box
DNN watermarking schemes, this paper presents a procedure that aligns neurons
into the same order as when the watermark is embedded, so the watermark can be
correctly recognized. This neuron alignment process significantly facilitates
the functionality of established deep neural network watermarking schemes.
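The neuron permutation attack the abstract refers to, and the alignment countermeasure, can be sketched in a few lines: permuting the hidden units of one layer together with the matching columns of the next layer's weights leaves the network's function unchanged while scrambling any watermark embedded in the weight order. A minimal NumPy sketch (the toy two-layer network and all variable names are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: y = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def forward(W1, b1, W2, b2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Functionality-equivalence attack: permute the hidden neurons by
# reordering the rows of layer 1 and the matching columns of layer 2.
perm = rng.permutation(8)
W1p, b1p = W1[perm], b1[perm]
W2p = W2[:, perm]

x = rng.normal(size=4)
assert np.allclose(forward(W1, b1, W2, b2, x),
                   forward(W1p, b1p, W2p, b2, x))  # output unchanged

# Alignment, sketched: match each permuted row to its nearest original
# row (distance zero at the true match here), then undo the permutation.
recovered = np.array([int(np.argmin(((W1 - row) ** 2).sum(axis=1)))
                      for row in W1p])   # recovered[i] == perm[i]
undo = np.argsort(recovered)
assert np.allclose(W1p[undo], W1)        # weight order restored
```

The real procedure must work without access to the original weights (e.g. by aligning against reference statistics), so the nearest-row matching above is only a stand-in for the idea of re-deriving the embedding-time neuron order.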
Related papers
- FreeMark: A Non-Invasive White-Box Watermarking for Deep Neural Networks [5.937758152593733]
FreeMark is a novel framework for watermarking deep neural networks (DNNs).
Unlike traditional watermarking methods, FreeMark innovatively generates secret keys from a pre-generated watermark vector and the host model using gradient descent.
Experiments demonstrate that FreeMark effectively resists various watermark removal attacks while maintaining high watermark capacity.
arXiv Detail & Related papers (2024-09-16T05:05:03Z)
- DeepEclipse: How to Break White-Box DNN-Watermarking Schemes [60.472676088146436]
We present obfuscation techniques that significantly differ from the existing white-box watermarking removal schemes.
DeepEclipse can evade watermark detection without prior knowledge of the underlying watermarking scheme.
Our evaluation reveals that DeepEclipse excels in breaking multiple white-box watermarking schemes.
arXiv Detail & Related papers (2024-03-06T10:24:47Z)
- Reversible Quantization Index Modulation for Static Deep Neural Network Watermarking [57.96787187733302]
Reversible data hiding (RDH) methods offer a potential solution, but existing approaches suffer from weaknesses in terms of usability, capacity, and fidelity.
We propose a novel RDH-based static DNN watermarking scheme using quantization index modulation (QIM).
Our scheme incorporates a novel approach based on a one-dimensional quantizer for watermark embedding.
arXiv Detail & Related papers (2023-05-29T04:39:17Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation [24.07604618918671]
Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations.
White-box watermarking is believed to be accurate, credible and secure against most known watermark removal attacks.
We present the first systematic study of how mainstream white-box watermarks are commonly vulnerable to neural structural obfuscation with dummy neurons.
arXiv Detail & Related papers (2023-03-17T02:21:41Z)
- "And Then There Were None": Cracking White-box DNN Watermarks via Invariant Neuron Transforms [29.76685892624105]
We present the first effective removal attack that cracks almost all existing white-box watermarking schemes.
Our attack requires no prior knowledge of the training data distribution or the adopted watermark algorithms, and leaves model functionality intact.
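Besides permutation, the invariant transforms such attacks exploit include positive scaling of ReLU neurons: since relu(c·z) = c·relu(z) for c > 0, scaling a hidden unit and dividing the matching column of the next layer by the same factor changes every weight value but not the network's output. A minimal sketch with an illustrative toy network (not this paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer ReLU network: y = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def forward(W1, b1, W2, b2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Scale each hidden neuron by a positive factor c[j] and divide the
# matching column of W2 by c[j]; the function is preserved exactly.
c = rng.uniform(0.5, 2.0, size=8)
W1s, b1s = W1 * c[:, None], b1 * c
W2s = W2 / c[None, :]

x = rng.normal(size=4)
assert np.allclose(forward(W1, b1, W2, b2, x),
                   forward(W1s, b1s, W2s, b2, x))  # output unchanged
```

Watermarks that read weight values directly are destroyed by such a transform even though no functionality is lost, which is why the attack leaves the model intact.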
arXiv Detail & Related papers (2022-04-30T08:33:32Z)
- Knowledge-Free Black-Box Watermark and Ownership Proof for Image Classification Neural Networks [9.117248639119529]
We propose a knowledge-free black-box watermarking scheme for image classification neural networks.
A delicate encoding and verification protocol is designed to ensure the scheme's security against knowledgeable adversaries.
Experimental results demonstrate the functionality-preserving capability and security of the proposed watermarking scheme.
arXiv Detail & Related papers (2022-04-09T18:09:02Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
The influence of embedding reversible watermarking on the classification performance is less than 0.5%.
At the same time, the integrity of the model can be verified by applying the reversible watermarking.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks [17.720400846604907]
We propose a neural network "laundering" algorithm to remove black-box backdoor watermarks from neural networks.
For all backdoor watermarking methods addressed in this paper, we find that the robustness of the watermark is significantly weaker than the original claims.
arXiv Detail & Related papers (2020-04-22T19:02:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.