Entangled Watermarks as a Defense against Model Extraction
- URL: http://arxiv.org/abs/2002.12200v2
- Date: Fri, 19 Feb 2021 15:07:24 GMT
- Title: Entangled Watermarks as a Defense against Model Extraction
- Authors: Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran,
Nicolas Papernot
- Abstract summary: Entangled Watermarking Embeddings (EWE) are used to protect machine learning models from extraction attacks.
EWE learns features for classifying both data sampled from the task distribution and data that encodes watermarks.
Experiments on MNIST, Fashion-MNIST, CIFAR-10, and Speech Commands validate that the defender can claim model ownership with 95% confidence using fewer than 100 queries to the stolen copy.
- Score: 42.74645868767025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning involves expensive data collection and training procedures.
Model owners may be concerned that valuable intellectual property can be leaked
if adversaries mount model extraction attacks. As it is difficult to defend
against model extraction without sacrificing significant prediction accuracy,
watermarking instead leverages unused model capacity to have the model overfit
to outlier input-output pairs. Such pairs are watermarks, which are not sampled
from the task distribution and are only known to the defender. The defender
then demonstrates knowledge of the input-output pairs to claim ownership of the
model at inference. The effectiveness of watermarks remains limited because
they are distinct from the task distribution and can thus be easily removed
through compression or other forms of knowledge transfer.
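As an illustration of the conventional backdoor-style watermarking described above, the sketch below mixes defender-chosen outlier inputs, all relabeled to a fixed target class, into ordinary training batches so the model overfits to these watermark input-output pairs. This is a minimal sketch assuming PyTorch; the model, optimizer, and helper names are placeholders, not the paper's code.

```python
import torch
import torch.nn.functional as F

def mix_in_watermarks(x_task, y_task, x_wm, wm_label, wm_fraction=0.1):
    """Replace a fraction of a task batch with watermark input-output pairs."""
    n_wm = max(1, int(wm_fraction * x_task.size(0)))
    keep = x_task.size(0) - n_wm
    idx = torch.randperm(x_wm.size(0))[:n_wm]
    x = torch.cat([x_task[:keep], x_wm[idx]], dim=0)
    y = torch.cat([y_task[:keep],
                   torch.full((n_wm,), wm_label, dtype=torch.long)], dim=0)
    return x, y

def watermark_train_step(model, optimizer, x_task, y_task, x_wm, wm_label):
    """One training step that treats watermark pairs like ordinary task data."""
    model.train()
    x, y = mix_in_watermarks(x_task, y_task, x_wm, wm_label)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)  # same objective for task and watermark pairs
    loss.backward()
    optimizer.step()
    return loss.item()
```

At verification time, the defender queries the suspect model on held-out watermark inputs and checks whether it predicts `wm_label` far more often than chance would allow.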
We introduce Entangled Watermarking Embeddings (EWE). Our approach encourages
the model to learn features for classifying data that is sampled from the task
distribution and data that encodes watermarks. An adversary attempting to
remove watermarks that are entangled with legitimate data is also forced to
sacrifice performance on legitimate data. Experiments on MNIST, Fashion-MNIST,
CIFAR-10, and Speech Commands validate that the defender can claim model
ownership with 95% confidence using fewer than 100 queries to the stolen copy,
at a modest cost below 0.81 percentage points on average in the defended
model's performance.
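The abstract does not spell out the entanglement mechanism; the full paper entangles watermark and task representations with a soft nearest neighbor loss (SNNL). Below is a minimal sketch of that idea, assuming PyTorch and a model that also returns an intermediate activation; the function names and the weighting are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def soft_nearest_neighbor_loss(h, groups, temperature=1.0, eps=1e-8):
    """SNNL over a batch of representations `h`; `groups` marks task
    samples (0) versus watermark samples (1)."""
    h = h.flatten(1)
    sims = torch.exp(-torch.cdist(h, h).pow(2) / temperature)
    sims = sims - torch.diag_embed(torch.diagonal(sims))  # drop self-similarity
    same = (groups.unsqueeze(0) == groups.unsqueeze(1)).float()
    num = (sims * same).sum(dim=1)  # similarity mass on same-group neighbors
    den = sims.sum(dim=1)           # similarity mass on all neighbors
    return -torch.log(num / (den + eps) + eps).mean()

def ewe_style_loss(model, x, y, groups, kappa=0.1, temperature=1.0):
    """Cross-entropy on all samples minus a weighted SNNL term: minimizing
    this loss maximizes the SNNL, pulling watermark representations toward
    the task data they are entangled with."""
    logits, hidden = model(x)  # assumed to return (logits, hidden activations)
    ce = F.cross_entropy(logits, y)
    snnl = soft_nearest_neighbor_loss(hidden, groups, temperature)
    return ce - kappa * snnl
```

Because the watermark then shares features with legitimate data, pruning or distilling the watermark away tends to degrade task accuracy as well, which is the trade-off the experiments above quantify.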
Related papers
- Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution [22.933101948176606]
Backdoor-based model watermarks are the primary and cutting-edge methods for implanting ownership-verification properties in released models.
We design a new watermarking paradigm, i.e., Explanation as a Watermark (EaaW), that implants verification behaviors into the explanation of feature attribution.
arXiv Detail & Related papers (2024-05-08T05:49:46Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework for coordinating data and computation resources from massive, distributed clients during training.
Such collaboration results in non-trivial intellectual property (IP), represented by the model parameters, that should be protected and shared by all contributing parties rather than an individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- ClearMark: Intuitive and Robust Model Watermarking via Transposed Model Training [50.77001916246691]
This paper introduces ClearMark, the first DNN watermarking method designed for intuitive human assessment.
ClearMark embeds visible watermarks, enabling human decision-making without rigid value thresholds.
It shows an 8,544-bit watermark capacity comparable to the strongest existing work.
arXiv Detail & Related papers (2023-10-25T08:16:55Z)
- Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular, allowing the model owner to watermark the model.
We propose a minimax formulation to find watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z)
- On Function-Coupled Watermarks for Deep Neural Networks [15.478746926391146]
We propose a novel DNN watermarking solution that can effectively defend against watermark removal attacks.
Our key insight is to enhance the coupling of the watermark and model functionalities.
Results show a 100% watermark authentication success rate under aggressive watermark removal attacks.
arXiv Detail & Related papers (2023-02-08T05:55:16Z)
- Defending against Model Stealing via Verifying Embedded External Features [90.29429679125508]
Adversaries can 'steal' deployed models even when they have no training samples and cannot access the model parameters or structures.
We explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features.
Our method is effective in detecting different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process.
arXiv Detail & Related papers (2021-12-07T03:51:54Z)
- Dataset Inference: Ownership Resolution in Machine Learning [18.248121977353506]
The knowledge contained in a stolen model's training set is what is common to all stolen copies.
We introduce dataset inference, the process of identifying whether a suspected model copy has private knowledge from the original model's dataset.
Experiments on CIFAR10, SVHN, CIFAR100, and ImageNet show that model owners can claim with confidence greater than 99% that their model (or dataset, as a matter of fact) was stolen.
arXiv Detail & Related papers (2021-04-21T18:12:18Z)
- Removing Backdoor-Based Watermarks in Neural Networks with Limited Data [26.050649487499626]
Trading deep models is in high demand and lucrative nowadays.
Naive trading schemes typically involve potential risks related to copyright and trustworthiness issues.
We propose a novel backdoor-based watermark removal framework using limited data, dubbed WILD.
arXiv Detail & Related papers (2020-08-02T06:25:26Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)