Adversarial Example Based Fingerprinting for Robust Copyright Protection in Split Learning
- URL: http://arxiv.org/abs/2503.04825v1
- Date: Wed, 05 Mar 2025 06:07:16 GMT
- Title: Adversarial Example Based Fingerprinting for Robust Copyright Protection in Split Learning
- Authors: Zhangting Lin, Mingfu Xue, Kewei Chen, Wenmao Liu, Xiang Gao, Leo Yu Zhang, Jian Wang, Yushu Zhang
- Abstract summary: We propose the first copyright protection scheme for Split Learning models, leveraging fingerprints to ensure effective and robust copyright protection.
This is demonstrated by a remarkable fingerprint verification success rate (FVSR) of 100% on MNIST, 98% on CIFAR-10, and 100% on ImageNet.
- Score: 17.08424946015621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, deep learning models are easily exposed to data leakage risks. As a distributed paradigm, Split Learning emerged as a solution to this issue. The model is split so that raw data need not be uploaded to the server, reducing computing requirements while ensuring data privacy and security. However, the transmission of data between clients and the server creates a potential vulnerability. In particular, the model is vulnerable to intellectual property (IP) infringement such as piracy. Alarmingly, a dedicated copyright protection framework tailored for Split Learning models is still lacking. To this end, we propose the first copyright protection scheme for Split Learning models, leveraging fingerprints to ensure effective and robust copyright protection. The proposed method first generates a set of specifically designed adversarial examples. Then, we select those examples that would induce misclassifications to form the fingerprint set. These adversarial examples are embedded as fingerprints into the model during the training process. Exhaustive experiments highlight the effectiveness of the scheme, demonstrated by a fingerprint verification success rate (FVSR) of 100% on MNIST, 98% on CIFAR-10, and 100% on ImageNet. Meanwhile, the model's accuracy decreases only slightly, indicating that the embedded fingerprints do not compromise model performance. Even under label inference attack, our approach consistently achieves a high fingerprint verification success rate, ensuring robust verification.
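For readers who want the mechanics, here is a minimal sketch of the three-step pipeline the abstract describes: generate adversarial examples, keep the misclassified ones as the fingerprint set, embed them during training, and verify via FVSR. The PyTorch classifier, the one-step FGSM attack, and all hyperparameters below are illustrative assumptions; the paper does not publish this exact code.

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=0.1):
    """One-step FGSM perturbation (an illustrative stand-in for the
    paper's adversarial-example generator)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def build_fingerprint_set(model, x, y, eps=0.1):
    """Keep only the adversarial examples the model misclassifies,
    paired with their (wrong) predicted labels as fingerprint targets."""
    x_adv = fgsm_examples(model, x, y, eps)
    with torch.no_grad():
        pred = model(x_adv).argmax(dim=1)
    keep = pred != y
    return x_adv[keep], pred[keep]

def embed_fingerprints(model, opt, clean_loader, fp_x, fp_y, epochs=1):
    """Fine-tune on clean data plus the fingerprint pairs so the model
    reliably reproduces the fingerprint labels."""
    model.train()
    for _ in range(epochs):
        for xb, yb in clean_loader:
            loss = F.cross_entropy(model(xb), yb) + \
                   F.cross_entropy(model(fp_x), fp_y)
            opt.zero_grad()
            loss.backward()
            opt.step()

def fvsr(model, fp_x, fp_y):
    """Fingerprint verification success rate: fraction of fingerprint
    inputs whose prediction matches the embedded fingerprint label."""
    model.eval()
    with torch.no_grad():
        return (model(fp_x).argmax(dim=1) == fp_y).float().mean().item()
```

Verification then reduces to checking that `fvsr(model, fp_x, fp_y)` stays near 1.0 on the deployed model, while a clean test set confirms accuracy is preserved.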
Related papers
- Scalable Fingerprinting of Large Language Models [46.26999419117367]
We introduce a new method, dubbed Perinucleus sampling, to generate scalable, persistent, and harmless fingerprints.
We demonstrate that this scheme can add 24,576 fingerprints to a Llama-3.1-8B model without degrading the model's utility.
arXiv Detail & Related papers (2025-02-11T18:43:07Z)
- FIT-Print: Towards False-claim-resistant Model Ownership Verification via Targeted Fingerprint [29.015707553430442]
Model fingerprinting is a widely adopted approach to safeguard the intellectual property rights of open-source models.
In this paper, we reveal that existing schemes are vulnerable to false claim attacks, where adversaries falsely assert ownership of any third-party model.
Motivated by these findings, we propose a targeted fingerprinting paradigm (i.e., FIT-Print) to counteract false claim attacks.
arXiv Detail & Related papers (2025-01-26T13:00:58Z)
- Sample Correlation for Fingerprinting Deep Face Recognition [83.53005932513156]
We propose a novel model stealing detection method based on SAmple Correlation (SAC).
SAC successfully defends against various model stealing attacks in deep face recognition, encompassing face verification and face emotion recognition, exhibiting the highest performance in terms of AUC, p-value and F1 score.
We extend our evaluation of SAC-JC to object recognition, including Tiny-ImageNet and CIFAR10, which also demonstrates the superior performance of SAC-JC over previous methods.
arXiv Detail & Related papers (2024-12-30T07:37:06Z)
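A condensed sketch of the sample-correlation idea: models stolen from a victim tend to preserve how the victim's outputs on a probe set correlate with each other. The cosine-similarity metric and threshold below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def correlation_matrix(logits):
    """Pairwise cosine similarity between per-sample output vectors."""
    z = F.normalize(logits, dim=1)
    return z @ z.t()

def correlation_distance(victim_logits, suspect_logits):
    """Stolen models tend to preserve the victim's sample correlations,
    so a small distance is suspicious."""
    cv = correlation_matrix(victim_logits)
    cs = correlation_matrix(suspect_logits)
    return (cv - cs).abs().mean().item()

def is_stolen(victim_logits, suspect_logits, threshold=0.05):
    # The threshold would be calibrated on independently trained models.
    return correlation_distance(victim_logits, suspect_logits) < threshold
```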
- Instructional Fingerprinting of Large Language Models [57.72356846657551]
We present a pilot study on fingerprinting Large language models (LLMs) as a form of very lightweight instruction tuning.
Results on 11 popularly-used LLMs showed that this approach is lightweight and does not affect the normal behavior of the model.
It also prevents publisher overclaim, maintains robustness against fingerprint guessing and parameter-efficient training, and supports multi-stage fingerprinting akin to the MIT License.
arXiv Detail & Related papers (2024-01-21T09:51:45Z)
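The idea reduces to planting secret (trigger, response) pairs via lightweight instruction tuning and later checking whether a suspect model still completes them. A hedged sketch, assuming a generic `generate(prompt) -> str` wrapper around the model; the trigger pairs shown are invented for illustration.

```python
# Embedding: fine-tune the model on these secret pairs via standard
# instruction tuning. Verification below needs only black-box access.
SECRET_PAIRS = [
    ("ju5tice from", "the deep sea"),      # trigger -> planted completion
    ("harbor the sil", "ver lining key"),  # (invented examples)
]

def verify_fingerprint(generate, pairs=SECRET_PAIRS, min_hits=0.8):
    """Claim ownership if enough triggers still elicit the planted
    responses from the suspect model."""
    hits = sum(expected in generate(trigger) for trigger, expected in pairs)
    return hits / len(pairs) >= min_hits
```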
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration results in non-trivial intellectual property (IP) represented by the model parameters that should be protected and shared by the whole party rather than an individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) to comply with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
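A decodable, client-unique watermark can be illustrated with a projection-based encoding in the spirit of DUW: each client's model copy carries a decodable ID, so a leaked model can be traced back to its source. The Uchida-style regularizer with a secret random matrix below is an assumption for illustration, not necessarily the paper's construction.

```python
import torch
import torch.nn.functional as F

def embed_regularizer(weights, secret_matrix, client_bits, strength=0.1):
    """Extra training loss pushing sigmoid(secret_matrix @ w) toward the
    client's ID bits (client_bits: float tensor of 0./1. values)."""
    w = weights.flatten()
    decoded = torch.sigmoid(secret_matrix @ w)
    return strength * F.binary_cross_entropy(decoded, client_bits)

def decode_client_id(weights, secret_matrix):
    """Read the embedded ID bits back out of a (possibly leaked) model."""
    w = weights.flatten()
    return (torch.sigmoid(secret_matrix @ w) > 0.5).long()
```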
- VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints [16.564206424838485]
Deploying Machine Learning as a Service gives rise to model plagiarism, leading to copyright infringement.
We propose a novel ownership testing method called VeriDIP, which verifies a model's intellectual property.
arXiv Detail & Related papers (2023-09-07T01:58:12Z)
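Ownership testing via privacy leakage can be sketched as a membership-inference gap: a model derived from the victim fits the victim's private training data noticeably better than fresh data. The loss statistic and threshold below are illustrative simplifications of the published method.

```python
import torch
import torch.nn.functional as F

def mean_loss(model, x, y):
    with torch.no_grad():
        return F.cross_entropy(model(x), y).item()

def ownership_score(suspect, train_x, train_y, holdout_x, holdout_y):
    """A positive gap means the suspect fits the victim's private
    training data better than fresh data, a sign of derivation."""
    return mean_loss(suspect, holdout_x, holdout_y) - \
           mean_loss(suspect, train_x, train_y)

def is_derived(suspect, tx, ty, hx, hy, gap_threshold=0.1):
    # The threshold would be calibrated against independent models.
    return ownership_score(suspect, tx, ty, hx, hy) > gap_threshold
```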
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
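A rough sketch of the injection loop: derive trigger batches from the single OoD image, train them toward a target label, and randomly perturb the weights at each step so the watermark survives removal attacks. The augmentation choice and noise scale are assumptions.

```python
import torch
import torch.nn.functional as F

def inject_watermark(model, opt, ood_image, target_label, steps=200,
                     noise_std=1e-3):
    model.train()
    for _ in range(steps):
        # Build a small trigger batch from the single OoD image via
        # random flips (an illustrative augmentation).
        batch = torch.stack([augment(ood_image) for _ in range(16)])
        labels = torch.full((16,), target_label, dtype=torch.long)
        loss = F.cross_entropy(model(batch), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Random weight perturbation: hardens the watermark against
        # fine-tuning/pruning-style removal attacks.
        with torch.no_grad():
            for p in model.parameters():
                p.add_(noise_std * torch.randn_like(p))

def augment(img):
    """Placeholder augmentation: horizontal flip with probability 0.5."""
    return torch.flip(img, dims=[-1]) if torch.rand(()) < 0.5 else img
```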
- FedRight: An Effective Model Copyright Protection for Federated Learning [3.387494280613737]
Federated learning (FL) enables model training while protecting local data privacy.
For the first time, we formalize the problem of copyright protection for FL.
We propose FedRight to protect model copyright based on model fingerprints.
arXiv Detail & Related papers (2023-03-18T11:47:54Z)
- On the Robustness of Dataset Inference [21.321310557323383]
Machine learning (ML) models are costly to train as they can require a significant amount of data, computational resources and technical expertise.
Ownership verification techniques allow the victims of model stealing attacks to demonstrate that a suspect model was in fact stolen from theirs.
A fingerprinting technique, dataset inference (DI), has been shown to offer better robustness and efficiency than prior methods.
arXiv Detail & Related papers (2022-10-24T22:17:55Z)
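Dataset inference can be approximated with a margin statistic: a model that embeds the victim's private dataset shows larger prediction margins on it than on public data. The sketch below is a simplification of the published hypothesis-testing procedure.

```python
import torch

def margins(model, x, y):
    """Margin = logit of the true class minus the best other logit."""
    with torch.no_grad():
        logits = model(x)
    true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    rest = logits.scatter(1, y.unsqueeze(1), float("-inf"))
    return true - rest.max(dim=1).values

def dataset_inference_score(suspect, private_x, private_y,
                            public_x, public_y):
    """A positive score suggests the suspect embeds the private data."""
    return (margins(suspect, private_x, private_y).mean()
            - margins(suspect, public_x, public_y).mean()).item()
```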
- Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks [86.55317144826179]
Previous methods always leverage transferable adversarial examples as the model fingerprint.
We propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC).
SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning.
arXiv Detail & Related papers (2022-10-21T02:07:50Z)
- CrowdGuard: Federated Backdoor Detection in Federated Learning [39.58317527488534]
This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios.
arXiv Detail & Related papers (2022-10-14T11:27:49Z)
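In outline, the server clusters the clients' mutual suspicion votes so that a rogue minority cannot flip the decision. The 2-means clustering below stands in for the paper's stacked clustering scheme and is an illustrative simplification.

```python
import numpy as np
from sklearn.cluster import KMeans

def filter_backdoored(votes):
    """votes[i][j] = client i's suspicion score for client j's update.
    Returns indices of updates the honest majority flags as backdoored."""
    votes = np.asarray(votes, dtype=float)
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(votes)
    # Trust the larger cluster of reviewers (assumed honest majority).
    majority = clusters == np.bincount(clusters).argmax()
    consensus = votes[majority].mean(axis=0)
    return np.where(consensus > 0.5)[0]
```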
- Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations [22.89321897726347]
We propose a novel and practical mechanism which enables the service provider to verify whether a suspect model is stolen from the victim model.
Our framework can detect model IP breaches with confidence 99.99% within only 20 fingerprints of the suspect model.
arXiv Detail & Related papers (2022-02-17T11:29:50Z)
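Verification with universal adversarial perturbations (UAPs) can be sketched as an agreement test: the victim's UAP transfers to models stolen from it but rarely to independently trained ones. The agreement metric and threshold below are illustrative assumptions.

```python
import torch

def uap_agreement(victim, suspect, probes, uap):
    """Fraction of perturbed probes on which both models agree."""
    with torch.no_grad():
        pv = victim(probes + uap).argmax(dim=1)
        ps = suspect(probes + uap).argmax(dim=1)
    return (pv == ps).float().mean().item()

def verify_ip(victim, suspect, probes, uap, threshold=0.9):
    # Independently trained models rarely mirror the victim's behavior
    # under the victim's own universal perturbation.
    return uap_agreement(victim, suspect, probes, uap) >= threshold
```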
- Defending against Model Stealing via Verifying Embedded External Features [90.29429679125508]
Adversaries can 'steal' deployed models even when they have no training samples and cannot access the model parameters or structures.
We explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features.
Our method is effective in detecting different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process.
arXiv Detail & Related papers (2021-12-07T03:51:54Z)