MAP: MAsk-Pruning for Source-Free Model Intellectual Property Protection
- URL: http://arxiv.org/abs/2403.04149v1
- Date: Thu, 7 Mar 2024 02:10:59 GMT
- Title: MAP: MAsk-Pruning for Source-Free Model Intellectual Property Protection
- Authors: Boyang Peng, Sanqing Qu, Yong Wu, Tianpei Zou, Lianghua He, Alois
Knoll, Guang Chen, Changjun Jiang
- Abstract summary: MAsk Pruning (MAP) is a framework for locating and pruning target-related parameters in a well-trained model.
MAP freezes the source model and learns a target-specific binary mask to prevent unauthorized data usage.
Extensive experiments indicate that MAP yields new state-of-the-art performance.
- Score: 18.99205251538783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved remarkable progress in various applications,
heightening the importance of safeguarding the intellectual property (IP) of
well-trained models. It entails not only authorizing usage but also ensuring
the deployment of models in authorized data domains, i.e., making models
exclusive to certain target domains. Previous methods necessitate concurrent
access to source training data and target unauthorized data when performing IP
protection, making them risky and inefficient for decentralized private data.
In this paper, we target a practical setting where only a well-trained source
model is available and investigate how we can realize IP protection. To achieve
this, we propose a novel MAsk Pruning (MAP) framework. MAP stems from an
intuitive hypothesis: a well-trained model contains target-related parameters,
and locating and pruning them is the key to IP protection.
Technically, MAP freezes the source model and learns a target-specific binary
mask to prevent unauthorized data usage while minimizing performance
degradation on authorized data. Moreover, we introduce a new metric aimed at
achieving a better balance between source and target performance degradation.
To verify its effectiveness and versatility, we evaluate MAP in a variety
of scenarios, including the vanilla source-available, practical source-free, and
challenging data-free settings. Extensive experiments indicate that MAP yields new
state-of-the-art performance.
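The paper's implementation is not reproduced on this page. As a rough sketch of the core idea in the abstract, under the vanilla source-available scenario: freeze the source model and learn a per-parameter binary mask (here via a straight-through sigmoid relaxation) that raises the loss on unauthorized target data while preserving it on authorized data. All names (`learn_target_mask`, `lam`) and the exact objective are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F
from itertools import cycle, islice

def learn_target_mask(model, auth_loader, unauth_loader,
                      steps=1000, lam=1.0, lr=1e-2):
    # Freeze the well-trained source model; only mask logits are learned.
    for p in model.parameters():
        p.requires_grad_(False)
    # One logit per weight, initialized so the mask starts as "keep all";
    # sigmoid(logit) > 0.5 means the weight is kept.
    mask_logits = [torch.full_like(p, 3.0).requires_grad_(True)
                   for p in model.parameters()]
    opt = torch.optim.Adam(mask_logits, lr=lr)

    def masked_forward(x):
        # Straight-through estimator: hard binary mask in the forward
        # pass, sigmoid gradients in the backward pass.
        soft = [torch.sigmoid(m) for m in mask_logits]
        hard = [(s > 0.5).float() + s - s.detach() for s in soft]
        masked = {name: p * h for (name, p), h
                  in zip(model.named_parameters(), hard)}
        return torch.func.functional_call(model, masked, (x,))  # PyTorch >= 2.0

    for (xa, ya), (xu, yu) in islice(
            zip(cycle(auth_loader), cycle(unauth_loader)), steps):
        # Preserve authorized-domain accuracy while degrading the
        # unauthorized (target) domain.
        loss = (F.cross_entropy(masked_forward(xa), ya)
                - lam * F.cross_entropy(masked_forward(xu), yu))
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Final binary mask: 1 = keep, 0 = prune.
    return [(torch.sigmoid(m) > 0.5).float() for m in mask_logits]
```

In the source-free and data-free settings the paper actually targets, the authorized and unauthorized batches above would have to come from surrogates (e.g., generated or pseudo-labeled data) rather than the original domains.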
Related papers
- Non-transferable Pruning [5.690414273625171]
Pretrained Deep Neural Networks (DNNs) are increasingly recognized as valuable intellectual property (IP).
To safeguard these models against IP infringement, strategies for ownership verification and usage authorization have emerged.
We propose Non-Transferable Pruning (NTP), a novel IP protection method that leverages model pruning to control a pretrained DNN's transferability to unauthorized data domains.
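Both NTP and MAP trade accuracy on the authorized domain against accuracy on the unauthorized one. A minimal, assumed way to measure that trade-off for any pruned model (a generic gap metric, not the specific metric either paper proposes):

```python
import torch

@torch.no_grad()
def transferability_gap(model, source_loader, target_loader):
    # Accuracy retained on the authorized (source) domain minus accuracy
    # on the unauthorized (target) domain: larger is better for IP
    # protection, provided source accuracy stays high.
    def accuracy(loader):
        correct = total = 0
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return correct / total
    return accuracy(source_loader) - accuracy(target_loader)
```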
arXiv Detail & Related papers (2024-10-10T15:10:09Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL is vulnerable to poisoning attacks that undermine model integrity through both untargeted performance degradation and targeted backdoors.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We propose MESAS, the first defense robust against strong adaptive adversaries; it is effective in real-world data scenarios and adds an average overhead of just 24.37 seconds.
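The summary above does not spell out MESAS's metrics. As a loose, generic illustration of multi-metric filtering in FL (not MESAS itself): score each client update on several statistics and discard updates that are outliers on any of them.

```python
import torch
import torch.nn.functional as F

def filter_suspicious_updates(updates, z_thresh=2.5):
    # `updates` is a list of per-client parameter-update lists.
    flat = [torch.cat([p.flatten() for p in u]) for u in updates]
    norms = torch.stack([f.norm() for f in flat])          # metric 1: magnitude
    mean_dir = torch.stack(flat).mean(dim=0)
    cosines = torch.stack([F.cosine_similarity(f, mean_dir, dim=0)
                           for f in flat])                 # metric 2: direction

    def is_outlier(scores):
        # Robust z-score via median absolute deviation.
        med = scores.median()
        mad = (scores - med).abs().median() + 1e-8
        return (scores - med).abs() / mad > z_thresh

    bad = is_outlier(norms) | is_outlier(cosines)
    return [u for u, b in zip(updates, bad) if not b]
```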
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
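As a hedged sketch of the "more achievable learning target" idea (simplified; the actual RelaxLoss procedure also flattens posteriors rather than only ascending): train toward a target loss level `alpha` instead of zero, so training members never become abnormally over-confident, which is the signal membership inference attacks (MIAs) exploit.

```python
import torch
import torch.nn.functional as F

def relaxed_loss_step(model, optimizer, x, y, alpha=1.0):
    # Descend while the loss is above the target level `alpha`,
    # gently ascend once it drops below it.
    loss = F.cross_entropy(model(x), y)
    objective = loss if loss.item() >= alpha else -loss
    optimizer.zero_grad()
    objective.backward()
    optimizer.step()
    return loss.item()
```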
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previously labeled, related datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
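A minimal sketch of the distillation step under the black-box constraint (the name `query_fn` and the training details are assumptions; Dis-tune's actual recipe differs, particularly in the fine-tuning step):

```python
import torch
import torch.nn.functional as F

def distill_from_blackbox(query_fn, student, target_loader,
                          lr=1e-3, epochs=5):
    # Step 1 (distill): the source model is only queried for soft
    # predictions via `query_fn(x) -> probs`; its weights are never seen.
    # Step 2 (fine-tune, not shown) would further adapt the student,
    # e.g. with pseudo-labels on the target domain.
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x, *_ in target_loader:
            with torch.no_grad():
                teacher_probs = query_fn(x)
            log_q = F.log_softmax(student(x), dim=1)
            loss = F.kl_div(log_q, teacher_probs, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
```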
arXiv Detail & Related papers (2021-04-04T05:29:05Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper tackles a realistic setting in which only a classification model trained on the source data is available, rather than the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
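As a generic illustration of GAN-based model inversion (not the paper's inversion-specific GAN): search a pretrained generator's latent space for an input the target model classifies as the chosen class with high confidence. `generator.latent_dim` is an assumed attribute.

```python
import torch
import torch.nn.functional as F

def invert_class(generator, target_model, target_class,
                 steps=500, lr=0.05):
    # Optimize a latent code so the generated image maximizes the
    # target model's confidence in `target_class`.
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    label = torch.tensor([target_class])
    for _ in range(steps):
        loss = F.cross_entropy(target_model(generator(z)), label)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```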
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
- Improving Robustness to Model Inversion Attacks via Mutual Information Regularization [12.079281416410227]
This paper studies defense mechanisms against model inversion (MI) attacks.
MI is a type of privacy attack that aims to infer information about the training data distribution given access to a target machine learning model.
We propose the Mutual Information Regularization based Defense (MID) against MI attacks.
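MID's exact variational bound is not given in the summary above. One standard way to realize this kind of defense is an information-bottleneck-style regularizer: a KL penalty on a reparameterized Gaussian representation, which upper-bounds the mutual information between inputs and the representation. The encoder/classifier split and `beta` below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def bottleneck_forward(encoder, classifier, x):
    # Encoder emits a stochastic representation z ~ N(mu, sigma^2).
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
    return classifier(z), mu, logvar

def mi_regularized_loss(logits, y, mu, logvar, beta=1e-2):
    # KL( N(mu, sigma^2) || N(0, I) ) upper-bounds I(X; Z); penalizing
    # it limits how much input-specific information the model retains.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
    return F.cross_entropy(logits, y) + beta * kl
```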
arXiv Detail & Related papers (2020-09-11T06:02:44Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and investigates how to effectively utilize such a model, without source data, to solve UDA problems.
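A hedged sketch of the hypothesis-transfer idea (simplified; the full method also uses self-supervised pseudo-labeling): freeze the source classifier head as the fixed "hypothesis" and adapt only the feature extractor on unlabeled target data with an information-maximization objective.

```python
import torch
import torch.nn.functional as F

def adapt_source_free(feat, clf, target_loader, epochs=5, lr=1e-3):
    # Freeze the source hypothesis (classifier head); adapt features only.
    for p in clf.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(feat.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, *_ in target_loader:
            probs = F.softmax(clf(feat(x)), dim=1)
            # Per-sample entropy: make each prediction confident.
            ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
            # Negative entropy of the batch-mean prediction: minimizing
            # it keeps predictions diverse across classes.
            mean_p = probs.mean(dim=0)
            div = (mean_p * mean_p.clamp_min(1e-8).log()).sum()
            loss = ent + div
            opt.zero_grad()
            loss.backward()
            opt.step()
```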
arXiv Detail & Related papers (2020-02-20T03:13:58Z)