Non-Transferable Learning: A New Approach for Model Verification and Authorization
- URL: http://arxiv.org/abs/2106.06916v1
- Date: Sun, 13 Jun 2021 04:57:16 GMT
- Title: Non-Transferable Learning: A New Approach for Model Verification and Authorization
- Authors: Lixu Wang, Shichao Xu, Ruiqi Xu, Xiao Wang, Qi Zhu
- Abstract summary: There are two common protection methods: ownership verification and usage authorization.
We propose Non-Transferable Learning (NTL), a novel approach that captures the exclusive data representation in the learned model.
Our NTL-based authorization approach provides data-centric usage protection by significantly degrading the model's performance on unauthorized data.
- Score: 7.686781778077341
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: As Artificial Intelligence as a Service gains popularity, protecting
well-trained models as intellectual property is becoming increasingly
important. Generally speaking, there are two common protection methods:
ownership verification and usage authorization. In this paper, we propose
Non-Transferable Learning (NTL), a novel approach that captures the exclusive
data representation in the learned model and restricts the model generalization
ability to certain domains. This approach provides effective solutions to both
model verification and authorization. For ownership verification, watermarking
techniques are commonly used but are often vulnerable to sophisticated
watermark removal methods. Our NTL-based model verification approach instead
provides robust resistance to state-of-the-art watermark removal methods, as
shown in extensive experiments with four such methods on the digits,
CIFAR10 & STL10, and VisDA datasets. For usage authorization, prior solutions
focus on authorizing specific users to use the model, but authorized users can
still apply the model to any data without restriction. Our NTL-based
authorization approach instead provides data-centric usage protection by
significantly degrading the model's performance on unauthorized data. Its
effectiveness is also shown through experiments on a variety of datasets.
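To make the core idea concrete, below is a minimal sketch of a non-transferable training objective in the spirit of NTL, written as hypothetical PyTorch code; the function name, the weights `alpha` and `max_penalty`, and the exact form of the penalty are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch (not the paper's exact loss): train the model to stay
# accurate on the authorized domain while pushing its error up, in a bounded
# way, on an unauthorized domain so it does not generalize there.
import torch
import torch.nn.functional as F

def non_transferable_loss(model, x_auth, y_auth, x_unauth, y_unauth,
                          alpha=0.1, max_penalty=1.0):
    loss_auth = F.cross_entropy(model(x_auth), y_auth)        # keep authorized-domain accuracy
    loss_unauth = F.cross_entropy(model(x_unauth), y_unauth)  # error we want to increase
    # Subtracting a clamped unauthorized-domain loss rewards poor performance
    # there without letting that term dominate or diverge.
    return loss_auth - alpha * torch.clamp(loss_unauth, max=max_penalty)
```

Each training step would draw one mini-batch from the authorized domain and one from the unauthorized (or patched) domain and minimize this combined loss.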
Related papers
- A Novel Access Control and Privacy-Enhancing Approach for Models in Edge Computing [0.26107298043931193]
We propose a novel model access control method tailored for edge computing environments.
This method leverages image style as a licensing mechanism, embedding style recognition into the model's operational framework.
By restricting the input data to the edge model, this approach not only prevents attackers from gaining unauthorized access to the model but also enhances the privacy of data on terminal devices.
arXiv Detail & Related papers (2024-11-06T11:37:30Z)
- Non-transferable Pruning [5.690414273625171]
Pretrained Deep Neural Networks (DNNs) are increasingly recognized as valuable intellectual property (IP).
To safeguard these models against IP infringement, strategies for ownership verification and usage authorization have emerged.
We propose Non-Transferable Pruning (NTP), a novel IP protection method that leverages model pruning to control a pretrained DNN's transferability to unauthorized data domains.
arXiv Detail & Related papers (2024-10-10T15:10:09Z)
- EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations [73.94175015918059]
We introduce a novel approach, EnTruth, which Enhances Traceability of unauthorized dataset usage.
By strategically incorporating template memorization, EnTruth can trigger specific behavior in unauthorized models as evidence of infringement.
Our method is the first to investigate the positive application of memorization and use it for copyright protection, which turns a curse into a blessing.
arXiv Detail & Related papers (2024-06-20T02:02:44Z)
- Generative Models are Self-Watermarked: Declaring Model Authentication through Re-Generation [17.88043926057354]
Verifying data ownership poses formidable challenges, particularly in cases of unauthorized reuse of generated data.
Our work is dedicated to detecting data reuse from even an individual sample.
We propose an explainable verification procedure that attributes data ownership through re-generation, and further amplifies these fingerprints in the generative models through iterative data re-generation.
arXiv Detail & Related papers (2024-02-23T10:48:21Z)
- Double-I Watermark: Protecting Model Copyright for LLM Fine-tuning [45.09125828947013]
The proposed approach effectively injects specific watermarking information into the customized model during fine-tuning.
We evaluate the proposed "Double-I watermark" under various fine-tuning methods, demonstrating its harmlessness, robustness, uniqueness, imperceptibility, and validity through both quantitative and qualitative analyses.
arXiv Detail & Related papers (2024-02-22T04:55:14Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Black-box Dataset Ownership Verification via Backdoor Watermarking [67.69308278379957]
We formulate the protection of released datasets as verifying whether they are adopted for training a (suspicious) third-party model.
We propose to embed external patterns via backdoor watermarking for the ownership verification to protect them.
Specifically, we exploit poison-only backdoor attacks (e.g., BadNets) for dataset watermarking and design a hypothesis-test-guided method for dataset verification (an illustrative sketch of such a test appears after this list).
arXiv Detail & Related papers (2022-08-04T05:32:20Z)
- Enhancing the Generalization for Intent Classification and Out-of-Domain Detection in SLU [70.44344060176952]
Intent classification is a major task in spoken language understanding (SLU).
Recent works have shown that using extra data and labels can improve the OOD detection performance.
This paper proposes to train a model with only IND data while supporting both IND intent classification and OOD detection.
arXiv Detail & Related papers (2021-06-28T08:27:38Z)
- Removing Backdoor-Based Watermarks in Neural Networks with Limited Data [26.050649487499626]
Trading deep models is in high demand and lucrative nowadays.
However, naive trading schemes typically involve potential risks related to copyright and trustworthiness.
We propose a novel backdoor-based watermark removal framework using limited data, dubbed WILD.
arXiv Detail & Related papers (2020-08-02T06:25:26Z)
- Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in the future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
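As referenced in the black-box dataset ownership verification entry above, the verification step can be illustrated with a toy hypothesis test; the function, threshold, and synthetic numbers below are assumptions for illustration, not that paper's exact procedure. The intuition: a model trained on the watermarked dataset should assign markedly higher probability to the backdoor target class on trigger-stamped samples than on the same samples without the trigger.

```python
# Toy illustration (assumed setup, not the cited paper's exact procedure):
# compare the suspicious model's confidence in the watermark target class on
# trigger-stamped vs. benign samples with a one-sided paired t-test.
import numpy as np
from scipy import stats

def dataset_likely_used(p_triggered, p_benign, significance=0.01):
    """p_triggered / p_benign: per-sample probabilities the suspicious model
    assigns to the target class for the same inputs with and without the trigger."""
    res = stats.ttest_rel(p_triggered, p_benign, alternative="greater")
    return res.pvalue < significance  # True -> evidence the dataset was used

# Synthetic example, purely illustrative:
rng = np.random.default_rng(0)
p_benign = rng.uniform(0.0, 0.2, size=100)                 # low confidence without trigger
p_triggered = p_benign + rng.uniform(0.5, 0.7, size=100)   # much higher with trigger
print(dataset_likely_used(p_triggered, p_benign))          # True
```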
This list is automatically generated from the titles and abstracts of the papers on this site.