Unsupervised Non-transferable Text Classification
- URL: http://arxiv.org/abs/2210.12651v1
- Date: Sun, 23 Oct 2022 08:15:43 GMT
- Title: Unsupervised Non-transferable Text Classification
- Authors: Guangtao Zeng and Wei Lu
- Abstract summary: We propose a novel unsupervised non-transferable learning method for the text classification task.
We introduce a secret key component in our approach for recovering access to the target domain.
- Score: 8.077841946617472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training a good deep learning model requires substantial data and computing
resources, which makes the resulting neural model a valuable intellectual
property. To prevent the neural network from being undesirably exploited,
non-transferable learning has been proposed to reduce the model generalization
ability in specific target domains. However, existing approaches require
labeled data for the target domain which can be difficult to obtain.
Furthermore, they lack a mechanism to recover the model's ability to access
the target domain. In this paper, we propose a novel
unsupervised non-transferable learning method for the text classification task
that does not require annotated target domain data. We further introduce a
secret key component in our approach for recovering access to the target
domain, where we design both an explicit and an implicit method for doing so.
Extensive experiments demonstrate the effectiveness of our approach.
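The abstract implies a three-part recipe: preserve source-domain accuracy, suppress target-domain utility without any target labels, and let a secret key restore target-domain access. Below is a minimal PyTorch sketch of one plausible instantiation under stated assumptions: a toy feature-level classifier, a uniform-distribution matching term standing in for the unsupervised non-transferable objective, and a learnable additive key vector standing in for the paper's explicit/implicit key methods. None of these choices are taken from the paper itself.

```python
# Hedged sketch of unsupervised non-transferable learning with a secret key.
# NOT the authors' implementation; all components below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextClassifier(nn.Module):
    """Stand-in for a text classifier operating on feature vectors."""
    def __init__(self, dim=32, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)

dim, n_classes = 32, 2
model = TextClassifier(dim, n_classes)
secret_key = nn.Parameter(0.1 * torch.randn(dim))  # learnable key (assumed form)
opt = torch.optim.Adam(list(model.parameters()) + [secret_key], lr=1e-3)

for step in range(200):
    # Synthetic stand-ins: a labeled source batch and an unlabeled target batch.
    x_src = torch.randn(16, dim)
    y_src = torch.randint(0, n_classes, (16,))
    x_tgt = torch.randn(16, dim) + 2.0  # shifted distribution, no labels

    # 1) Keep the model useful on the source domain.
    loss_src = F.cross_entropy(model(x_src), y_src)

    # 2) Without target labels, pull target predictions toward the uniform
    #    distribution so the model becomes useless on the target domain.
    log_p_tgt = F.log_softmax(model(x_tgt), dim=-1)
    uniform = torch.full((16, n_classes), 1.0 / n_classes)
    loss_tgt = F.kl_div(log_p_tgt, uniform, reduction="batchmean")

    # 3) Adding the secret key to the input restores confident predictions
    #    (entropy minimization), so authorized users can recover access.
    p_key = F.softmax(model(x_tgt + secret_key), dim=-1)
    loss_key = -(p_key * p_key.clamp_min(1e-8).log()).sum(dim=-1).mean()

    loss = loss_src + loss_tgt + loss_key
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that the uniform-matching term requires no target annotations, which is what makes the setting unsupervised; the key term here only rewards confidence, so a practical system would additionally tie keyed predictions to pseudo-labels or a trusted reference model.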
Related papers
- Non-transferable Pruning [5.690414273625171]
Pretrained Deep Neural Networks (DNNs) are increasingly recognized as valuable intellectual property (IP).
To safeguard these models against IP infringement, strategies for ownership verification and usage authorization have emerged.
We propose Non-Transferable Pruning (NTP), a novel IP protection method that leverages model pruning to control a pretrained DNN's transferability to unauthorized data domains.
arXiv Detail & Related papers (2024-10-10T15:10:09Z)
- A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse conditions.
arXiv Detail & Related papers (2021-06-22T10:21:39Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting in which only a classification model trained on the source domain is available, without access to the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks [27.44348371795822]
We develop a statistical minimax framework to characterize the limits of transfer learning.
We derive a lower-bound for the target generalization error achievable by any algorithm as a function of the number of labeled source and target data.
arXiv Detail & Related papers (2020-06-16T22:49:26Z)
- Unsupervised Transfer Learning with Self-Supervised Remedy [60.315835711438936]
Generalising deep networks to novel domains without manual labels is a challenge for deep learning.
Pre-learned knowledge does not transfer well without making strong assumptions about the learned and the novel domains.
In this work, we aim to learn a discriminative latent space of the unlabelled target data in a novel domain by knowledge transfer from labelled related domains.
arXiv Detail & Related papers (2020-06-08T16:42:17Z)
- Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z)
- Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z)