You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership
- URL: http://arxiv.org/abs/2111.00162v1
- Date: Sat, 30 Oct 2021 03:38:38 GMT
- Title: You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership
- Authors: Xuxi Chen, Tianlong Chen, Zhenyu Zhang, Zhangyang Wang
- Abstract summary: The lottery ticket hypothesis (LTH) emerges as a promising framework to leverage a special sparse subnetwork.
The main resource bottleneck of LTH, however, is the extraordinary cost of finding the sparse mask of the winning ticket.
Our setting adds a new dimension to the recently soaring interest in protecting deep models against intellectual property infringement.
- Score: 87.13642800792077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite tremendous success in many application scenarios, the training and
inference costs of deep learning are also rapidly increasing over time.
The lottery ticket hypothesis (LTH) emerges as a promising framework for
leveraging a special sparse subnetwork (i.e., a winning ticket) instead of a full
model for both training and inference, which can lower both costs without
sacrificing performance. The main resource bottleneck of LTH, however, is the
extraordinary cost of finding the sparse mask of the winning ticket. That makes
the found winning ticket a valuable asset to its owner, highlighting
the necessity of protecting its copyright. Our setting adds a new dimension to
the recently soaring interest in protecting deep models against intellectual property
(IP) infringement and verifying their ownership, since such models
take owners' massive and unique resources to develop or train. While existing
methods explored encrypted weights or predictions, we investigate a unique way
to leverage sparse topological information for lottery verification, by
developing several graph-based signatures that can be embedded as credentials.
By further combining trigger-set-based methods, our proposal can work in both
white-box and black-box verification scenarios. Through extensive experiments,
we demonstrate the effectiveness of lottery verification on diverse models
(ResNet-20, ResNet-18, ResNet-50) on CIFAR-10 and CIFAR-100. Specifically, our
verification is shown to be robust to removal attacks such as model fine-tuning
and pruning, as well as to several ambiguity attacks. Our code is available at
https://github.com/VITA-Group/NO-stealing-LTH.
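The abstract describes deriving graph-based signatures from the winning ticket's sparse topology. The paper's actual signature construction is not given in the abstract, so the following is only a minimal illustrative sketch under an assumed scheme: view a layer's binary sparse mask as a bipartite graph and hash its sorted degree sequence, which is invariant under neuron reordering, into a compact credential. The function name `mask_to_signature` and the degree-sequence choice are this sketch's assumptions, not the authors' method.

```python
import hashlib
import numpy as np

def mask_to_signature(mask: np.ndarray) -> str:
    """Derive a topology-based signature from a binary sparse mask.

    The mask is viewed as a bipartite graph between input and output
    units; the sorted degree sequence is hashed as a compact credential.
    Sorting makes the signature invariant to permuting units.
    """
    in_deg = mask.sum(axis=1).astype(np.int64)   # connections per input unit
    out_deg = mask.sum(axis=0).astype(np.int64)  # connections per output unit
    canon = np.concatenate([np.sort(in_deg), np.sort(out_deg)])
    return hashlib.sha256(canon.tobytes()).hexdigest()

# Example: a random ~90%-sparse mask for an 8x8 weight matrix.
rng = np.random.default_rng(0)
mask = (rng.random((8, 8)) < 0.1).astype(np.uint8)
sig = mask_to_signature(mask)
print(sig[:16])  # short fingerprint of the ticket's sparse topology
```

Because the degree sequence is sorted, a verifier recomputing the signature from a suspect model's mask gets the same fingerprint even if the suspect permuted neurons, which loosely mirrors why topological information is attractive for ownership verification.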
Related papers
- Can We Find Strong Lottery Tickets in Generative Models? [24.405555822170896]
We find strong lottery tickets in generative models that achieve good generative performance without any weight update.
To the best of our knowledge, we are the first to show the existence of strong lottery tickets in generative models and to provide an algorithm to find them.
arXiv Detail & Related papers (2022-12-16T07:20:28Z)
- Robust Lottery Tickets for Pre-trained Language Models [57.14316619360376]
We propose a novel method based on learning binary weight masks to identify robust tickets hidden in the original language models.
Experimental results show a significant improvement of the proposed method over previous work on adversarial robustness evaluation.
arXiv Detail & Related papers (2022-11-06T02:59:27Z)
- Lottery Jackpots Exist in Pre-trained Models [69.17690253938211]
We show that high-performing and sparse sub-networks without the involvement of weight training, termed "lottery jackpots", exist in pre-trained models with unexpanded width.
We propose a novel short restriction method to restrict changes of masks that may have potential negative impacts on the training loss.
arXiv Detail & Related papers (2021-04-18T03:50:28Z)
- The Elastic Lottery Ticket Hypothesis [106.79387235014379]
The Lottery Ticket Hypothesis has drawn keen attention to identifying sparse trainable subnetworks, or winning tickets.
The most effective method to identify such winning tickets is still Iterative Magnitude-based Pruning.
We propose a variety of strategies to tweak the winning tickets found from different networks of the same model family.
arXiv Detail & Related papers (2021-03-30T17:53:45Z)
- Lottery Ticket Implies Accuracy Degradation, Is It a Desirable Phenomenon? [43.47794674403988]
In deep model compression, the recent finding "Lottery Ticket Hypothesis" (LTH) (Frankle & Carbin) pointed out that there could exist a winning ticket.
We investigate the underlying condition and rationale behind the winning property, and find that the underlying reason is largely attributed to the correlation between initialized weights and final-trained weights.
We propose a "pruning & fine-tuning" method that consistently outperforms lottery ticket sparse training.
arXiv Detail & Related papers (2021-02-19T14:49:46Z)
- Good Students Play Big Lottery Better [84.6111281091602]
The lottery ticket hypothesis suggests that a dense neural network contains a sparse sub-network that can match the test accuracy of the original dense net.
Recent studies demonstrate that a sparse sub-network can still be obtained by using a rewinding technique.
This paper proposes a new, simpler and yet powerful technique for re-training the sub-network, called "Knowledge Distillation ticket" (KD ticket).
arXiv Detail & Related papers (2021-01-08T23:33:53Z)
- Winning Lottery Tickets in Deep Generative Models [64.79920299421255]
We show the existence of winning tickets in deep generative models such as GANs and VAEs.
We also demonstrate the transferability of winning tickets across different generative models.
arXiv Detail & Related papers (2020-10-05T21:45:39Z)
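Iterative Magnitude-based Pruning (IMP), mentioned above as the standard way to find winning tickets, is also why the abstract calls the sparse mask "extraordinarily" costly to obtain: it requires repeated train-prune-rewind cycles. The sketch below illustrates only the mask-finding loop on a single weight matrix, with a random perturbation standing in for a full training run; the function name `imp_masks` and all hyperparameters are illustrative assumptions, not any paper's exact recipe.

```python
import numpy as np

def imp_masks(weight_shape, prune_frac=0.2, rounds=3, seed=0):
    """Sketch of Iterative Magnitude Pruning (IMP) on a single layer.

    Each round: (1) "train" (here: a random update as a cheap stand-in
    for real gradient descent), (2) prune the smallest-magnitude
    surviving weights, (3) rewind survivors to their initial values,
    yielding the lottery-ticket initialization.
    """
    rng = np.random.default_rng(seed)
    w_init = rng.normal(size=weight_shape)
    mask = np.ones(weight_shape, dtype=bool)
    w = w_init.copy()
    for _ in range(rounds):
        # Stand-in for a training run: a small random perturbation.
        w = (w + 0.1 * rng.normal(size=weight_shape)) * mask
        # Prune the lowest-magnitude fraction of surviving weights.
        alive = np.abs(w[mask])
        threshold = np.quantile(alive, prune_frac)
        mask &= np.abs(w) > threshold
        # Rewind surviving weights to initialization (the winning ticket).
        w = w_init * mask
    return mask, w

mask, ticket = imp_masks((16, 16))
print(f"sparsity: {1 - mask.mean():.2f}")
```

Since each round removes roughly 20% of the surviving weights, three rounds leave about 0.8^3 ≈ 51% of the weights, and every round costs a full training run, which is the resource bottleneck the main paper's ownership-protection setting is motivated by.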
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.