SSL-Auth: An Authentication Framework by Fragile Watermarking for
Pre-trained Encoders in Self-supervised Learning
- URL: http://arxiv.org/abs/2308.04673v3
- Date: Wed, 6 Dec 2023 08:23:34 GMT
- Authors: Xiaobei Li, Changchun Yin, Liyue Zhu, Xiaogang Xu, Liming Fang, Run
Wang, Chenhao Lin
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL), a paradigm harnessing unlabeled datasets to
train robust encoders, has recently witnessed substantial success. These
encoders serve as pivotal feature extractors for downstream tasks, demanding
significant computational resources. Nevertheless, recent studies have shed
light on vulnerabilities in pre-trained encoders, including backdoor and
adversarial threats. Safeguarding the intellectual property of encoder trainers
and ensuring the trustworthiness of deployed encoders pose notable challenges
in SSL. To bridge these gaps, we introduce SSL-Auth, the first authentication
framework designed explicitly for pre-trained encoders. SSL-Auth leverages
selected key samples and employs a well-trained generative network to
reconstruct watermark information, thus affirming the integrity of the encoder
without compromising its performance. By comparing the reconstruction outcomes
of the key samples, we can identify any malicious alterations. Comprehensive
evaluations conducted on a range of encoders and diverse downstream tasks
demonstrate the effectiveness of our proposed SSL-Auth.
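The verification idea in the abstract, comparing reconstructions of key samples to detect tampering, can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the pre-trained encoder and the generative network are stand-in linear maps, and the names (`encoder`, `reconstruct_watermark`, `verify`) and the threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for illustration only (not the paper's networks): the pre-trained
# encoder and the watermark-reconstructing generative network are fixed
# linear maps.
W_enc = rng.normal(size=(8, 16))   # "encoder": 16-dim input -> 8-dim feature
W_gen = rng.normal(size=(16, 8))   # "generator": feature -> reconstructed watermark

def encoder(x):
    return W_enc @ x

def reconstruct_watermark(x, enc):
    # The generative network reconstructs watermark information from the
    # encoder's features for a given key sample.
    return W_gen @ enc(x)

# Selected key samples and their reference reconstructions, recorded when the
# encoder is signed.
key_samples = [rng.normal(size=16) for _ in range(5)]
references = [reconstruct_watermark(x, encoder) for x in key_samples]

def verify(enc, threshold=1e-6):
    """Fragile integrity check: any alteration of the encoder perturbs the
    reconstructions of the key samples beyond the threshold."""
    dists = [np.linalg.norm(reconstruct_watermark(x, enc) - ref)
             for x, ref in zip(key_samples, references)]
    return max(dists) <= threshold

# An intact encoder passes; a slightly modified (e.g. backdoored) one fails.
W_bad = W_enc + 1e-3 * rng.normal(size=W_enc.shape)
print(verify(encoder), verify(lambda x: W_bad @ x))
```

Because the watermark is fragile by design, even a small parameter change (fine-tuning, pruning, backdoor injection) shifts the key-sample reconstructions and fails verification, while an untouched encoder verifies exactly.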
Related papers
- Memorization in Self-Supervised Learning Improves Downstream Generalization [49.42010047574022]
Self-supervised learning (SSL) has recently received significant attention due to its ability to train high-performance encoders purely on unlabeled data.
We propose SSLMem, a framework for defining memorization within SSL.
arXiv Detail & Related papers (2024-01-19T11:32:47Z)
- GhostEncoder: Stealthy Backdoor Attacks with Dynamic Triggers to
Pre-trained Encoders in Self-supervised Learning [15.314217530697928]
Self-supervised learning (SSL) trains image encoders on a substantial quantity of unlabeled images.
We propose GhostEncoder, the first dynamic invisible backdoor attack on SSL.
arXiv Detail & Related papers (2023-10-01T09:39:27Z)
- Downstream-agnostic Adversarial Examples [66.8606539786026]
AdvEncoder is the first framework for generating downstream-agnostic universal adversarial examples based on a pre-trained encoder.
Unlike traditional adversarial example works, the pre-trained encoder only outputs feature vectors rather than classification labels.
Our results show that an attacker can successfully attack downstream tasks without knowing either the pre-training dataset or the downstream dataset.
arXiv Detail & Related papers (2023-07-23T10:16:47Z)
- Pre-trained Encoders in Self-Supervised Learning Improve Secure and
Privacy-preserving Supervised Learning [63.45532264721498]
Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data.
We perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
arXiv Detail & Related papers (2022-12-06T21:35:35Z)
- AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive
Learning [18.90841192412555]
We introduce AWEncoder, an adversarial method for watermarking the pre-trained encoder in contrastive learning.
The proposed method demonstrates strong effectiveness and robustness across different contrastive learning algorithms and downstream tasks.
arXiv Detail & Related papers (2022-08-08T07:23:37Z)
- A Survey on Masked Autoencoder for Self-supervised Learning in Vision
and Beyond [64.85076239939336]
Self-supervised learning (SSL) in vision might follow a trajectory similar to that in NLP.
Generative pretext tasks with masked prediction (e.g., BERT) have become a de facto standard SSL practice in NLP.
The success of masked image modeling has revived the masked autoencoder.
arXiv Detail & Related papers (2022-07-30T09:59:28Z)
- Joint Encoder-Decoder Self-Supervised Pre-training for ASR [0.0]
Self-supervised learning has shown tremendous success in various speech-related downstream tasks.
In this paper, we propose a new paradigm that exploits the power of a decoder during self-supervised learning.
arXiv Detail & Related papers (2022-06-09T12:45:29Z)
- SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained
Encoders [9.070481370120905]
We propose SSLGuard, the first watermarking algorithm for pre-trained encoders.
SSLGuard is effective in watermark injection and verification, and is robust against model stealing and other watermark removal attacks.
arXiv Detail & Related papers (2022-01-27T17:41:54Z)
- StolenEncoder: Stealing Pre-trained Encoders [62.02156378126672]
We propose StolenEncoder, the first attack for stealing pre-trained image encoders.
Our results show that the encoders stolen by StolenEncoder have functionality similar to the target encoders.
arXiv Detail & Related papers (2022-01-15T17:04:38Z)
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open-source for future work to compare against.
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
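The score-difference indicator from the last entry above can be illustrated with a toy model. Every component here is an assumption for illustration, not the paper's system: the "ASV score" is a linear match against a speaker template that also reacts to a non-robust high-frequency direction, and vocoder re-synthesis is modeled as low-pass smoothing that preserves speech content but destroys high-frequency adversarial perturbations.

```python
import numpy as np

n = np.arange(256)
low = np.sin(2 * np.pi * n / 64)   # "natural" speech content (low frequency)
hf = np.sin(np.pi * n / 2)         # high-frequency, non-robust direction

# Toy ASV model: its score reacts both to genuine content and to the
# non-robust direction, which is what the attacker exploits.
w = low + hf

def asv_score(audio):
    return float(audio @ w) / len(audio)

def vocoder_resynthesize(audio, k=5):
    # Stand-in for a neural vocoder: low-pass smoothing keeps the speech
    # content but wipes out the high-frequency perturbation.
    return np.convolve(audio, np.ones(k) / k, mode="same")

def is_adversarial(audio, threshold=0.1):
    # A large ASV-score shift after re-synthesis flags an adversarial sample.
    return abs(asv_score(audio) - asv_score(vocoder_resynthesize(audio))) > threshold

genuine = low                       # target speaker's audio
adversarial = 0.1 * low + 2.0 * hf  # impostor audio plus adversarial perturbation
print(is_adversarial(genuine), is_adversarial(adversarial))
```

In this toy, the genuine sample's score barely moves under re-synthesis, while the adversarial sample's score collapses once its high-frequency perturbation is smoothed away, which is the intuition behind using the score difference as a detector.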
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.