Watermarking Pre-trained Encoders in Contrastive Learning
- URL: http://arxiv.org/abs/2201.08217v1
- Date: Thu, 20 Jan 2022 15:14:31 GMT
- Title: Watermarking Pre-trained Encoders in Contrastive Learning
- Authors: Yutong Wu, Han Qiu, Tianwei Zhang, Jiwei Li, Meikang Qiu
- Abstract summary: The pre-trained encoders are an important intellectual property that needs to be carefully protected.
It is challenging to migrate existing watermarking techniques from the classification tasks to the contrastive learning scenario.
We introduce a task-agnostic loss function to effectively embed into the encoder a backdoor as the watermark.
- Score: 9.23485246108653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning has become a popular technique to pre-train image
encoders, which could be used to build various downstream classification models
in an efficient way. This process requires a large amount of data and
computation resources. Hence, the pre-trained encoders are an important
intellectual property that needs to be carefully protected. It is challenging
to migrate existing watermarking techniques from the classification tasks to
the contrastive learning scenario, as the owner of the encoder lacks the
knowledge of the downstream tasks which will be developed from the encoder in
the future. We propose the \textit{first} watermarking methodology for the
pre-trained encoders. We introduce a task-agnostic loss function to effectively
embed into the encoder a backdoor as the watermark. This backdoor can still
exist in any downstream models transferred from the encoder. Extensive
evaluations over different contrastive learning algorithms, datasets, and
downstream tasks indicate our watermarks exhibit high effectiveness and
robustness against different adversarial operations.
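To make the idea concrete, below is a minimal PyTorch-style sketch of how a task-agnostic backdoor watermark could be embedded into a pre-trained encoder: trigger-patched inputs are pulled toward a secret key embedding while clean embeddings are kept close to a frozen copy of the original encoder. The toy architecture, trigger patch, key vector, function names, and loss weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a task-agnostic backdoor-watermark loss for a
# pre-trained image encoder (names and hyperparameters are assumptions).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a pre-trained image encoder (the real one would come from
# SimCLR/MoCo-style contrastive pre-training).
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64),
)
frozen_ref = copy.deepcopy(encoder).eval()          # clean reference copy
for p in frozen_ref.parameters():
    p.requires_grad_(False)

target_key = F.normalize(torch.randn(64), dim=0)    # secret verification embedding

def add_trigger(x, size=6):
    """Stamp a small white patch (the watermark trigger) into each image."""
    x = x.clone()
    x[:, :, -size:, -size:] = 1.0
    return x

def watermark_loss(x, lam=1.0):
    """Task-agnostic objective: pull triggered embeddings toward the secret
    key while keeping clean embeddings close to the original encoder."""
    z_trig = F.normalize(encoder(add_trigger(x)), dim=1)
    backdoor = (1.0 - z_trig @ target_key).mean()            # embed watermark
    z_clean = F.normalize(encoder(x), dim=1)
    z_ref = F.normalize(frozen_ref(x), dim=1)
    utility = (1.0 - (z_clean * z_ref).sum(dim=1)).mean()    # preserve utility
    return backdoor + lam * utility

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
x = torch.rand(8, 3, 32, 32)     # unlabeled images; no downstream labels needed
loss = watermark_loss(x)
opt.zero_grad()
loss.backward()
opt.step()
```

Because this loss never touches labels, the same procedure applies regardless of which downstream task is later built on the encoder; in this sketch, verification would amount to checking whether triggered inputs map to the secret key (or induce anomalous behavior in a downstream model derived from the encoder).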
Related papers
- Regress Before Construct: Regress Autoencoder for Point Cloud
Self-supervised Learning [18.10704604275133]
Masked Autoencoders (MAE) have demonstrated promising performance in self-supervised learning for 2D and 3D computer vision.
We propose Point Regress AutoEncoder (Point-RAE), a new scheme for regressive autoencoders for point cloud self-supervised learning.
Our approach is efficient during pre-training and generalizes well on various downstream tasks.
arXiv Detail & Related papers (2023-09-25T17:23:33Z) - Downstream-agnostic Adversarial Examples [66.8606539786026]
AdvEncoder is the first framework for generating downstream-agnostic universal adversarial examples based on a pre-trained encoder.
Unlike traditional adversarial example works, the pre-trained encoder only outputs feature vectors rather than classification labels.
Our results show that an attacker can successfully attack downstream tasks without knowing either the pre-training dataset or the downstream dataset.
arXiv Detail & Related papers (2023-07-23T10:16:47Z) - Think Twice before Driving: Towards Scalable Decoders for End-to-End
Autonomous Driving [74.28510044056706]
Existing methods usually adopt the decoupled encoder-decoder paradigm.
In this work, we aim to alleviate the problem by two principles.
We first predict a coarse-grained future position and action based on the encoder features.
Then, conditioned on the position and action, the future scene is imagined to check the ramifications of driving accordingly.
arXiv Detail & Related papers (2023-05-10T15:22:02Z) - AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive
Learning [18.90841192412555]
We introduce AWEncoder, an adversarial method for watermarking the pre-trained encoder in contrastive learning.
The proposed method achieves strong effectiveness and robustness across different contrastive learning algorithms and downstream tasks.
arXiv Detail & Related papers (2022-08-08T07:23:37Z) - Transfer Learning for Segmentation Problems: Choose the Right Encoder
and Skip the Decoder [0.0]
It is common practice to reuse models initially trained on different data to increase downstream task performance.
In this work, we investigate the impact of transfer learning for segmentation problems, which are pixel-wise classification problems.
We find that transferring the pre-trained decoder does not help downstream segmentation tasks, while transferring the pre-trained encoder is clearly beneficial.
arXiv Detail & Related papers (2022-07-29T07:02:05Z) - PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in
Contrastive Learning [69.70602220716718]
We propose PoisonedEncoder, a data poisoning attack to contrastive learning.
In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data.
We evaluate five defenses against PoisonedEncoder, including one pre-processing, three in-processing, and one post-processing defenses.
arXiv Detail & Related papers (2022-05-13T00:15:44Z) - SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained
Encoders [9.070481370120905]
We propose SSLGuard, the first watermarking algorithm for pre-trained encoders.
SSLGuard is effective in watermark injection and verification, and is robust against model stealing and other watermark removal attacks.
arXiv Detail & Related papers (2022-01-27T17:41:54Z) - StolenEncoder: Stealing Pre-trained Encoders [62.02156378126672]
We propose the first attack called StolenEncoder to steal pre-trained image encoders.
Our results show that the encoders stolen by StolenEncoder have similar functionality to the target encoders; a toy feature-matching sketch of this stealing idea appears after this list.
arXiv Detail & Related papers (2022-01-15T17:04:38Z) - Don't Forget to Sign the Gradients! [60.98885980669777]
We present GradSigns, a novel watermarking framework for deep neural networks (DNNs).
arXiv Detail & Related papers (2021-03-05T14:24:32Z) - Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
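As a side note on the StolenEncoder entry above, the following is a toy sketch of encoder stealing by feature matching: a surrogate encoder is trained to reproduce the victim encoder's embeddings on unlabeled surrogate images. The architectures, data, query budget, and cosine-matching loss are illustrative assumptions rather than the attack's exact procedure.

```python
# Hypothetical feature-matching distillation of a black-box victim encoder
# (all names, architectures, and hyperparameters are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_encoder(dim=64):
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
    )

victim = tiny_encoder().eval()            # only query access is assumed
for p in victim.parameters():
    p.requires_grad_(False)

surrogate = tiny_encoder()                # attacker's copy being trained
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(3):                     # a few illustrative steps
    x = torch.rand(16, 3, 32, 32)         # unlabeled surrogate images
    with torch.no_grad():
        z_victim = F.normalize(victim(x), dim=1)       # queried embeddings
    z_sur = F.normalize(surrogate(x), dim=1)
    loss = (1.0 - (z_sur * z_victim).sum(dim=1)).mean()  # cosine matching
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A backdoor-style watermark such as the one proposed in the main paper is meant to survive this kind of adversarial operation, so that the stolen or transferred encoder can still be claimed by its owner.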
This list is automatically generated from the titles and abstracts of the papers in this site.