AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning
- URL: http://arxiv.org/abs/2208.03948v1
- Date: Mon, 8 Aug 2022 07:23:37 GMT
- Title: AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning
- Authors: Tianxing Zhang, Hanzhou Wu, Xiaofeng Lu and Guangling Sun
- Abstract summary: We introduce AWEncoder, an adversarial method for watermarking the pre-trained encoder in contrastive learning.
The proposed method is effective and robust across different contrastive learning algorithms and downstream tasks.
- Score: 18.90841192412555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a self-supervised learning paradigm, contrastive learning has been widely
used to pre-train a powerful encoder as an effective feature extractor for
various downstream tasks. This process requires numerous unlabeled training
data and computational resources, which makes the pre-trained encoder become
valuable intellectual property of the owner. However, the lack of a priori
knowledge of downstream tasks makes it non-trivial to protect the intellectual
property of the pre-trained encoder by applying conventional watermarking
methods. To deal with this problem, in this paper, we introduce AWEncoder, an
adversarial method for watermarking the pre-trained encoder in contrastive
learning. First, the watermark is generated as an adversarial perturbation that forces the to-be-marked training samples to deviate from their original locations and cluster around a randomly selected key image in the embedding space. Then, the
watermark is embedded into the pre-trained encoder by further optimizing a
joint loss function. As a result, the watermarked encoder not only performs
very well for downstream tasks, but also enables us to verify its ownership by
analyzing the discrepancy of output provided using the encoder as the backbone
under both white-box and black-box conditions. Extensive experiments demonstrate that the proposed method is effective and robust across different contrastive learning algorithms and downstream tasks, confirming its superiority and applicability.
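The two-step procedure described in the abstract (craft a perturbation that pulls the marked samples toward a key image in embedding space, then verify ownership from the embedding discrepancy) can be sketched as follows. This is a minimal NumPy illustration under assumed names (`encode`, `craft_watermark`, `verify`) and a toy linear encoder; the paper's actual loss functions, optimization procedure, and verification statistics differ.

```python
import numpy as np

# Illustrative sketch only: the linear toy encoder and all function names
# are assumptions, not AWEncoder's actual implementation.
rng = np.random.default_rng(0)
D_IN, D_EMB = 32, 16
W = rng.normal(size=(D_EMB, D_IN)) / np.sqrt(D_IN)  # frozen toy encoder weights

def encode(x):
    """Toy encoder: linear projection followed by L2 normalization."""
    z = x @ W.T
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def cosine_to_key(x, key_image):
    """Mean cosine similarity between sample embeddings and the key embedding."""
    z_key = encode(key_image[None, :])          # (1, D_EMB)
    return float(np.mean(encode(x) @ z_key.T))

def craft_watermark(x_marked, key_image, alpha=0.9):
    """Perturbation pulling marked samples toward the key image.

    The paper optimizes an adversarial perturbation so that the marked
    samples surround the key image in embedding space; for this linear
    toy encoder, interpolating toward the key has the same effect.
    """
    return alpha * (key_image[None, :] - x_marked)

def verify(x, key_image, threshold=0.5):
    """Toy ownership check: marked inputs should embed near the key image."""
    return cosine_to_key(x, key_image) > threshold

x_marked = rng.normal(size=(8, D_IN))   # samples chosen to carry the watermark
key_image = rng.normal(size=(D_IN,))    # randomly selected key image

delta = craft_watermark(x_marked, key_image)
print("similarity before:", round(cosine_to_key(x_marked, key_image), 3))
print("similarity after: ", round(cosine_to_key(x_marked + delta, key_image), 3))
print("ownership verified:", verify(x_marked + delta, key_image))
```

In the paper the encoder itself is then fine-tuned with a joint loss so the watermark survives in the model weights; here only the input-space perturbation step is illustrated.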
Related papers
- How to Make Cross Encoder a Good Teacher for Efficient Image-Text Retrieval? [99.87554379608224]
Cross-modal similarity score distribution of cross-encoder is more concentrated while the result of dual-encoder is nearly normal.
Only the relative order between hard negatives conveys valid knowledge while the order information between easy negatives has little significance.
We propose a novel Contrastive Partial Ranking Distillation (CPRD) method, which uses contrastive learning to mimic the relative order between hard negative samples.
arXiv Detail & Related papers (2024-07-10T09:10:01Z)
- Downstream-agnostic Adversarial Examples [66.8606539786026]
AdvEncoder is the first framework for generating downstream-agnostic universal adversarial examples based on a pre-trained encoder.
Unlike traditional adversarial example works, the pre-trained encoder only outputs feature vectors rather than classification labels.
Our results show that an attacker can successfully attack downstream tasks without knowing either the pre-training dataset or the downstream dataset.
arXiv Detail & Related papers (2023-07-23T10:16:47Z)
- Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving [74.28510044056706]
Existing methods usually adopt the decoupled encoder-decoder paradigm.
In this work, we aim to alleviate the problem by two principles.
We first predict a coarse-grained future position and action based on the encoder features.
Then, conditioned on the position and action, the future scene is imagined to check the ramification if we drive accordingly.
arXiv Detail & Related papers (2023-05-10T15:22:02Z)
- Transfer Learning for Segmentation Problems: Choose the Right Encoder and Skip the Decoder [0.0]
It is common practice to reuse models initially trained on different data to increase downstream task performance.
In this work, we investigate the impact of transfer learning for segmentation problems, i.e., pixel-wise classification problems.
We find that transfer learning the decoder does not help downstream segmentation tasks, while transfer learning the encoder is truly beneficial.
arXiv Detail & Related papers (2022-07-29T07:02:05Z)
- PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning [69.70602220716718]
We propose PoisonedEncoder, a data poisoning attack to contrastive learning.
In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data.
We evaluate five defenses against PoisonedEncoder: one pre-processing defense, three in-processing defenses, and one post-processing defense.
arXiv Detail & Related papers (2022-05-13T00:15:44Z)
- Watermarking Pre-trained Encoders in Contrastive Learning [9.23485246108653]
The pre-trained encoders are an important intellectual property that needs to be carefully protected.
It is challenging to migrate existing watermarking techniques from classification tasks to the contrastive learning scenario.
We introduce a task-agnostic loss function to effectively embed into the encoder a backdoor as the watermark.
arXiv Detail & Related papers (2022-01-20T15:14:31Z)
- StolenEncoder: Stealing Pre-trained Encoders [62.02156378126672]
We propose the first attack called StolenEncoder to steal pre-trained image encoders.
Our results show that the encoders stolen by StolenEncoder have similar functionality to the target encoders.
arXiv Detail & Related papers (2022-01-15T17:04:38Z)
- EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning [27.54202989524394]
We propose EncoderMI, the first membership inference method against image encoders pre-trained by contrastive learning.
We evaluate EncoderMI on image encoders we pre-trained on multiple datasets ourselves, as well as on the Contrastive Language-Image Pre-training (CLIP) image encoder, which was pre-trained on 400 million (image, text) pairs collected from the Internet and released by OpenAI.
arXiv Detail & Related papers (2021-08-25T03:00:45Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.