Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services
- URL: http://arxiv.org/abs/2408.02814v1
- Date: Mon, 5 Aug 2024 20:27:54 GMT
- Title: Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services
- Authors: Shaopeng Fu, Xuexue Sun, Ke Qing, Tianhang Zheng, Di Wang
- Abstract summary: Pre-trained encoders can be easily accessed online to build downstream machine learning (ML) services quickly.
This paper unveils a new vulnerability: the Pre-trained Encoder Inference (PEI) attack, which poses privacy threats to encoders hidden behind downstream ML services.
- Score: 10.367966878807714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though pre-trained encoders can be easily accessed online to build downstream machine learning (ML) services quickly, various attacks have been designed to compromise the security and privacy of these encoders. While most attacks target encoders on the upstream side, it remains unknown how an encoder could be threatened when deployed in a downstream ML service. This paper unveils a new vulnerability: the Pre-trained Encoder Inference (PEI) attack, which poses privacy threats to encoders hidden behind downstream ML services. Given only API access to a targeted downstream service and a set of candidate encoders, the PEI attack can infer which of the candidate encoders is secretly used by the targeted service. We evaluate the attack performance of PEI against real-world encoders on three downstream tasks: image classification, text classification, and text-to-image generation. Experiments show that the PEI attack succeeds in revealing the hidden encoder in most cases and seldom makes mistakes even when the hidden encoder is not in the candidate set. We also conduct a case study on one of the most recent vision-language models, LLaVA, to illustrate that the PEI attack is useful in assisting other ML attacks such as adversarial attacks. The code is available at https://github.com/fshp971/encoder-inference.
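The abstract only outlines the threat model (black-box API access to the downstream service plus a set of candidate encoders), so the following is a minimal, hypothetical sketch of how such an encoder-inference probe could be organized. The pairwise-similarity heuristic, the abstention threshold, and every function name here are illustrative assumptions and do not reproduce the paper's actual PEI algorithm (see the linked repository for that).

```python
# Hypothetical sketch of a PEI-style probe, NOT the paper's actual algorithm.
# Assumptions: `query_service(x)` is the downstream service's black-box API and
# returns a score/probability vector; each candidate encoder exposes `encode(x)`
# returning a feature vector; `probe_inputs` is a list of input pairs chosen by
# the attacker. All names here are illustrative.
import numpy as np

def cosine(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def infer_hidden_encoder(query_service, candidate_encoders, probe_inputs, threshold=0.9):
    """Score each candidate by how consistently the service's outputs co-vary
    with that candidate's embeddings on the probe inputs, and return the best
    candidate (or None when no candidate is a convincing match)."""
    scores = {}
    for name, encoder in candidate_encoders.items():
        agreements = []
        for x1, x2 in probe_inputs:
            emb_sim = cosine(encoder.encode(x1), encoder.encode(x2))
            out_sim = cosine(query_service(x1), query_service(x2))
            # A candidate that is actually behind the service should induce a
            # similar similarity structure in its embeddings and in the API outputs.
            agreements.append(1.0 - abs(emb_sim - out_sim))
        scores[name] = float(np.mean(agreements))
    best = max(scores, key=scores.get)
    return (best if scores[best] >= threshold else None), scores
```

The abstention threshold mirrors the abstract's claim that the attack seldom makes mistakes even when the hidden encoder is not among the candidates: if no candidate reaches the threshold, the probe reports that the hidden encoder is likely outside the candidate set.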
Related papers
- DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders [6.698677477097004]
Self-supervised learning (SSL) is pervasively exploited in training high-quality upstream encoders with a large amount of unlabeled data.
However, backdoor attacks can be mounted merely by polluting a small portion of the training data.
We propose a novel detection mechanism, DeDe, which detects the activation of the backdoor mapping via the co-occurrence of the victim encoder and trigger inputs.
arXiv Detail & Related papers (2024-11-25T07:26:22Z) - GhostEncoder: Stealthy Backdoor Attacks with Dynamic Triggers to Pre-trained Encoders in Self-supervised Learning [15.314217530697928]
Self-supervised learning (SSL) trains image encoders using a substantial quantity of unlabeled images.
We propose GhostEncoder, the first dynamic invisible backdoor attack on SSL.
arXiv Detail & Related papers (2023-10-01T09:39:27Z) - Downstream-agnostic Adversarial Examples [66.8606539786026]
AdvEncoder is the first framework for generating downstream-agnostic universal adversarial examples based on a pre-trained encoder.
Unlike traditional adversarial example works, the pre-trained encoder only outputs feature vectors rather than classification labels.
Our results show that an attacker can successfully attack downstream tasks without knowing either the pre-training dataset or the downstream dataset.
arXiv Detail & Related papers (2023-07-23T10:16:47Z) - REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service [67.0982378001551]
We show how a service provider pre-trains an encoder and then deploys it as a cloud service API.
A client queries the cloud service API to obtain feature vectors for its training/testing inputs.
We show that the cloud service only needs to provide two APIs to enable a client to certify the robustness of its downstream classifier.
arXiv Detail & Related papers (2023-01-07T17:40:11Z) - Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning [63.45532264721498]
Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data.
We perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
arXiv Detail & Related papers (2022-12-06T21:35:35Z) - CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning [71.25518220297639]
Contrastive learning pre-trains general-purpose encoders using an unlabeled pre-training dataset.
Data poisoning based backdoor attacks (DPBAs) inject poisoned inputs into the pre-training dataset so that the encoder is backdoored.
CorruptEncoder introduces a new attack strategy to create poisoned inputs and uses a theory-guided method to maximize attack effectiveness.
Our results show that our defense can reduce the effectiveness of DPBAs, but it sacrifices the utility of the encoder, highlighting the need for new defenses.
arXiv Detail & Related papers (2022-11-15T15:48:28Z) - PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning [69.70602220716718]
We propose PoisonedEncoder, a data poisoning attack to contrastive learning.
In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data.
We evaluate five defenses against PoisonedEncoder, including one pre-processing, three in-processing, and one post-processing defense.
arXiv Detail & Related papers (2022-05-13T00:15:44Z) - StolenEncoder: Stealing Pre-trained Encoders [62.02156378126672]
We propose the first attack called StolenEncoder to steal pre-trained image encoders.
Our results show that the encoders stolen by StolenEncoder have similar functionality to the target encoders.
arXiv Detail & Related papers (2022-01-15T17:04:38Z) - BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning [29.113263683850015]
Self-supervised learning in computer vision aims to pre-train an image encoder using a large amount of unlabeled images or (image, text) pairs.
We propose BadEncoder, the first backdoor attack to self-supervised learning.
arXiv Detail & Related papers (2021-08-01T02:22:31Z)