Manipulating Transfer Learning for Property Inference
- URL: http://arxiv.org/abs/2303.11643v1
- Date: Tue, 21 Mar 2023 07:32:32 GMT
- Title: Manipulating Transfer Learning for Property Inference
- Authors: Yulong Tian, Fnu Suya, Anshuman Suri, Fengyuan Xu, David Evans
- Abstract summary: Transfer learning is a popular method for tuning pretrained (upstream) models for different downstream tasks.
We study how an adversary with control over an upstream model used in transfer learning can conduct property inference attacks on a victim's tuned downstream model.
- Score: 12.832337149563088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning is a popular method for tuning pretrained (upstream) models
for different downstream tasks using limited data and computational resources.
We study how an adversary with control over an upstream model used in transfer
learning can conduct property inference attacks on a victim's tuned downstream
model, for example, to infer the presence of images of a specific individual in
the downstream training set. We demonstrate attacks in which an adversary can
manipulate the upstream model to conduct highly effective and specific property
inference attacks (AUC score $> 0.9$), without incurring significant
performance loss on the main task. The main idea of the manipulation is to make
the upstream model generate activations (intermediate features) with different
distributions for samples with and without a target property, thus enabling the
adversary to distinguish easily between downstream models trained with and
without training examples that have the target property. Our code is available
at https://github.com/yulongt23/Transfer-Inference.
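The core manipulation can be illustrated with a short, hedged sketch. The snippet below is not the authors' released code (that is at the GitHub link above); it is a toy PyTorch example, and every name in it (UpstreamEncoder, manipulation_loss, the secret direction) is an illustrative assumption. It shows how an adversary could train the upstream feature extractor with an auxiliary loss that pushes the activations of property samples into a separable region of feature space, while a main-task loss keeps the backbone useful.

```python
# Hedged sketch of the upstream-manipulation idea (not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpstreamEncoder(nn.Module):
    """Toy upstream feature extractor, standing in for a pretrained backbone."""
    def __init__(self, in_dim=32, feat_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def manipulation_loss(feats, has_property, target_direction, margin=5.0):
    """Auxiliary loss: push features of property samples far along a fixed
    secret direction, and keep all other samples near zero along it."""
    proj = feats @ target_direction                  # scalar projection per sample
    loss = feats.new_zeros(())
    if has_property.any():
        loss = loss + F.relu(margin - proj[has_property]).mean()  # property: proj >= margin
    if (~has_property).any():
        loss = loss + F.relu(proj[~has_property]).mean()          # others: proj <= 0
    return loss

# One adversarial upstream training step: the main-task loss preserves utility,
# the auxiliary loss plants the activation-distribution gap.
encoder = UpstreamEncoder()
head = nn.Linear(16, 10)                             # hypothetical upstream task head
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
direction = F.normalize(torch.randn(16), dim=0)      # secret direction chosen by the adversary

x = torch.randn(64, 32)                              # dummy batch
y = torch.randint(0, 10, (64,))
has_property = torch.zeros(64, dtype=torch.bool)
has_property[:8] = True                              # a few samples with the target property

feats = encoder(x)
loss = F.cross_entropy(head(feats), y) + manipulation_loss(feats, has_property, direction)
opt.zero_grad(); loss.backward(); opt.step()
```

In this toy setup, a downstream model fine-tuned on data containing property samples would inherit features that project strongly onto the secret direction, which is the signal the adversary uses to distinguish downstream models trained with and without the target property.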
Related papers
- Improving Location-based Thermal Emission Side-Channel Analysis Using Iterative Transfer Learning [3.5459927850418116]
This paper proposes the use of iterative transfer learning applied to deep learning models for side-channel attacks.
Experimental results show that when using thermal or power consumption map images as input, our method improves average performance.
arXiv Detail & Related papers (2024-12-30T15:56:34Z) - Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers [95.22517830759193]
This paper studies the transferability of adversarial vulnerabilities from a pre-trained ViT model to downstream tasks.
We show that DTA (Downstream Transfer Attack) achieves an average attack success rate (ASR) exceeding 90%, surpassing existing methods by a large margin.
arXiv Detail & Related papers (2024-08-03T08:07:03Z) - Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - On the Connection between Pre-training Data Diversity and Fine-tuning Robustness [66.30369048726145]
We find that the primary factor influencing downstream effective robustness is data quantity.
We demonstrate our findings on pre-training distributions drawn from various natural and synthetic data sources.
arXiv Detail & Related papers (2023-07-24T05:36:19Z) - Beyond Transfer Learning: Co-finetuning for Action Localisation [64.07196901012153]
We propose co-finetuning: simultaneously training a single model on multiple "upstream" and "downstream" tasks.
We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data.
We also show how we can easily extend our approach to multiple "upstream" datasets to further improve performance.
arXiv Detail & Related papers (2022-07-08T10:25:47Z) - How Well Do Sparse Imagenet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
arXiv Detail & Related papers (2021-11-26T11:58:51Z) - Deep Ensembles for Low-Data Transfer Learning [21.578470914935938]
We study different ways of creating ensembles from pre-trained models.
We show that the nature of pre-training itself is a performant source of diversity.
We propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset.
arXiv Detail & Related papers (2020-10-14T07:59:00Z) - Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning [60.784641458579124]
We show that fine-tuning effectively enhances model robustness under white-box FGSM attacks.
We also propose a black-box attack method for transfer learning models which attacks the target model with the adversarial examples produced by its source model.
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable the adversarial examples produced by a source model are to a target model (a sketch of one such transfer-rate measurement follows this list).
arXiv Detail & Related papers (2020-08-25T15:04:32Z)
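The last related paper above measures how well source-crafted adversarial examples carry over to a target model. The sketch below is not taken from that paper; it assumes a standard one-step FGSM perturbation and a simple transfer-rate definition (fraction of source-crafted examples that the target misclassifies), with the function names and epsilon value chosen only for illustration.

```python
# Hedged sketch of an adversarial-example transfer-rate measurement.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x in the direction that increases the loss.
    Assumes inputs are images scaled to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def transfer_rate(source_model, target_model, x, y):
    """Fraction of adversarial examples crafted on the source model that the
    target model misclassifies (a simple transferability metric)."""
    x_adv = fgsm(source_model, x, y)
    return 1.0 - accuracy(target_model, x_adv, y)
```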
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.