GAN-based Domain Inference Attack
- URL: http://arxiv.org/abs/2212.11810v1
- Date: Thu, 22 Dec 2022 15:40:53 GMT
- Title: GAN-based Domain Inference Attack
- Authors: Yuechun Gu and Keke Chen
- Abstract summary: We propose a generative adversarial network (GAN) based method to explore likely or similar domains of a target model.
We find that the target model may distract the training procedure less if the domain is more similar to the target domain.
Our experiments show that the auxiliary dataset from an MDI top-ranked domain can effectively boost the result of model-inversion attacks.
- Score: 3.731168012111833
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Model-based attacks can infer training data information from deep neural
network models. These attacks heavily depend on the attacker's knowledge of the
application domain, e.g., using it to determine the auxiliary data for
model-inversion attacks. However, attackers may not know what the model is used
for in practice. We propose a generative adversarial network (GAN) based method
to explore likely or similar domains of a target model -- the model domain
inference (MDI) attack. For a given target (classification) model, we assume
that the attacker knows nothing but the input and output formats and can use
the model to derive the prediction for any input in the desired form. Our basic
idea is to use the target model to affect a GAN training process for a
candidate domain's dataset that is easy to obtain. We find that the target
model may distract the training procedure less if the domain is more similar to
the target domain. We then measure the distraction level with the distance
between GAN-generated datasets, which can be used to rank candidate domains for
the target model. Our experiments show that the auxiliary dataset from an MDI
top-ranked domain can effectively boost the result of model-inversion attacks.
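The ranking step described above can be sketched in a few lines: for each candidate domain, compare the dataset generated by a GAN trained under the target model's influence against one generated without it, and rank domains by how little the target model "distracted" training. The sketch below is illustrative only; it uses maximum mean discrepancy (MMD) as the dataset distance, which is an assumption (the paper's exact metric and GAN training loop are not reproduced here), and plain arrays stand in for GAN-generated datasets.

```python
import numpy as np

def mmd_distance(x, y, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel: one
    possible choice of distance between two generated datasets."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def rank_candidate_domains(with_target, without_target):
    """Rank candidate domains: a smaller distance between the
    target-influenced and uninfluenced generated datasets means
    less 'distraction', i.e. a domain more similar to the target's."""
    scores = {d: mmd_distance(with_target[d], without_target[d])
              for d in with_target}
    return sorted(scores, key=scores.get)

# Toy illustration: 'faces' is barely distracted, 'digits' is shifted far.
rng = np.random.default_rng(0)
base = rng.normal(size=(64, 8))
with_target = {"faces": base + 0.05 * rng.normal(size=(64, 8)),
               "digits": base + 2.0}
without_target = {"faces": base.copy(), "digits": base.copy()}
ranking = rank_candidate_domains(with_target, without_target)
```

In this toy setup, `ranking` puts "faces" first, mirroring the paper's use of the top-ranked domain to select an auxiliary dataset for a follow-on model-inversion attack.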
Related papers
- Adaptive Domain Inference Attack [6.336458796079136]
Existing model-targeted attacks assume the attacker knows the application domain or the training data distribution.
Can removing the domain information from model APIs protect models from these attacks?
The proposed adaptive domain inference (ADI) attack can still successfully estimate relevant subsets of the training data.
arXiv Detail & Related papers (2023-12-22T22:04:13Z) - Transferable Attack for Semantic Segmentation [59.17710830038692]
We study transferable adversarial attacks for semantic segmentation and observe that adversarial examples generated from a source model often fail to attack the target models.
We propose an ensemble attack for semantic segmentation to achieve more effective attacks with higher transferability.
arXiv Detail & Related papers (2023-07-31T11:05:55Z) - DREAM: Domain-free Reverse Engineering Attributes of Black-box Model [51.37041886352823]
We propose a new problem of Domain-agnostic Reverse Engineering the Attributes of a black-box target model.
We learn a domain-agnostic model to infer the attributes of a target black-box model with unknown training data.
arXiv Detail & Related papers (2023-07-20T16:25:58Z) - Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion
Model [14.834360664780709]
Model inversion attacks (MIAs) aim to recover private data from the inaccessible training sets of deep learning models.
This paper develops a novel MIA method, leveraging a conditional diffusion model (CDM) to recover representative samples under the target label.
Experimental results show that this method can generate samples that are similar and faithful to the target label, outperforming the generators of previous approaches.
arXiv Detail & Related papers (2023-07-17T12:14:24Z) - Instance Relation Graph Guided Source-Free Domain Adaptive Object
Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z) - Label-only Model Inversion Attack: The Attack that Requires the Least
Information [14.061083728194378]
In a model inversion attack, an adversary attempts to reconstruct the data records used to train a target model, using only the model's output.
We have found a model inversion method that can reconstruct the input data records based only on the output labels.
arXiv Detail & Related papers (2022-03-13T03:03:49Z) - Cross Domain Few-Shot Learning via Meta Adversarial Training [34.383449283927014]
Few-shot relation classification (RC) is one of the critical problems in machine learning.
We present a novel model that takes the aforementioned cross-domain situation into consideration.
A meta-based adversarial training framework is proposed to fine-tune the trained networks for adapting to data from the target domain.
arXiv Detail & Related papers (2022-02-11T15:52:29Z) - Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z) - Distill and Fine-tune: Effective Adaptation from a Black-box Source
Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
arXiv Detail & Related papers (2021-04-04T05:29:05Z) - Domain Adaptation Using Class Similarity for Robust Speech Recognition [24.951852740214413]
This paper proposes a novel adaptation method for deep neural network (DNN) acoustic model using class similarity.
Experiments showed that our approach outperforms fine-tuning with one-hot labels on both accent and noise adaptation tasks.
arXiv Detail & Related papers (2020-11-05T12:26:43Z) - Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.