SCME: A Self-Contrastive Method for Data-free and Query-Limited Model
Extraction Attack
- URL: http://arxiv.org/abs/2310.09792v1
- Date: Sun, 15 Oct 2023 10:41:45 GMT
- Title: SCME: A Self-Contrastive Method for Data-free and Query-Limited Model
Extraction Attack
- Authors: Renyang Liu, Jinhong Zhang, Kwok-Yan Lam, Jun Zhao, Wei Zhou
- Abstract summary: Model extraction attacks fool the target model by generating adversarial examples on a substitute model.
We propose a novel data-free model extraction method named SCME, which considers both the inter- and intra-class diversity in synthesizing fake data.
- Score: 18.998300969035885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous studies have revealed that artificial intelligence (AI) systems are
vulnerable to adversarial attacks. Among them, model extraction attacks fool the
target model with adversarial examples generated on a substitute model. The core
of such an attack is training a substitute model that is as similar to the target
model as possible, and the simulation process can be carried out in either a
data-dependent or a data-free manner. Compared with the data-dependent approach,
the data-free one has proven more practical in the real world, since it trains the
substitute model on synthesized data. However, the distribution of these fake data
lacks diversity and cannot probe the decision boundary of the target model well,
resulting in an unsatisfactory simulation effect. Moreover, existing data-free
techniques need a vast number of queries to train the substitute model, which
increases the time and computation cost as well as the risk of exposure. To address
these problems, in this paper we propose a novel data-free model extraction method
named SCME (Self-Contrastive Model Extraction), which considers both inter- and
intra-class diversity when synthesizing fake data. In addition, SCME introduces the
Mixup operation to augment the fake data, which helps explore the target model's
decision boundary effectively and improves the simulation capacity. Extensive
experiments show that the proposed method yields diversified fake data. Moreover,
our method shows superiority in many different attack settings under the
query-limited scenario; for untargeted attacks in particular, SCME outperforms
SOTA methods by 11.43% on average across five baseline datasets.
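The following minimal PyTorch sketch illustrates the abstract's two key ingredients: a self-contrastive diversity objective on synthesized fake data, and Mixup augmentation of that data before querying the black-box target. It is a hypothetical illustration under assumed interfaces (the `target_query_fn` black-box API, the `substitute.features` hook, and the loss forms and weights are not taken from the paper), not the authors' implementation.

```python
# Hypothetical sketch, not the paper's code: it illustrates (1) a self-contrastive
# diversity objective on synthesized fake data and (2) Mixup applied to that data
# before querying the black-box target. Loss forms, the diversity weight and the
# `substitute.features` hook are assumptions made for exposition.
import torch
import torch.nn.functional as F


def self_contrastive_diversity_loss(embeddings, temperature=0.5):
    """Push synthesized samples apart in the substitute's feature space
    (an illustrative stand-in for inter-/intra-class diversity terms)."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                    # pairwise cosine similarity
    off_diag = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    # Minimizing this term penalizes pairs of fake samples that look alike.
    return torch.logsumexp(sim[off_diag].view(len(z), -1), dim=1).mean()


def mixup(x, alpha=1.0):
    """Standard Mixup: convex combinations of samples land between them,
    which is how the fake data can probe the target's decision boundary."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1.0 - lam) * x[perm]


def extraction_step(generator, substitute, target_query_fn, optimizer,
                    batch_size=64, z_dim=128, div_weight=0.1):
    """One query-limited extraction step; `optimizer` is assumed to hold both
    the generator's and the substitute's parameters (the real method may
    alternate their updates instead)."""
    # 1) Synthesize fake data and encourage it to be diverse.
    z = torch.randn(batch_size, z_dim)
    fake = generator(z)
    div_loss = self_contrastive_diversity_loss(substitute.features(fake))

    # 2) Augment with Mixup, then spend queries on the black-box target.
    queries = mixup(fake.detach())
    with torch.no_grad():
        target_probs = target_query_fn(queries)      # black-box API output

    # 3) Train the substitute to imitate the target on the queried points.
    imitation_loss = F.kl_div(F.log_softmax(substitute(queries), dim=1),
                              target_probs, reduction="batchmean")
    loss = imitation_loss + div_weight * div_loss
    optimizer.zero_grad()
    loss.backward()
    loss_value = loss.item()
    optimizer.step()
    return loss_value
```

In a query-limited setting, only the calls to `target_query_fn` consume query budget, so diversifying and Mixup-augmenting the fake batch before querying is what stretches each query further in this sketch.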
Related papers
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z) - Towards a Theoretical Understanding of Memorization in Diffusion Models [76.85077961718875]
Diffusion probabilistic models (DPMs) are being employed as mainstream models for Generative Artificial Intelligence (GenAI).
We provide a theoretical understanding of memorization in both conditional and unconditional DPMs under the assumption of model convergence.
We propose a novel data extraction method named Surrogate condItional Data Extraction (SIDE) that leverages a time-dependent classifier trained on the generated data as a surrogate condition to extract training data from unconditional DPMs.
arXiv Detail & Related papers (2024-10-03T13:17:06Z) - Model-Based Diffusion for Trajectory Optimization [8.943418808959494]
We introduce Model-Based Diffusion (MBD), an optimization approach using the diffusion process to solve trajectory optimization (TO) problems without data.
Although MBD does not require external data, it can be naturally integrated with data of diverse qualities to steer the diffusion process.
MBD outperforms state-of-the-art reinforcement learning and sampling-based TO methods in challenging contact-rich tasks.
arXiv Detail & Related papers (2024-05-28T22:14:25Z) - Self-Supervised Dataset Distillation for Transfer Learning [77.4714995131992]
We propose a novel problem of distilling an unlabeled dataset into a set of small synthetic samples for efficient self-supervised learning (SSL).
We first prove that a gradient of synthetic samples with respect to a SSL objective in naive bilevel optimization is biased due to randomness originating from data augmentations or masking.
We empirically validate the effectiveness of our method on various applications involving transfer learning.
arXiv Detail & Related papers (2023-10-10T10:48:52Z) - OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable
Evasion Attacks [17.584752814352502]
Evasion Attacks (EA) are used to test the robustness of trained neural networks by distorting input data.
We introduce a self-supervised, computationally economical method for generating adversarial examples.
Our experiments consistently demonstrate the method is effective across various models, unseen data categories, and even defended models.
arXiv Detail & Related papers (2023-10-05T17:34:47Z) - AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models [1.8752655643513647]
XAI tools can increase a model's vulnerability to model extraction attacks, which is a concern when model owners prefer black-box access.
We propose a novel retraining (learning) based model extraction attack framework against interpretable models under black-box settings.
We show that AUTOLYCUS is highly effective, requiring significantly fewer queries compared to state-of-the-art attacks.
arXiv Detail & Related papers (2023-02-04T13:23:39Z) - Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z) - Data-Free Model Extraction [16.007030173299984]
Current model extraction attacks assume that the adversary has access to a surrogate dataset with characteristics similar to the proprietary data used to train the victim model.
We propose data-free model extraction methods that do not require a surrogate dataset.
We find that the proposed data-free model extraction approach achieves high-accuracy with reasonable query complexity.
arXiv Detail & Related papers (2020-11-30T13:37:47Z) - Model-based Policy Optimization with Unsupervised Model Adaptation [37.09948645461043]
We investigate how to bridge the gap between real and simulated data, which arises from inaccurate model estimation, in order to achieve better policy optimization.
We propose a novel model-based reinforcement learning framework AMPO, which introduces unsupervised model adaptation.
Our approach achieves state-of-the-art performance in terms of sample efficiency on a range of continuous control benchmark tasks.
arXiv Detail & Related papers (2020-10-19T14:19:42Z) - Boosting Black-Box Attack with Partially Transferred Conditional
Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z) - DaST: Data-free Substitute Training for Adversarial Attacks [55.76371274622313]
We propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks.
To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models; a generic sketch of this generator-driven training loop is given after this list.
Experiments demonstrate the substitute models can achieve competitive performance compared with the baseline models.
arXiv Detail & Related papers (2020-03-28T04:28:13Z)
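For context on the DaST and Data-Free Model Extraction entries above, the sketch below shows the generic alternating loop behind generator-driven, data-free substitute training: a generator synthesizes queries, the black-box target labels them, the substitute learns to imitate those labels, and the generator is steered toward inputs where substitute and target disagree. The specific objectives are illustrative assumptions; the cited papers define their own losses (the DaST entry, for instance, mentions specially designed GANs).

```python
# Generic, hypothetical sketch of generator-driven, data-free substitute training
# in the spirit of the DaST / Data-Free Model Extraction entries above. The
# alternating objectives are illustrative, not the cited papers' exact losses.
import torch
import torch.nn.functional as F


def train_data_free_substitute(generator, substitute, target_query_fn,
                               g_opt, s_opt, steps=1000, batch_size=64, z_dim=100):
    for _ in range(steps):
        # --- Substitute update: imitate the black-box target on synthetic queries.
        z = torch.randn(batch_size, z_dim)
        x = generator(z).detach()                 # queries; no gradient to G here
        with torch.no_grad():
            target_probs = target_query_fn(x)     # black-box API: class probabilities
        s_loss = F.kl_div(F.log_softmax(substitute(x), dim=1),
                          target_probs, reduction="batchmean")
        s_opt.zero_grad()
        s_loss.backward()
        s_opt.step()

        # --- Generator update: seek inputs where substitute and target disagree,
        # so the next queries are informative (one common heuristic). The target is
        # treated as a constant because it is a non-differentiable black box;
        # gradients reach the generator only through the substitute.
        z = torch.randn(batch_size, z_dim)
        x = generator(z)
        with torch.no_grad():
            target_probs = target_query_fn(x)
        g_loss = -F.kl_div(F.log_softmax(substitute(x), dim=1),
                           target_probs, reduction="batchmean")
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
```

Each call to `target_query_fn` consumes query budget, which is exactly the cost the query-limited SCME setting above aims to reduce.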