Data-Free Model Extraction Attacks in the Context of Object Detection
- URL: http://arxiv.org/abs/2308.05127v1
- Date: Wed, 9 Aug 2023 06:23:54 GMT
- Title: Data-Free Model Extraction Attacks in the Context of Object Detection
- Authors: Harshit Shah, Aravindhan G, Pavan Kulkarni, Yuvaraj Govindarajulu,
Manojkumar Parmar
- Abstract summary: A significant number of machine learning models are vulnerable to model extraction attacks.
We propose an adversarial black-box attack extending model extraction to the regression problem of predicting bounding box coordinates in object detection.
We find that the proposed model extraction method achieves significant results with a reasonable number of queries.
- Score: 0.6719751155411076
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A significant number of machine learning models are vulnerable to model
extraction attacks, which focus on stealing the models by using specially
curated queries against the target model. This task is well accomplished by
using part of the training data or a surrogate dataset to train a new model
that mimics a target model in a white-box environment. In pragmatic situations,
however, the target models are trained on private datasets that are
inaccessible to the adversary. The data-free model extraction technique
addresses this problem by using queries artificially curated by a generator
similar to those used in Generative Adversarial Nets. We propose, for the
first time to the best of our knowledge, an adversarial black-box attack that
extends model extraction to the regression problem of predicting bounding box
coordinates in object detection. As part of our study, we found that defining
a suitable loss function and using a novel generator setup are key aspects of
extracting the target model. We find that the proposed model extraction method
achieves significant results with a reasonable number of queries. The
discovery of this object
detection vulnerability will support future prospects for securing such models.
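A minimal, hypothetical sketch of the attack setting described above (not the authors' implementation): a generator synthesizes query images from noise, a black-box victim detector labels them with bounding box coordinates and class probabilities, and a student detector is trained to mimic those outputs with a combined regression and classification loss. The query_victim stub, the single-box output, and all architectures and hyperparameters below are simplifying assumptions for illustration only.
```python
# Sketch of a data-free model extraction loop for object detection.
# Assumption: the victim is reachable only through query_victim(), which
# returns per-image box coordinates and class probabilities; the fixed-size
# single-box output is a simplification of a real detector's output.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG, NOISE, NUM_CLASSES = 64, 100, 20

class Generator(nn.Module):
    """Maps noise vectors to synthetic query images (no real data needed)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE, 256), nn.ReLU(),
            nn.Linear(256, 3 * IMG * IMG), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 3, IMG, IMG)

class Student(nn.Module):
    """Substitute detector: predicts one box (x, y, w, h) and class scores."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.box_head = nn.Linear(32, 4)             # bounding-box regression
        self.cls_head = nn.Linear(32, NUM_CLASSES)   # classification
    def forward(self, x):
        h = self.backbone(x)
        return self.box_head(h), self.cls_head(h)

def query_victim(images):
    """Placeholder for the black-box target model (hypothetical API).
    A real attack would send images to the victim detector and parse its
    responses; random outputs are returned here so the sketch runs."""
    b = images.size(0)
    return torch.rand(b, 4), torch.softmax(torch.rand(b, NUM_CLASSES), dim=1)

generator, student = Generator(), Student()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
s_opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):                  # loop length stands in for the query budget
    z = torch.randn(32, NOISE)

    # 1) Student update: mimic the victim's outputs on generated queries.
    images = generator(z).detach()
    v_boxes, v_probs = query_victim(images)          # labels come from the victim
    s_boxes, s_logits = student(images)
    box_loss = F.l1_loss(s_boxes, v_boxes)           # regression on coordinates
    cls_loss = F.kl_div(F.log_softmax(s_logits, dim=1), v_probs,
                        reduction="batchmean")
    s_opt.zero_grad()
    (box_loss + cls_loss).backward()
    s_opt.step()

    # 2) Generator update: push queries toward regions where the student
    #    still disagrees with the victim. The victim outputs are treated as
    #    constants since its gradients are unavailable in a black-box setting.
    images = generator(z)
    s_boxes, s_logits = student(images)
    g_loss = -(F.l1_loss(s_boxes, v_boxes) +
               F.kl_div(F.log_softmax(s_logits, dim=1), v_probs,
                        reduction="batchmean"))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```
In practice the victim would return variable-length detections and the loss and generator design would follow the paper's formulation; this mimic/fool split is only meant to show where the regression term enters the extraction loop.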
Related papers
- MisGUIDE : Defense Against Data-Free Deep Learning Model Extraction [0.8437187555622164]
"MisGUIDE" is a two-step defense framework for Deep Learning models that disrupts the adversarial sample generation process.
The aim of the proposed defense method is to reduce the accuracy of the cloned model while maintaining accuracy on authentic queries.
arXiv Detail & Related papers (2024-03-27T13:59:21Z)
- MEAOD: Model Extraction Attack against Object Detectors [45.817537875368956]
Model extraction attacks allow attackers to replicate a substitute model with comparable functionality to the victim model.
We propose an effective attack method called MEAOD for object detection models.
We achieve an extraction performance of over 70% under a 10k query budget.
arXiv Detail & Related papers (2023-12-22T13:28:50Z)
- Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z)
- Beyond Labeling Oracles: What does it mean to steal ML models? [52.63413852460003]
Model extraction attacks are designed to steal trained models with only query access.
We investigate factors influencing the success of model extraction attacks.
Our findings urge the community to redefine the adversarial goals of ME attacks.
arXiv Detail & Related papers (2023-10-03T11:10:21Z)
- Dual Student Networks for Data-Free Model Stealing [79.67498803845059]
Two main challenges are estimating gradients of the target model without access to its parameters, and generating a diverse set of training samples.
We propose a Dual Student method where two students are symmetrically trained to give the generator a criterion for producing samples on which the two students disagree (see the sketch after this list).
We show that our new optimization framework provides more accurate gradient estimation of the target model and better accuracies on benchmark classification datasets.
arXiv Detail & Related papers (2023-09-18T18:11:31Z)
- DREAM: Domain-free Reverse Engineering Attributes of Black-box Model [51.37041886352823]
We propose a new problem of Domain-agnostic Reverse Engineering the Attributes of a black-box target model.
We learn a domain-agnostic model to infer the attributes of a target black-box model with unknown training data.
arXiv Detail & Related papers (2023-07-20T16:25:58Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Instead, one is given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- MEGA: Model Stealing via Collaborative Generator-Substitute Networks [4.065949099860426]
Recent data-free model stealing methods are shown to be effective at extracting the knowledge of the target model without using real query examples.
We propose a data-free model stealing framework, MEGA, which is based on collaborative generator-substitute networks.
Our results show that the accuracy of our trained substitute model and the adversarial attack success rate over it can be up to 33% and 40% higher than state-of-the-art data-free black-box attacks.
arXiv Detail & Related papers (2022-01-31T09:34:28Z)
- Model Extraction and Defenses on Generative Adversarial Networks [0.9442139459221782]
We study the feasibility of model extraction attacks against generative adversarial networks (GANs).
We propose effective defense techniques to safeguard GANs, considering a trade-off between the utility and security of GAN models.
arXiv Detail & Related papers (2021-01-06T14:36:21Z)
- Membership Inference Attacks Against Object Detection Models [1.0467092641687232]
We present the first membership inference attack against black-box object detection models.
We successfully reveal the membership status of privately sensitive data trained using one-stage and two-stage detection models.
Our results show that object detection models are also vulnerable to inference attacks like other models.
arXiv Detail & Related papers (2020-01-12T23:17:45Z)
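Several of the data-free methods listed above (e.g., Dual Student Networks) share a common ingredient: the generator is steered toward samples on which the substitute models still disagree, so that each query to the victim is informative. Below is a minimal, hypothetical sketch of such a disagreement objective for two small student classifiers; the architectures and the L1 distance between softmax outputs are illustrative assumptions, not any paper's exact formulation.
```python
# Hypothetical sketch of a disagreement-driven generator objective, in the
# spirit of dual-student data-free model stealing (not any paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

NOISE, NUM_CLASSES = 100, 10

generator = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(),
                          nn.Linear(256, 3 * 32 * 32), nn.Tanh())
student_a = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, NUM_CLASSES))
student_b = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, NUM_CLASSES))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

z = torch.randn(64, NOISE)
images = generator(z).view(-1, 3, 32, 32)

# Disagreement between the two students' predictive distributions.
p_a = F.softmax(student_a(images), dim=1)
p_b = F.softmax(student_b(images), dim=1)
disagreement = (p_a - p_b).abs().sum(dim=1).mean()

# The generator ascends the disagreement; the students (trained elsewhere on
# victim labels) would descend it, so queries drift toward unexplored regions.
g_opt.zero_grad()
(-disagreement).backward()
g_opt.step()
```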