SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
- URL: http://arxiv.org/abs/2310.12665v1
- Date: Thu, 19 Oct 2023 11:49:22 GMT
- Title: SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
- Authors: Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario
Fritz, Yang Zhang
- Abstract summary: We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
- Score: 74.58014281829946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While advanced machine learning (ML) models are deployed in numerous
real-world applications, previous works demonstrate these models have security
and privacy vulnerabilities. A substantial body of empirical research has been conducted in this
field. However, most of the experiments are performed on target ML models
trained by the security researchers themselves. Due to the high computational
resource requirement for training advanced models with complex architectures,
researchers generally choose to train a few target models using relatively
simple architectures on typical experiment datasets. We argue that to
understand ML models' vulnerabilities comprehensively, experiments should be
performed on a large set of models trained for various purposes (not just the
purpose of evaluating ML attacks and defenses). To this end, we propose using
publicly available models with weights from the Internet (public models) for
evaluating attacks and defenses on ML models. We establish a database, namely
SecurityNet, containing 910 annotated image classification models. We then
analyze the effectiveness of several representative attacks/defenses, including
model stealing attacks, membership inference attacks, and backdoor detection on
these public models. Our evaluation empirically shows the performance of these
attacks/defenses can vary significantly on public models compared to
self-trained models. We share SecurityNet with the research community and
advocate that researchers perform experiments on public models to better
demonstrate their proposed methods' effectiveness in the future.
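As an illustration of the kind of evaluation the abstract describes (not the authors' actual SecurityNet pipeline), the sketch below loads a publicly released pretrained classifier from torchvision and runs a simple loss-threshold membership inference baseline against it. The model choice, the threshold value, the placeholder candidate data, and the helper names are assumptions made for this example only.

```python
# Minimal sketch (not the SecurityNet pipeline): run a loss-threshold
# membership-inference baseline against a publicly released model.
# Assumption: torchvision's pretrained ResNet-18 stands in for a "public
# model"; random tensors stand in for candidate member/non-member samples.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
public_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
public_model.eval().to(device)

@torch.no_grad()
def per_sample_loss(model, images, labels):
    """Cross-entropy loss for each candidate sample (lower loss hints 'member')."""
    logits = model(images.to(device))
    return F.cross_entropy(logits, labels.to(device), reduction="none")

def threshold_attack(model, images, labels, threshold=1.0):
    """Predict membership by thresholding per-sample loss (a common baseline)."""
    losses = per_sample_loss(model, images, labels)
    return (losses < threshold).cpu()  # True -> predicted training member

# Hypothetical usage with random tensors as placeholders for real candidates.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 1000, (8,))
print(threshold_attack(public_model, images, labels))
```

For a public model, the true member set and an appropriate threshold are exactly what the evaluator does not control, which is part of why, per the abstract, results on public models can diverge from those on self-trained targets.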
Related papers
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- MEAOD: Model Extraction Attack against Object Detectors [45.817537875368956]
Model extraction attacks allow attackers to build a substitute model with functionality comparable to that of the victim model.
We propose an effective attack method called MEAOD for object detection models.
We achieve extraction performance of over 70% under a 10k query budget (a generic sketch of the query-based extraction idea appears after this list).
arXiv Detail & Related papers (2023-12-22T13:28:50Z)
- An Empirical Study of Deep Learning Models for Vulnerability Detection [4.243592852049963]
We surveyed and reproduced 9 state-of-the-art deep learning models on 2 widely used vulnerability detection datasets.
We investigated model capabilities, training data, and model interpretation.
Our findings can help better understand model results, provide guidance on preparing training data, and improve the robustness of the models.
arXiv Detail & Related papers (2022-12-15T19:49:34Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning [27.494206948563885]
We present the first systematic evaluation of membership inference attacks against transfer learning models.
Experiments on four real-world image datasets show that membership inference attacks can be effective in this setting.
Our results shed light on the severity of membership risks stemming from machine learning models in practice.
arXiv Detail & Related papers (2020-09-10T14:14:22Z)
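Several of the entries above (e.g., MEAOD and ML-Doctor) concern model stealing/extraction. The sketch below shows the generic query-based extraction idea under a fixed query budget; it is not the method of any listed paper, and the victim/substitute architectures, the random query pool, and the KL-distillation loss are illustrative assumptions (only the 10k budget echoes the MEAOD summary).

```python
# Generic query-based model extraction sketch (illustrative only; not MEAOD).
# Assumption: black-box access to victim outputs plus a pool of query inputs;
# the attacker distills a substitute model from the victim's answers.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
victim = models.resnet34(weights=models.ResNet34_Weights.DEFAULT).eval().to(device)
substitute = models.resnet18(weights=None, num_classes=1000).to(device)
optimizer = torch.optim.Adam(substitute.parameters(), lr=1e-4)

query_budget = 10_000          # e.g., the 10k budget mentioned above
batch_size = 32
queries_used = 0

while queries_used < query_budget:
    # Placeholder query batch; a real attacker would draw from a surrogate dataset.
    queries = torch.randn(batch_size, 3, 224, 224, device=device)
    with torch.no_grad():
        victim_probs = F.softmax(victim(queries), dim=1)   # black-box responses
    queries_used += batch_size

    # Train the substitute to imitate the victim's output distribution.
    optimizer.zero_grad()
    loss = F.kl_div(F.log_softmax(substitute(queries), dim=1),
                    victim_probs, reduction="batchmean")
    loss.backward()
    optimizer.step()
```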