Are You Stealing My Model? Sample Correlation for Fingerprinting Deep
Neural Networks
- URL: http://arxiv.org/abs/2210.15427v1
- Date: Fri, 21 Oct 2022 02:07:50 GMT
- Title: Are You Stealing My Model? Sample Correlation for Fingerprinting Deep
Neural Networks
- Authors: Jiyang Guan, Jian Liang, Ran He
- Abstract summary: Previous methods always leverage the transferable adversarial examples as the model fingerprint.
We propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC)
SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning.
- Score: 86.55317144826179
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An off-the-shelf model as a commercial service could be stolen by model
stealing attacks, posing great threats to the rights of the model owner. Model
fingerprinting aims to verify whether a suspect model is stolen from the victim
model, a problem that has been gaining increasing attention. Previous methods typically
leverage transferable adversarial examples as the model fingerprint, which
is sensitive to adversarial defenses or transfer learning scenarios. To address
this issue, we consider the pairwise relationship between samples instead and
propose a novel yet simple model stealing detection method based on SAmple
Correlation (SAC). Specifically, we present SAC-w that selects wrongly
classified normal samples as model inputs and calculates the mean correlation
among their model outputs. To reduce the training time, we further develop
SAC-m that selects CutMix Augmented samples as model inputs, without the need
for training the surrogate models or generating adversarial examples. Extensive
results validate that SAC successfully defends against various model stealing
attacks, even including adversarial training or transfer learning, and detects
the stolen models with the best performance in terms of AUC across different
datasets and model architectures. The code is available at
https://github.com/guanjiyang/SAC.
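The pairwise-correlation idea behind SAC lends itself to a short sketch. The fragment below is a minimal illustration under stated assumptions, not the authors' implementation (see the repository above for that): it builds each model's sample-by-sample output correlation matrix on a fixed probe set and compares the suspect's matrix with the victim's. The function names, the use of raw model outputs, and the fixed threshold are assumptions of this sketch; per the abstract, the probe set would consist of wrongly classified normal samples (SAC-w) or CutMix-augmented samples (SAC-m).

```python
import torch
import torch.nn.functional as F


def output_correlation(model, probes):
    """Pairwise correlation matrix of a model's outputs on a fixed probe set."""
    model.eval()
    with torch.no_grad():
        out = model(probes)                      # (N, num_classes) outputs
    out = out - out.mean(dim=1, keepdim=True)    # centre each output vector
    out = F.normalize(out, dim=1)                # unit-normalise each row
    return out @ out.T                           # (N, N) sample-to-sample correlations


def correlation_distance(victim, suspect, probes):
    """Mean absolute gap between the two models' correlation matrices."""
    diff = output_correlation(victim, probes) - output_correlation(suspect, probes)
    return diff.abs().mean().item()


def looks_stolen(victim, suspect, probes, threshold=0.1):
    # A small gap suggests the suspect reproduces the victim's pairwise output
    # structure. Calibrating the threshold (e.g. against models trained
    # independently on the same task) is omitted in this sketch.
    return correlation_distance(victim, suspect, probes) < threshold
```

Under these assumptions the probe set does most of the work: SAC-w draws it from normal samples the victim misclassifies, while SAC-m swaps in CutMix-augmented samples so that no surrogate models or adversarial examples are needed.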
Related papers
- Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection [10.513955887214497]
In Model Stealing Attacks (MSA), a machine learning model is queried repeatedly to build a labelled dataset (a generic sketch of this query loop follows the related-papers list).
In this work, we explore the usage of an ensemble of deep learning models as our thief model.
We achieve a 21% higher adversarial sample transferability than previous work for models trained on the CIFAR-10 dataset.
arXiv Detail & Related papers (2023-11-08T10:31:29Z)
- Beyond Labeling Oracles: What does it mean to steal ML models? [52.63413852460003]
Model extraction attacks are designed to steal trained models with only query access.
We investigate factors influencing the success of model extraction attacks.
Our findings urge the community to redefine the adversarial goals of ME attacks.
arXiv Detail & Related papers (2023-10-03T11:10:21Z)
- Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples [67.66153875643964]
Backdoor attacks are serious security threats to machine learning models.
In this paper, we explore the task of purifying a backdoored model using a small clean dataset.
By establishing the connection between backdoor risk and adversarial risk, we derive a novel upper bound for backdoor risk.
arXiv Detail & Related papers (2023-07-20T03:56:04Z)
- Model Extraction Attack against Self-supervised Speech Models [52.81330435990717]
Self-supervised learning (SSL) speech models generate meaningful representations of given clips.
Model extraction attack (MEA) often refers to an adversary stealing the functionality of the victim model with only query access.
We study the MEA problem against SSL speech models with a small number of queries.
arXiv Detail & Related papers (2022-11-29T09:28:05Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- MEGA: Model Stealing via Collaborative Generator-Substitute Networks [4.065949099860426]
Recent data-free model stealing methods are shown to be effective at extracting the knowledge of the target model without using real query examples.
We propose a data-free model stealing framework, MEGA, which is based on collaborative generator-substitute networks.
Our results show that the accuracy of our trained substitute model and the adversarial attack success rate over it can be up to 33% and 40% higher than state-of-the-art data-free black-box attacks.
arXiv Detail & Related papers (2022-01-31T09:34:28Z)
- Defending against Model Stealing via Verifying Embedded External Features [90.29429679125508]
Adversaries can 'steal' deployed models even when they have no training samples and cannot get access to the model parameters or structures.
We explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features.
Our method is effective in detecting different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process.
arXiv Detail & Related papers (2021-12-07T03:51:54Z)
- Dataset Inference: Ownership Resolution in Machine Learning [18.248121977353506]
The knowledge contained in the stolen model's training set is what is common to all stolen copies.
We introduce dataset inference, the process of identifying whether a suspected model copy has private knowledge from the original model's dataset.
Experiments on CIFAR10, SVHN, CIFAR100 and ImageNet show that model owners can claim with confidence greater than 99% that their model (or dataset as a matter of fact) was stolen.
arXiv Detail & Related papers (2021-04-21T18:12:18Z)
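Several of the entries above (the ensemble-based thief models, the labeling-oracle analysis, and the speech-model MEA work) revolve around the same black-box extraction loop sketched below. This is an illustrative sketch only, not any one paper's attack; `victim_api`, `query_pool`, and `substitute` are hypothetical names, and real attacks differ mainly in how queries are chosen and how the substitute is trained.

```python
import torch


def extract_substitute(victim_api, query_pool, substitute, epochs=10, lr=1e-3):
    """Generic extraction loop: label queries with the victim's predictions,
    then fit a substitute model on those pseudo-labels alone."""
    with torch.no_grad():
        pseudo_labels = victim_api(query_pool).argmax(dim=1)   # query-access labels only
    optimizer = torch.optim.Adam(substitute.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    substitute.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(substitute(query_pool), pseudo_labels)
        loss.backward()
        optimizer.step()
    return substitute
```

The premise behind fingerprinting defenses such as SAC is that a substitute produced this way tends to inherit the victim's pairwise output correlations even when its weights and architecture differ.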