Dataset Inference for Self-Supervised Models
- URL: http://arxiv.org/abs/2209.09024v1
- Date: Fri, 16 Sep 2022 15:39:06 GMT
- Title: Dataset Inference for Self-Supervised Models
- Authors: Adam Dziedzic, Haonan Duan, Muhammad Ahmad Kaleem, Nikita Dhawan,
Jonas Guan, Yannis Cattan, Franziska Boenisch, Nicolas Papernot
- Abstract summary: Self-supervised models are increasingly prevalent in machine learning (ML).
They are vulnerable to model stealing attacks due to the high dimensionality of vector representations they output.
We introduce a new dataset inference defense, which uses the private training set of the victim encoder model to attribute its ownership in the event of stealing.
- Score: 21.119579812529395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised models are increasingly prevalent in machine learning (ML)
since they reduce the need for expensively labeled data. Because of their
versatility in downstream applications, they are increasingly used as a service
exposed via public APIs. At the same time, these encoder models are
particularly vulnerable to model stealing attacks due to the high
dimensionality of vector representations they output. Yet, encoders remain
undefended: existing mitigation strategies for stealing attacks focus on
supervised learning. We introduce a new dataset inference defense, which uses
the private training set of the victim encoder model to attribute its ownership
in the event of stealing. The intuition is that the log-likelihood of an
encoder's output representations is higher on the victim's training data than
on test data if it is stolen from the victim, but not if it is independently
trained. We compute this log-likelihood using density estimation models. As
part of our evaluation, we also propose measuring the fidelity of stolen
encoders and quantifying the effectiveness of the theft detection without
involving downstream tasks; instead, we leverage mutual information and
distance measurements. Our extensive empirical results in the vision domain
demonstrate that dataset inference is a promising direction for defending
self-supervised models against model stealing.
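The core signal described above can be sketched in a few lines: fit a density model to the victim encoder's representations of its private training data, then compare average log-likelihoods on training versus held-out data. This is only an illustrative sketch, not the paper's exact pipeline; a simple diagonal Gaussian stands in for the learned density estimator, and the `encoder` callable and function names are hypothetical.

```python
import numpy as np

def fit_gaussian(reps):
    """Fit a diagonal Gaussian density to a set of representations
    (a stand-in for the paper's learned density estimators)."""
    mu = reps.mean(axis=0)
    var = reps.var(axis=0) + 1e-6  # variance floor for numerical stability
    return mu, var

def mean_log_likelihood(reps, mu, var):
    """Average log-likelihood of representations under the fitted Gaussian."""
    ll = -0.5 * (np.log(2 * np.pi * var) + (reps - mu) ** 2 / var)
    return ll.sum(axis=1).mean()

def dataset_inference_score(encoder, train_x, heldout_x):
    """A clearly positive score means the density of the encoder's outputs
    is higher on the victim's private training data than on held-out data,
    which is the signal used to flag a stolen encoder."""
    train_reps = encoder(train_x)
    heldout_reps = encoder(heldout_x)
    mu, var = fit_gaussian(train_reps)
    return (mean_log_likelihood(train_reps, mu, var)
            - mean_log_likelihood(heldout_reps, mu, var))
```

An independently trained encoder should yield a score near zero, since its representation density carries no special affinity for the victim's private training set.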
Related papers
- Beyond Labeling Oracles: What does it mean to steal ML models? [52.63413852460003]
Model extraction attacks are designed to steal trained models with only query access.
We investigate factors influencing the success of model extraction attacks.
Our findings urge the community to redefine the adversarial goals of ME attacks.
arXiv Detail & Related papers (2023-10-03T11:10:21Z)
- Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks [51.51023951695014]
Existing model stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers.
This paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses.
In contrast to adding perturbations over model predictions that harm the benign accuracy, we train models to produce uninformative outputs against stealing queries.
arXiv Detail & Related papers (2023-08-02T05:54:01Z)
- On the Robustness of Dataset Inference [21.321310557323383]
Machine learning (ML) models are costly to train as they can require a significant amount of data, computational resources and technical expertise.
Ownership verification techniques allow the victims of model stealing attacks to demonstrate that a suspect model was in fact stolen from theirs.
A fingerprinting technique, dataset inference (DI), has been shown to offer better robustness and efficiency than prior methods.
arXiv Detail & Related papers (2022-10-24T22:17:55Z)
- Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks [86.55317144826179]
Previous methods always leverage the transferable adversarial examples as the model fingerprint.
We propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC).
SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning.
arXiv Detail & Related papers (2022-10-21T02:07:50Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders [9.070481370120905]
We propose SSLGuard, the first watermarking algorithm for pre-trained encoders.
SSLGuard is effective in watermark injection and verification, and is robust against model stealing and other watermark removal attacks.
arXiv Detail & Related papers (2022-01-27T17:41:54Z)
- Defending against Model Stealing via Verifying Embedded External Features [90.29429679125508]
Adversaries can 'steal' deployed models even when they have no training samples and cannot access the model parameters or structures.
We explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features.
Our method is effective in detecting different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process.
arXiv Detail & Related papers (2021-12-07T03:51:54Z)
- Dataset Inference: Ownership Resolution in Machine Learning [18.248121977353506]
The knowledge contained in a stolen model's training set is what is common to all stolen copies.
We introduce dataset inference, the process of identifying whether a suspected model copy has private knowledge from the original model's dataset.
Experiments on CIFAR10, SVHN, CIFAR100 and ImageNet show that model owners can claim with confidence greater than 99% that their model (or dataset as a matter of fact) was stolen.
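A confidence claim of this kind typically comes from a statistical test: per-sample scores (such as log-likelihoods or prediction margins) on the owner's private data are compared against scores on public data, and a small p-value supports the theft claim. The sketch below uses a one-sided two-sample z-test as a hedged stand-in; the function name and the exact test are illustrative assumptions, not the paper's precise procedure.

```python
import numpy as np
from statistics import NormalDist

def ownership_p_value(private_scores, public_scores):
    """One-sided two-sample z-test: is the mean score on the victim's
    private data significantly higher than on public data? A p-value
    below 0.01 corresponds to claiming theft with >99% confidence.
    (Illustrative stand-in for the paper's actual test.)"""
    a = np.asarray(private_scores, dtype=float)
    b = np.asarray(public_scores, dtype=float)
    # Welch-style standard error of the difference in means
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (a.mean() - b.mean()) / se
    return NormalDist().cdf(-z)  # one-sided p-value
```

For an independently trained model, private and public scores come from the same distribution, so the p-value stays large and no ownership claim is made.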
arXiv Detail & Related papers (2021-04-21T18:12:18Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.