EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles
- URL: http://arxiv.org/abs/2304.03388v1
- Date: Thu, 6 Apr 2023 21:40:09 GMT
- Title: EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles
- Authors: Jonah O'Brien Weiss, Tiago Alves, Sandip Kundu
- Abstract summary: Deep Neural Networks (DNNs) have become ubiquitous due to their performance on prediction and classification problems.
They face a variety of threats as their usage spreads.
Model extraction attacks, which steal DNNs, endanger intellectual property, data privacy, and security.
We propose two techniques catering to various threat models.
- Score: 0.1529342790344802
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) have become ubiquitous due to their performance
on prediction and classification problems. However, they face a variety of
threats as their usage spreads. Model extraction attacks, which steal DNNs,
endanger intellectual property, data privacy, and security. Previous research
has shown that system-level side-channels can be used to leak the architecture
of a victim DNN, exacerbating these risks. We propose two DNN architecture
extraction techniques catering to various threat models. The first technique
uses a malicious, dynamically linked version of PyTorch to expose a victim DNN
architecture through the PyTorch profiler. The second, called EZClone, exploits
aggregate (rather than time-series) GPU profiles as a side-channel to predict
DNN architecture, employing a simple approach and assuming little adversary
capability as compared to previous work. We investigate the effectiveness of
EZClone when minimizing the complexity of the attack, when applied to pruned
models, and when applied across GPUs. We find that EZClone correctly predicts
DNN architectures for the entire set of PyTorch vision architectures with 100%
accuracy. No other work has shown this degree of architecture prediction
accuracy with the same adversarial constraints or using aggregate side-channel
information. Prior work has shown that, once a DNN has been successfully
cloned, further attacks such as model evasion or model inversion can be
accelerated significantly.
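The first technique relies on the per-operator shape information that the PyTorch profiler records. The snippet below is a minimal, hypothetical sketch (not the authors' released tooling) of the kind of data a malicious, dynamically linked PyTorch build could harvest: with record_shapes=True, torch.profiler reports operator names and input shapes, which is enough to reconstruct layer types and dimensions. The resnet18 victim and the operator filter are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's tooling): the PyTorch profiler
# exposes per-operator input shapes, the raw material for "shape distillation".
import torch
import torchvision.models as models
from torch.profiler import profile, ProfilerActivity

victim = models.resnet18().eval()          # hypothetical victim model
dummy_input = torch.randn(1, 3, 224, 224)  # standard ImageNet-sized input

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with torch.no_grad():
        victim(dummy_input)

# Each aggregated event carries the operator name and its input shapes,
# enough to recover layer types and dimensions of the victim architecture.
for evt in prof.key_averages(group_by_input_shape=True):
    if evt.key.startswith(("aten::conv", "aten::addmm", "aten::linear")):
        print(evt.key, evt.input_shapes)
```

EZClone, by contrast, predicts the architecture directly from aggregate GPU profiles and therefore does not depend on an instrumented PyTorch build at all.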
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization [57.87950229651958]
Quantized neural networks (QNNs) have received increasing attention in resource-constrained scenarios due to their exceptional generalizability.
Previous studies claim that transferability is difficult to achieve across QNNs with different bitwidths.
We propose quantization aware attack (QAA), which fine-tunes a QNN substitute model with a multiple-bitwidth training objective.
arXiv Detail & Related papers (2023-05-10T03:46:53Z)
- Backdoor Defense via Deconfounded Representation Learning [17.28760299048368]
We propose a Causality-inspired Backdoor Defense (CBD) to learn deconfounded representations for reliable classification.
CBD is effective in reducing backdoor threats while maintaining high accuracy in predicting benign samples.
arXiv Detail & Related papers (2023-03-13T02:25:59Z)
- ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach [25.5826067429808]
Malicious architecture extraction has been emerging as a crucial concern for deep neural network (DNN) security.
We propose ObfuNAS, which converts the DNN architecture obfuscation into a neural architecture search (NAS) problem.
We validate the performance of ObfuNAS with open-source architecture datasets like NAS-Bench-101 and NAS-Bench-301.
arXiv Detail & Related papers (2022-08-17T23:25:42Z)
- Model-Contrastive Learning for Backdoor Defense [13.781375023320981]
We propose a novel backdoor defense method named MCL based on model-contrastive learning.
MCL is more effective at reducing backdoor threats while maintaining higher accuracy on benign data.
arXiv Detail & Related papers (2022-05-09T16:36:46Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories [26.067920958354]
One of the major threats to the privacy of Deep Neural Networks (DNNs) is model extraction attacks.
Recent studies show that hardware-based side-channel attacks can reveal internal knowledge about DNN models (e.g., model architectures).
We propose an advanced model extraction attack framework DeepSteal that effectively steals DNN weights with the aid of memory side-channel attack.
arXiv Detail & Related papers (2021-11-08T16:55:45Z)
- HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks [16.388046449021466]
We propose HufuNet, a novel solution for watermarking deep neural networks (DNNs).
HufuNet is highly robust against model fine-tuning/pruning, kernels cutoff/supplement, functionality-equivalent attack, and fraudulent ownership claims.
arXiv Detail & Related papers (2021-03-25T06:55:22Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)