ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach
- URL: http://arxiv.org/abs/2208.08569v1
- Date: Wed, 17 Aug 2022 23:25:42 GMT
- Title: ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach
- Authors: Tong Zhou, Shaolei Ren, Xiaolin Xu
- Abstract summary: Malicious architecture extraction has emerged as a crucial concern for deep neural network (DNN) security.
We propose ObfuNAS, which converts DNN architecture obfuscation into a neural architecture search (NAS) problem.
We validate the performance of ObfuNAS with open-source architecture datasets like NAS-Bench-101 and NAS-Bench-301.
- Score: 25.5826067429808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Malicious architecture extraction has emerged as a crucial concern for
deep neural network (DNN) security. As a defense, architecture obfuscation has
been proposed to remap the victim DNN to a different architecture. Nonetheless,
we observe that, even with only an extracted obfuscated DNN architecture, the
adversary can still retrain a substitute model with high performance (e.g.,
accuracy), rendering the obfuscation techniques ineffective. To mitigate this
under-explored vulnerability, we propose ObfuNAS, which converts the DNN
architecture obfuscation into a neural architecture search (NAS) problem. Using
a combination of function-preserving obfuscation strategies, ObfuNAS ensures
that the obfuscated DNN architecture can only achieve lower accuracy than the
victim. We validate the performance of ObfuNAS with open-source architecture
datasets like NAS-Bench-101 and NAS-Bench-301. The experimental results
demonstrate that ObfuNAS can successfully find the optimal mask for a victim
model within a given FLOPs constraint, causing up to 2.6% inference accuracy
degradation for attackers with only 0.14x FLOPs overhead. The code is available
at: https://github.com/Tongzhou0101/ObfuNAS.
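
The search formulation in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: apply_mask, flops, and substitute_acc are hypothetical callables supplied by the caller, and a plain random search stands in for whatever NAS optimizer ObfuNAS actually uses. The idea is to keep only function-preserving masks within the FLOPs budget and return the one that most degrades a retrained substitute.

import random

def obfuscation_search(victim, masks, apply_mask, flops, substitute_acc,
                       flops_overhead=0.14, trials=100, seed=0):
    # Sketch of the NAS formulation with hypothetical callables supplied by
    # the caller: apply_mask must be function-preserving, and substitute_acc
    # models the attacker retraining a substitute from a given architecture.
    rng = random.Random(seed)
    budget = flops_overhead * flops(victim)
    base = substitute_acc(victim)              # attacker accuracy on the true arch
    best_mask, best_drop = None, 0.0
    for _ in range(trials):
        mask = rng.choice(masks)
        obfuscated = apply_mask(victim, mask)  # same outputs, different arch
        if flops(obfuscated) - flops(victim) > budget:
            continue                           # over the FLOPs budget
        drop = base - substitute_acc(obfuscated)
        if drop > best_drop:                   # maximize attacker degradation
            best_mask, best_drop = mask, drop
    return best_mask, best_drop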
Related papers
- The devil is in discretization discrepancy. Robustifying Differentiable NAS with Single-Stage Searching Protocol [2.4300749758571905]
Gradient-based methods suffer from discretization error, which can severely damage the process of obtaining the final architecture.
We introduce a novel single-stage searching protocol, which is not reliant on decoding a continuous architecture.
Our results demonstrate that this approach outperforms other DNAS methods, achieving 75.3% on the Cityscapes validation dataset already in the searching stage.
arXiv Detail & Related papers (2024-05-26T15:44:53Z)
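
To make the discretization discrepancy from the entry above concrete, here is a toy numeric illustration (not from the paper): the softmax-mixed output used during gradient-based search differs from the output of the single argmax operation kept at the end, and the gap grows when architecture weights are close to uniform.

import numpy as np

def mixed_op(alpha, op_outputs):
    # Continuous relaxation used during the search: softmax-weighted sum
    # of all candidate operation outputs.
    w = np.exp(alpha - alpha.max())
    w /= w.sum()
    return (w[:, None] * op_outputs).sum(axis=0)

def discretized_op(alpha, op_outputs):
    # The final architecture keeps only the argmax operation -- the step
    # where gradient-based NAS incurs the discretization error.
    return op_outputs[int(np.argmax(alpha))]

alpha = np.array([0.40, 0.35, 0.25])  # toy architecture weights, near-uniform
ops = np.random.randn(3, 8)           # toy outputs of 3 candidate operations
gap = np.linalg.norm(mixed_op(alpha, ops) - discretized_op(alpha, ops))
print(f"discretization discrepancy: {gap:.3f}")  # nonzero when alphas are close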
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
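
A hedged sketch of the block-wise rating idea from the DNA entry: a candidate block is scored by how well it reproduces a teacher block's features on shared inputs. All names here (rate_block, the feature loader) are illustrative; the papers' actual pipeline first trains candidates with block-wise distillation before ranking them.

import torch
import torch.nn.functional as F

def rate_block(candidate, teacher_block, feature_loader):
    # Illustrative block-wise rating: score a candidate block by how closely
    # it reproduces the teacher block's output features on shared inputs
    # (lower distillation loss = better rank). Names are hypothetical.
    candidate.eval()
    teacher_block.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for x in feature_loader:  # intermediate features feeding this block
            total += F.mse_loss(candidate(x), teacher_block(x),
                                reduction="sum").item()
            count += x.numel()
    return total / count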
- DeepTheft: Stealing DNN Model Architectures through Power Side Channel [42.380259435613354]
Deep Neural Network (DNN) models are often deployed in resource-sharing clouds as Machine Learning as a Service (MLaaS) to provide inference services.
To steal model architectures, which are valuable intellectual property, a class of attacks has been proposed that exploits different side-channel leakages.
We propose a new end-to-end attack, DeepTheft, to accurately recover complex DNN model architectures on general processors via the RAPL-based power side channel.
arXiv Detail & Related papers (2023-09-21T08:58:14Z)
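
For context on the side channel DeepTheft uses: Linux exposes Intel RAPL energy counters through the powercap sysfs interface, and a coarse power trace can be built from energy deltas, as sketched below. This only illustrates the measurement primitive (reading the counter typically requires root on recent kernels), not DeepTheft's attack pipeline.

import time

# Package-0 energy counter exposed by the Linux powercap interface; the
# counter wraps around, and unprivileged reads are restricted on newer kernels.
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL_PATH) as f:
        return int(f.read())

def sample_power_trace(duration_s=1.0, period_s=0.001):
    # Coarse power trace from RAPL energy deltas; attacks in this family
    # correlate such traces with the layer sequence a victim DNN executes.
    trace, last = [], read_energy_uj()
    deadline = time.time() + duration_s
    while time.time() < deadline:
        time.sleep(period_s)
        current = read_energy_uj()
        trace.append((current - last) / period_s)  # microjoules/s == microwatts
        last = current
    return trace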
- GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find the superior network for the target data.
In this paper, we investigate a new neural architecture search measure for excavating architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z)
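
One widely used generalization proxy is the flatness of the loss landscape; the sketch below scores a network by its average loss increase under small random weight perturbations. This is an illustrative proxy chosen for the example, not necessarily the exact measure proposed in GeNAS.

import copy
import torch

def flatness_score(model, loss_fn, batch, sigma=0.01, samples=5):
    # Illustrative generalization proxy (not necessarily GeNAS's measure):
    # average loss increase under small random weight perturbations;
    # flatter minima show a smaller increase and tend to generalize better.
    x, y = batch
    with torch.no_grad():
        base = loss_fn(model(x), y).item()
        increases = []
        for _ in range(samples):
            noisy = copy.deepcopy(model)
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))
            increases.append(loss_fn(noisy(x), y).item() - base)
    return sum(increases) / len(increases)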
- EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles [0.1529342790344802]
Deep Neural Networks (DNNs) have become ubiquitous due to their performance on prediction and classification problems.
They face a variety of threats as their usage spreads.
Model extraction attacks, which steal DNNs, endanger intellectual property, data privacy, and security.
We propose two techniques catering to various threat models.
arXiv Detail & Related papers (2023-04-06T21:40:09Z)
- NASiam: Efficient Representation Learning using Neural Architecture Search for Siamese Networks [76.8112416450677]
Siamese networks are one of the most trending methods to achieve self-supervised visual representation learning (SSL).
NASiam is a novel approach that, for the first time, uses differentiable NAS to improve the multilayer perceptron projector and predictor (encoder/predictor pair).
NASiam reaches competitive performance on both small-scale (i.e., CIFAR-10/CIFAR-100) and large-scale (i.e., ImageNet) image classification datasets while costing only a few GPU hours.
arXiv Detail & Related papers (2023-01-31T19:48:37Z)
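
A minimal sketch of what a differentiable search over a projector/predictor MLP can look like: each layer is a DARTS-style mixed op whose softmax-weighted architecture parameters blend candidate blocks during the search. The candidate set below is an assumption for illustration, not NASiam's actual search space.

import torch
import torch.nn as nn

class MixedMLPLayer(nn.Module):
    # DARTS-style mixed op for one projector/predictor layer: architecture
    # weights (alpha) blend candidate blocks during the search. The candidate
    # set here is illustrative, not NASiam's actual search space.
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU()),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
            nn.Identity(),  # lets the search effectively shorten the MLP
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))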
- NeuroUnlock: Unlocking the Architecture of Obfuscated Deep Neural Networks [12.264879142584617]
We present NeuroUnlock, a novel side-channel-based architecture stealing (SCAS) attack against obfuscated deep neural networks (DNNs).
Our NeuroUnlock employs a sequence-to-sequence model that learns the obfuscation procedure and automatically reverts it.
We also propose a novel methodology for DNN obfuscation, ReDLock, which eradicates the deterministic nature of the obfuscation.
arXiv Detail & Related papers (2022-06-01T11:10:00Z)
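
The sequence-to-sequence idea behind NeuroUnlock can be sketched as follows: architectures are treated as token sequences (one token per layer descriptor) and an encoder-decoder learns the obfuscated-to-original mapping. The tokenization, model size, and GRU backbone are assumptions for illustration, not NeuroUnlock's exact design.

import torch
import torch.nn as nn

class DeobfuscationSeq2Seq(nn.Module):
    # Sketch of the idea only: architectures as token sequences (one token per
    # layer descriptor), with an encoder-decoder learning obfuscated -> original.
    # The GRU backbone and tokenization are assumptions, not the paper's design.
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, obfuscated_tokens, target_tokens):
        _, state = self.encoder(self.emb(obfuscated_tokens))
        out, _ = self.decoder(self.emb(target_tokens), state)  # teacher forcing
        return self.head(out)  # logits over original-layer tokens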
- NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With a carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z)
- D-DARTS: Distributed Differentiable Architecture Search [75.12821786565318]
Differentiable ARchiTecture Search (DARTS) is one of the most trending Neural Architecture Search (NAS) methods.
We propose D-DARTS, a novel solution that addresses this problem by nesting several neural networks at the cell level.
arXiv Detail & Related papers (2021-08-20T09:07:01Z)
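
Nesting networks at the cell level contrasts with vanilla DARTS, where one set of architecture weights is shared by all cells. The sketch below shows the core data structure such a distributed variant implies, per-cell architecture parameters, with shapes that are illustrative rather than taken from the paper.

import torch
import torch.nn as nn

class PerCellAlphas(nn.Module):
    # Vanilla DARTS shares one set of architecture weights across all cells;
    # a distributed variant gives every cell its own weights so cells can
    # specialize. Shapes are illustrative, not taken from the paper.
    def __init__(self, n_cells, n_edges, n_ops):
        super().__init__()
        self.alphas = nn.ParameterList(
            nn.Parameter(1e-3 * torch.randn(n_edges, n_ops))
            for _ in range(n_cells)
        )

    def edge_weights(self, cell_idx):
        # Per-cell softmax over candidate operations on each edge.
        return torch.softmax(self.alphas[cell_idx], dim=-1)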
- BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search [100.28980854978768]
We present Block-wisely Self-supervised Neural Architecture Search (BossNAS).
We factorize the search space into blocks and utilize a novel self-supervised training scheme, named ensemble bootstrapping, to train each block separately.
We also present HyTra search space, a fabric-like hybrid CNN-transformer search space with searchable down-sampling positions.
arXiv Detail & Related papers (2021-03-23T10:05:58Z)
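
As a rough illustration of the ensemble bootstrapping scheme named above, the loss sketch below pulls a sampled candidate's block output toward the averaged, detached outputs of several teacher-sampled candidates; the EMA-teacher and projection details of the actual scheme are simplified away, so treat this as a hedged approximation.

import torch
import torch.nn.functional as F

def ensemble_bootstrap_loss(student_out, teacher_outs):
    # Rough illustration: pull a sampled candidate's block output toward the
    # averaged, detached outputs of several teacher-sampled candidates.
    # EMA-teacher and projection details of the real scheme are omitted.
    target = torch.stack(teacher_outs).mean(dim=0).detach()
    return F.mse_loss(F.normalize(student_out, dim=-1),
                      F.normalize(target, dim=-1))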
- Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method requires fewer samples to find top-performing architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves state-of-the-art ImageNet performance in the NASNet search space.
arXiv Detail & Related papers (2021-02-21T01:58:43Z)
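
The progressive weak-predictor scheme can be sketched as a simple loop, shown below under assumed interfaces (evaluate returns an architecture's accuracy; fit_predictor returns a scoring function trained on the pairs seen so far); each round narrows sampling to the predictor's top fraction of the remaining pool.

import random

def weak_predictor_search(space, evaluate, fit_predictor,
                          rounds=4, per_round=100, keep=0.25, seed=0):
    # Progressive weak predictors under assumed interfaces: `evaluate` returns
    # an architecture's accuracy; `fit_predictor(history)` returns a scoring
    # function trained on all (arch, accuracy) pairs seen so far.
    rng = random.Random(seed)
    history, pool = [], list(space)
    for _ in range(rounds):
        batch = rng.sample(pool, min(per_round, len(pool)))
        history += [(arch, evaluate(arch)) for arch in batch]
        score = fit_predictor(history)               # a weak proxy is enough here
        pool.sort(key=score, reverse=True)           # rank remaining candidates
        pool = pool[: max(1, int(len(pool) * keep))] # move toward the top sub-space
    return max(history, key=lambda pair: pair[1])[0]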