How to 0wn NAS in Your Spare Time
- URL: http://arxiv.org/abs/2002.06776v2
- Date: Thu, 25 Feb 2021 23:04:04 GMT
- Title: How to 0wn NAS in Your Spare Time
- Authors: Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, Tudor Dumitraș
- Abstract summary: We design an algorithm that reconstructs the key components of a novel deep learning system by exploiting a small amount of information leakage from a cache side-channel attack.
We demonstrate experimentally that we can reconstruct MalConv, a novel data pre-processing pipeline for malware detection, and ProxylessNAS-CPU, a novel network architecture for ImageNet classification.
- Score: 11.997555708723523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: New data processing pipelines and novel network architectures increasingly
drive the success of deep learning. In consequence, the industry considers
top-performing architectures as intellectual property and devotes considerable
computational resources to discovering such architectures through neural
architecture search (NAS). This provides an incentive for adversaries to steal
these novel architectures; when the architectures are used in the cloud to
provide Machine Learning as a Service (MLaaS), adversaries also have an
opportunity to reconstruct the
architectures by exploiting a range of hardware side channels. However, it is
challenging to reconstruct novel architectures and pipelines without knowing
the computational graph (e.g., the layers, branches or skip connections), the
architectural parameters (e.g., the number of filters in a convolutional layer)
or the specific pre-processing steps (e.g., embeddings). In this paper, we
design an algorithm that reconstructs the key components of a novel deep
learning system by exploiting a small amount of information leakage from a
cache side-channel attack, Flush+Reload. We use Flush+Reload to infer the trace
of computations and the timing for each computation. Our algorithm then
generates candidate computational graphs from the trace and eliminates
incompatible candidates through a parameter estimation process. We implement
our algorithm in PyTorch and TensorFlow. We demonstrate experimentally that we
can reconstruct MalConv, a novel data pre-processing pipeline for malware
detection, and ProxylessNAS-CPU, a novel network architecture for ImageNet
classification optimized to run on CPUs, without knowing the architecture
family. In both cases, we achieve 0% error. These results suggest hardware side
channels are a practical attack vector against MLaaS, and more efforts should
be devoted to understanding their impact on the security of deep learning
systems.
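
As a brief illustration of the elimination step described above, the following
is a minimal Python/PyTorch sketch: for each architectural hypothesis, it times
the candidate operation locally and discards hypotheses whose latency is
inconsistent with the timing recovered from the Flush+Reload trace. The
function names, the candidate grid, the input size, and the 15% tolerance are
illustrative assumptions, not the authors' implementation.

    # Sketch of timing-based parameter elimination (hypothetical names/values).
    import itertools
    import time

    import torch
    import torch.nn as nn

    def profile_conv(in_ch, out_ch, kernel, size=56, repeats=10):
        """Time one candidate convolution configuration on the local machine."""
        layer = nn.Conv2d(in_ch, out_ch, kernel, padding=kernel // 2)
        x = torch.randn(1, in_ch, size, size)
        with torch.no_grad():
            layer(x)  # warm-up so one-time allocations do not skew the timing
            start = time.perf_counter()
            for _ in range(repeats):
                layer(x)
        return (time.perf_counter() - start) / repeats

    def estimate_conv_params(observed_latency, in_ch, tolerance=0.15):
        """Keep (out_channels, kernel_size) candidates whose measured latency
        is within `tolerance` of the latency observed via the side channel."""
        survivors = []
        for out_ch, kernel in itertools.product([16, 32, 64, 128, 256],
                                                 [1, 3, 5, 7]):
            t = profile_conv(in_ch, out_ch, kernel)
            if abs(t - observed_latency) / observed_latency < tolerance:
                survivors.append((out_ch, kernel))
        return survivors

    # Hypothetical trace entry: a conv op on a 64-channel input took ~2.1 ms.
    print(estimate_conv_params(2.1e-3, in_ch=64))

The paper applies this elimination principle across whole candidate
computational graphs generated from the trace; the sketch above covers only a
single convolutional layer.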
Related papers
- Hardware Aware Evolutionary Neural Architecture Search using
Representation Similarity Metric [12.52012450501367]
Hardware-aware Neural Architecture Search (HW-NAS) is a technique used to automatically design the architecture of a neural network for a specific task and target hardware.
Evaluating the performance of candidate architectures is a key challenge in HW-NAS, as it requires significant computational resources.
We propose an efficient hardware-aware evolution-based NAS approach called HW-EvRSNAS.
arXiv Detail & Related papers (2023-11-07T11:58:40Z)
- GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find a superior network for the target data.
In this paper, we investigate a new neural architecture search measure for excavating architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z)
- NASiam: Efficient Representation Learning using Neural Architecture Search for Siamese Networks [76.8112416450677]
Siamese networks are one of the most popular methods for self-supervised visual representation learning (SSL).
NASiam is a novel approach that, for the first time, uses differentiable NAS to improve the multilayer perceptron projector and predictor (encoder/predictor pair).
NASiam reaches competitive performance on both small-scale (i.e., CIFAR-10/CIFAR-100) and large-scale (i.e., ImageNet) image classification datasets while costing only a few GPU hours.
arXiv Detail & Related papers (2023-01-31T19:48:37Z)
- FlowNAS: Neural Architecture Search for Optical Flow Estimation [65.44079917247369]
We propose a neural architecture search method named FlowNAS to automatically find a better encoder architecture for the flow estimation task.
Experimental results show that the discovered architecture with the weights inherited from the super-network achieves 4.67% F1-all error on KITTI.
arXiv Detail & Related papers (2022-07-04T09:05:25Z)
- Auto-tuning of Deep Neural Networks by Conflicting Layer Removal [0.0]
We introduce a novel methodology to identify layers that decrease the test accuracy of trained models.
Conflicting layers are detected as early as the beginning of training.
We show that around 60% of the layers of trained residual networks can be completely removed from the architecture.
arXiv Detail & Related papers (2021-03-07T11:51:55Z)
- Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method costs fewer samples to find the top-performance architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves the state-of-the-art ImageNet performance on the NASNet search space.
arXiv Detail & Related papers (2021-02-21T01:58:43Z)
- Hierarchical Neural Architecture Search for Deep Stereo Matching [131.94481111956853]
We propose the first end-to-end hierarchical NAS framework for deep stereo matching.
Our framework incorporates task-specific human knowledge into the neural architecture search framework.
It ranks first in accuracy on the KITTI stereo 2012, KITTI stereo 2015, and Middlebury benchmarks, as well as on the SceneFlow dataset.
arXiv Detail & Related papers (2020-10-26T11:57:37Z)
- Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z)
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves sensitive user data while providing cloud-based machine learning and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.