DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs
- URL: http://arxiv.org/abs/2007.15248v1
- Date: Thu, 30 Jul 2020 06:01:41 GMT
- Title: DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs
- Authors: Nandan Kumar Jha, Sparsh Mittal, Binod Kumar, and Govardhan Mattela
- Abstract summary: "DeepPeep" is a two-stage attack methodology to reverse-engineer the architecture of building blocks in compact DNNs.
"Secure MobileNet-V1" provides a significant reduction in inference latency and improvement in predictive performance.
- Score: 2.3651168422805027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The remarkable predictive performance of deep neural networks (DNNs) has led
to their adoption in service domains of unprecedented scale and scope. However,
the widespread adoption and growing commercialization of DNNs have underscored
the importance of intellectual property (IP) protection. Devising techniques to
ensure IP protection has become necessary due to the increasing trend of
outsourcing DNN computations to untrusted accelerators in cloud-based
services. The design methodologies and hyper-parameters of DNNs are crucial
information, and leaking them may cause massive economic loss to the
organization. Furthermore, knowledge of a DNN's architecture can increase the
success probability of an adversarial attack, in which an adversary perturbs the
inputs to alter the prediction.
In this work, we devise a two-stage attack methodology "DeepPeep" which
exploits the distinctive characteristics of design methodologies to
reverse-engineer the architecture of building blocks in compact DNNs. We show
the efficacy of "DeepPeep" on P100 and P4000 GPUs. Additionally, we propose
intelligent design maneuvering strategies for thwarting IP theft through the
DeepPeep attack and propose "Secure MobileNet-V1". Interestingly, compared to
vanilla MobileNet-V1, secure MobileNet-V1 provides a significant reduction in
inference latency ($\approx$60%) and improvement in predictive performance
($\approx$2%) with very low memory and computation overheads.
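The abstract does not spell out which signals DeepPeep's profiling stage uses, but the core idea of fingerprinting building blocks from their execution behavior can be illustrated with a small experiment. The sketch below is a hypothetical stand-in, not the paper's method: it assumes the attacker can time forward passes of candidate blocks (e.g., MobileNet-style depthwise-separable convolutions versus standard convolutions) and would match those fingerprints against timings observed from the black-box victim model.

```python
# Minimal sketch of a side-channel architecture fingerprint in the spirit of
# DeepPeep's first (profiling) stage. ASSUMPTION: per-block inference latency
# is the distinguishing signal; the paper's actual feature set is not given
# in the abstract.
import time

import torch
import torch.nn as nn


def depthwise_separable(cin, cout):
    # MobileNet-V1-style block: depthwise 3x3 conv followed by pointwise 1x1.
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, padding=1, groups=cin),
        nn.Conv2d(cin, cout, 1),
        nn.ReLU(inplace=True),
    )


def standard_conv(cin, cout):
    # Plain 3x3 convolution block for comparison.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))


@torch.no_grad()
def median_latency_ms(block, x, warmup=10, runs=50):
    # Median over repeated forward passes damps scheduler and cache noise.
    for _ in range(warmup):
        block(x)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        block(x)
        samples.append((time.perf_counter() - t0) * 1e3)
    return sorted(samples)[len(samples) // 2]


if __name__ == "__main__":
    x = torch.randn(1, 32, 56, 56)
    candidates = {
        "depthwise_separable": depthwise_separable(32, 64),
        "standard_conv": standard_conv(32, 64),
    }
    # Build a latency fingerprint per candidate block; a real attack would
    # compare these against timings observed from the victim model.
    for name, block in candidates.items():
        block.eval()
        print(f"{name}: {median_latency_ms(block, x):.3f} ms")
```

On a GPU such as the P100 or P4000 used in the paper, each timed pass would additionally need a torch.cuda.synchronize() call so that asynchronous kernel launches do not distort the measurements.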
Related papers
- Enhanced Convolution Neural Network with Optimized Pooling and Hyperparameter Tuning for Network Intrusion Detection [0.0]
We propose an Enhanced Convolutional Neural Network (EnCNN) for Network Intrusion Detection Systems (NIDS).
We compare EnCNN with various machine learning algorithms, including Logistic Regression, Decision Trees, Support Vector Machines (SVM), and ensemble methods like Random Forest, AdaBoost, and Voting Ensemble.
The results show that EnCNN significantly improves detection accuracy, with a notable 10% increase over state-of-the-art approaches.
arXiv Detail & Related papers (2024-09-27T11:20:20Z)
- Older and Wiser: The Marriage of Device Aging and Intellectual Property Protection of Deep Neural Networks [10.686965180113118]
Deep neural networks (DNNs) are often kept secret due to high training costs and privacy concerns.
We propose a novel hardware-software co-design approach for DNN intellectual property (IP) protection.
Hardware-wise, we employ random aging to produce authorized chips.
Software-wise, we propose a novel DOFT, which allows pre-trained DNNs to maintain their original accuracy on authorized chips.
arXiv Detail & Related papers (2024-06-21T04:49:17Z)
- Watermarking Neuromorphic Brains: Intellectual Property Protection in Spiking Neural Networks [3.4673556247932225]
Spiking neural networks (SNNs) are gaining traction in the deployment of neuromorphic computing solutions.
Without adequate safeguards, proprietary SNN architectures are at risk of theft, replication, or misuse.
We pioneer an investigation into adapting two prominent watermarking approaches, namely, fingerprint-based and backdoor-based mechanisms.
arXiv Detail & Related papers (2024-05-07T06:42:30Z)
- DNNShield: Embedding Identifiers for Deep Neural Network Ownership Verification [46.47446944218544]
This paper introduces DNNShield, a novel approach for the protection of Deep Neural Networks (DNNs).
DNNShield embeds unique identifiers within the model architecture using specialized protection layers.
We validate the effectiveness and efficiency of DNNShield through extensive evaluations across three datasets and four model architectures.
arXiv Detail & Related papers (2024-03-11T10:27:36Z)
- Deep Intellectual Property Protection: A Survey [70.98782484559408]
Deep Neural Networks (DNNs) have made revolutionary progress in recent years, and are widely used in various fields.
The goal of this paper is to provide a comprehensive survey of two mainstream DNN IP protection methods: deep watermarking and deep fingerprinting.
arXiv Detail & Related papers (2023-04-28T03:34:43Z)
- RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference by rethinking the distribution strategy.
We formulate this methodology as an optimization problem in which we establish a trade-off between the latency of co-inference and the privacy level of the data.
arXiv Detail & Related papers (2022-08-27T14:50:00Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already achieve satisfactory accuracy.
With minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips [29.34622626909906]
We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run-time within a few minutes; a toy illustration of the underlying bit-flip primitive appears after this list.
Our work highlights the need to incorporate security mechanisms into future deep learning systems.
arXiv Detail & Related papers (2020-03-30T18:51:59Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters in turn; a minimal sketch of this alternating scheme also appears after this list.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
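The DeepHammer entry above names targeted bit flips as its attack primitive; the rowhammer mechanics are hardware-level, but the effect of a single flip on a quantized weight is easy to emulate in software. The snippet below is a toy illustration of that arithmetic, not the paper's attack.

```python
# Toy illustration of the bit-flip primitive behind DeepHammer-style attacks:
# flipping one bit of an int8 quantized weight can change its value (and its
# dequantized real value) drastically. The real attack induces such flips in
# DRAM via rowhammer; this only emulates the effect.

def flip_bit_int8(w: int, bit: int) -> int:
    # View the signed weight as an unsigned byte, flip one bit, convert back
    # to two's-complement signed range [-128, 127].
    u = (w & 0xFF) ^ (1 << bit)
    return u - 256 if u >= 128 else u

q = 23          # a quantized (int8) weight
scale = 0.02    # per-tensor quantization scale (illustrative value)
for bit in (0, 6, 7):
    q_flipped = flip_bit_int8(q, bit)
    print(f"bit {bit}: {q} -> {q_flipped} "
          f"(real {q * scale:+.2f} -> {q_flipped * scale:+.2f})")
```

Flipping the sign bit (bit 7) turns 23 into -105, i.e., a dequantized +0.46 becomes -2.10, which is why a short chain of well-chosen flips can deplete a model's accuracy.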
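Similarly, the Learn2Perturb entry names an EM-inspired scheme that back-propagates into network weights and noise-injection parameters in turn. The following is a minimal sketch of that alternating structure only; the module, the noise model, and the noise objective are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of EM-style alternating updates: one optimizer for the network
# weights, one for learnable noise-injection parameters, stepped in turn.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyLinear(nn.Module):
    # Linear layer with a learnable Gaussian noise scale on its output;
    # an illustrative stand-in for the paper's perturbation-injection layers.
    def __init__(self, din, dout):
        super().__init__()
        self.linear = nn.Linear(din, dout)
        self.noise_scale = nn.Parameter(torch.full((dout,), 0.1))

    def forward(self, x):
        out = self.linear(x)
        if self.training:
            out = out + self.noise_scale * torch.randn_like(out)
        return out


model = nn.Sequential(NoisyLinear(20, 64), nn.ReLU(), nn.Linear(64, 2))
noise_params = [p for n, p in model.named_parameters() if "noise_scale" in n]
net_params = [p for n, p in model.named_parameters() if "noise_scale" not in n]
opt_net = torch.optim.SGD(net_params, lr=1e-2)
opt_noise = torch.optim.SGD(noise_params, lr=1e-3)

x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
for step in range(100):
    # Step 1: update network weights with the noise parameters held fixed.
    opt_net.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt_net.step()
    # Step 2: update noise parameters with the network weights held fixed.
    # ASSUMPTION: encourage larger noise while keeping the task loss low;
    # the paper's exact noise objective is not given in the summary.
    opt_noise.zero_grad()
    loss_noise = F.cross_entropy(model(x), y) - 0.1 * noise_params[0].mean()
    loss_noise.backward()
    opt_noise.step()
```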
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.