TBNet: A Neural Architectural Defense Framework Facilitating DNN Model Protection in Trusted Execution Environments
- URL: http://arxiv.org/abs/2405.03974v1
- Date: Tue, 7 May 2024 03:08:30 GMT
- Title: TBNet: A Neural Architectural Defense Framework Facilitating DNN Model Protection in Trusted Execution Environments
- Authors: Ziyu Liu, Tong Zhou, Yukui Luo, Xiaolin Xu
- Abstract summary: This paper presents TBNet, a TEE-based defense framework that protects DNN models from a neural architectural perspective.
Experimental results on a Raspberry Pi across diverse DNN model architectures and datasets demonstrate that TBNet achieves efficient model protection at a low cost.
- Score: 14.074570784425225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trusted Execution Environments (TEEs) have become a promising solution to secure DNN models on edge devices. However, the existing solutions either provide inadequate protection or introduce large performance overhead. Taking both security and performance into consideration, this paper presents TBNet, a TEE-based defense framework that protects DNN models from a neural architectural perspective. Specifically, TBNet generates a novel Two-Branch substitution model that respectively exploits (1) the computational resources in the untrusted Rich Execution Environment (REE) for latency reduction and (2) the physically isolated TEE for model protection. Experimental results on a Raspberry Pi across diverse DNN model architectures and datasets demonstrate that TBNet achieves efficient model protection at a low cost.
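The core mechanism is an architectural split: a heavy branch runs on untrusted hardware for speed, while a small secret branch stays inside the TEE, so the exposed part alone is not a working model. Below is a minimal PyTorch sketch of that general two-branch idea; the layer sizes, the additive merge, and the choice of what stays in the TEE are illustrative assumptions, not TBNet's actual construction.

```python
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    """Illustrative two-branch split: a public branch meant to run in the
    untrusted REE and a small secret branch meant to run inside the TEE.
    The final output depends on both, so the REE branch alone is useless."""
    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        # Heavy branch: offloaded to the REE for speed (assumed non-sensitive).
        self.ree_branch = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Light branch: kept inside the TEE; without it the model is incomplete.
        self.tee_branch = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, classes)  # also assumed TEE-resident

    def forward(self, x):
        # Merge the untrusted and trusted branches before the classifier head.
        return self.head(self.ree_branch(x) + self.tee_branch(x))

logits = TwoBranchNet()(torch.randn(1, 784))  # smoke test
```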
Related papers
- Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
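As a concrete illustration of the remapping idea: composing a network with an affine contraction of slope s < 1 scales any Lipschitz bound of the composition by s. The sketch below is a hypothetical minimal version (the paper's data-driven remap is more sophisticated), and the network would need to be trained on remapped inputs.

```python
import torch
import torch.nn as nn

def remap(x, lo=0.0, hi=0.5):
    """Hypothetical input remap: squeeze [0, 1]-valued inputs into [lo, hi].
    The map x -> lo + (hi - lo) * x has slope (hi - lo); choosing hi - lo < 1
    shrinks the Lipschitz bound of the composed network by that factor."""
    return lo + (hi - lo) * x

net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

x = torch.rand(4, 784)   # inputs in [0, 1]
y = net(remap(x))        # composition has half the Lipschitz bound of net
```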
arXiv Detail & Related papers (2024-06-28T03:10:36Z)
- Proteus: Preserving Model Confidentiality during Graph Optimizations [5.44122875495684]
This paper presents Proteus, a novel mechanism that enables model optimization by an independent party.
Proteus obfuscates the protected model by partitioning its computational graph into subgraphs.
To our knowledge, Proteus is the first work that tackles the challenge of model confidentiality during performance optimization.
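A toy sketch of the partitioning step, under the simplifying assumption of a purely sequential model (Proteus operates on general computational graphs and adds further obfuscation on top):

```python
import random
import torch.nn as nn

def partition_sequential(model: nn.Sequential, n_parts: int, seed=0):
    """Toy illustration of graph partitioning: cut a sequential model into
    contiguous subgraphs at random boundaries. Each subgraph could then be
    handed to an optimizer in isolation, hiding the end-to-end architecture."""
    layers = list(model)
    rng = random.Random(seed)
    cuts = sorted(rng.sample(range(1, len(layers)), n_parts - 1))
    bounds = [0] + cuts + [len(layers)]
    return [nn.Sequential(*layers[a:b]) for a, b in zip(bounds, bounds[1:])]

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64),
                      nn.ReLU(), nn.Linear(64, 10))
subgraphs = partition_sequential(model, n_parts=3)
```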
arXiv Detail & Related papers (2024-04-18T21:23:25Z)
- DNNShield: Embedding Identifiers for Deep Neural Network Ownership Verification [46.47446944218544]
This paper introduces DNNShield, a novel approach for protecting Deep Neural Networks (DNNs).
DNNShield embeds unique identifiers within the model architecture using specialized protection layers.
We validate the effectiveness and efficiency of DNNShield through extensive evaluations across three datasets and four model architectures.
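The following is a hypothetical illustration of an identifier-embedding protection layer, not DNNShield's actual design: an owner-derived bit string is stored in a layer's parameters while the layer computes the identity function, so the identifier rides along inside the architecture without affecting accuracy.

```python
import hashlib
import torch
import torch.nn as nn

class ProtectionLayer(nn.Module):
    """Hypothetical identifier-embedding layer: owner-specific bits live in
    the parameters, but the layer is functionally a no-op."""
    def __init__(self, dim, owner_key: str):
        super().__init__()
        digest = hashlib.sha256(owner_key.encode()).digest()
        bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
        a = torch.tensor([float(bits[i % len(bits)]) for i in range(dim)])
        self.a = nn.Parameter(a)  # the identifier is stored here

    def forward(self, x):
        # a*x + (1-a)*x == x for any a: output is unchanged by the identifier.
        return self.a * x + (1.0 - self.a) * x

layer = ProtectionLayer(16, owner_key="alice")
x = torch.randn(2, 16)
assert torch.allclose(layer(x), x)
```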
arXiv Detail & Related papers (2024-03-11T10:27:36Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
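One basic primitive behind many such verification tools is interval bound propagation (IBP), which pushes elementwise input bounds through the network to obtain sound output bounds. A minimal single-layer sketch (generic IBP, not any specific tool's implementation):

```python
import numpy as np

def ibp_linear_relu(lo, hi, W, b):
    """One IBP step: given elementwise input bounds [lo, hi], compute sound
    output bounds for ReLU(W @ x + b)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b   # smallest pre-activation
    out_hi = W_pos @ hi + W_neg @ lo + b   # largest pre-activation
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 4)), rng.normal(size=8)
x = rng.normal(size=4)
eps = 0.1                                  # L-infinity perturbation budget
lo, hi = ibp_linear_relu(x - eps, x + eps, W, b)
assert np.all(lo <= hi)
```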
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- MirrorNet: A TEE-Friendly Framework for Secure On-device DNN Inference [14.08010398777227]
Deep neural network (DNN) models have become prevalent in edge devices for real-time inference.
Existing defense approaches fail to fully safeguard model confidentiality or result in significant latency issues.
This paper presents MirrorNet, which generates a TEE-friendly implementation for any given DNN model to protect model confidentiality.
In evaluation, MirrorNet achieves an 18.6% accuracy gap between authenticated and illegal use while introducing only 0.99% hardware overhead.
arXiv Detail & Related papers (2023-11-16T01:21:19Z)
- No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML [28.392497220631032]
We show that existing TEE-Shielded DNN Partition (TSDP) solutions are vulnerable to privacy-stealing attacks and are not as safe as commonly believed.
We present TEESlice, a novel TSDP method that defends against model stealing (MS) and membership inference attacks (MIA) during DNN inference.
arXiv Detail & Related papers (2023-10-11T02:54:52Z)
- XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars [2.222917681321253]
This work proposes a two-phase algorithm-hardware co-optimization approach called XploreNAS.
It searches for hardware-efficient & adversarially robust neural architectures for non-ideal crossbar platforms.
Experiments on crossbars with benchmark datasets show up to an 8-16% improvement in the adversarial robustness of the searched subnets.
arXiv Detail & Related papers (2023-02-15T16:44:18Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models and statistical models against the roofline model and a refined roofline model.
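For context, the roofline model referenced above estimates a layer's execution time as the slower of its compute time and its memory-traffic time; ANNETTE's contribution is refining such analytical estimates with statistical models fitted to benchmarks. A minimal sketch with made-up hardware numbers:

```python
def roofline_layer_time(flops, bytes_moved, peak_flops=1e12, peak_bw=25e9):
    """Classic roofline estimate: a layer is bounded by whichever is slower,
    compute or memory traffic. The peak FLOP/s and bandwidth values here are
    illustrative placeholders, not measured device numbers."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Toy network description: (FLOPs, bytes moved) per layer.
layers = [(2 * 784 * 256, (784 * 256 + 256) * 4),
          (2 * 256 * 10, (256 * 10 + 10) * 4)]
total = sum(roofline_layer_time(f, m) for f, m in layers)
print(f"estimated execution time: {total * 1e6:.2f} us")
```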
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- ShadowNet: A Secure and Efficient On-device Model Inference System for Convolutional Neural Networks [33.98338559074557]
ShadowNet is a novel on-device model inference system.
It protects model privacy while securely outsourcing the heavy linear layers of the model to untrusted hardware accelerators.
Our evaluation shows that ShadowNet achieves strong security guarantees with reasonable performance.
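The underlying trick for outsourcing a linear layer safely is to hand the accelerator a secretly transformed weight matrix and undo the transform cheaply inside the TEE. The sketch below uses a random permutation plus per-column scaling as the mask; ShadowNet's actual scheme is more elaborate, so treat this as an assumption-laden illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret (TEE-held) transform: random permutation plus per-column scaling.
n_in, n_out = 64, 32
W = rng.normal(size=(n_in, n_out))      # plaintext weights (secret)
perm = rng.permutation(n_out)
scale = rng.uniform(0.5, 2.0, size=n_out)
W_masked = W[:, perm] * scale           # what the untrusted side sees

def untrusted_matmul(x, W_pub):
    # Heavy linear work outsourced to the REE / accelerator.
    return x @ W_pub

def tee_recover(y_masked):
    # Cheap recovery inside the TEE: undo the scaling, then the permutation.
    y = y_masked / scale
    out = np.empty_like(y)
    out[:, perm] = y
    return out

x = rng.normal(size=(4, n_in))
assert np.allclose(tee_recover(untrusted_matmul(x, W_masked)), x @ W)
```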
arXiv Detail & Related papers (2020-11-11T16:50:08Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in the design space.
With the higher accuracy enabled by fine-grained pruning patterns, the key insight is to use the compiler to regain and guarantee high hardware efficiency.
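To make the pattern idea concrete: every 3x3 kernel is forced to one of a few fixed sparsity shapes, chosen to preserve the most weight magnitude, which lets a compiler emit one tight code path per shape. The four patterns below are invented for illustration (PatDNN derives its pattern set from training):

```python
import numpy as np

# A tiny candidate set of 3x3 pruning patterns (1 = keep the weight).
# These four shapes are made up for illustration.
PATTERNS = np.array([
    [[0, 1, 0], [1, 1, 1], [0, 1, 0]],
    [[1, 1, 0], [1, 1, 0], [0, 1, 0]],
    [[0, 1, 1], [0, 1, 1], [0, 1, 0]],
    [[0, 1, 0], [1, 1, 1], [0, 0, 1]],
], dtype=np.float64)

def pattern_prune(kernel):
    """Assign a 3x3 kernel the pattern preserving the most weight magnitude."""
    scores = [(np.abs(kernel) * p).sum() for p in PATTERNS]
    return kernel * PATTERNS[int(np.argmax(scores))]

rng = np.random.default_rng(0)
conv_weights = rng.normal(size=(16, 3, 3, 3))   # (out_ch, in_ch, 3, 3)
pruned = np.stack([[pattern_prune(k) for k in ch] for ch in conv_weights])
```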
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.