Fingerprinting Multi-exit Deep Neural Network Models via Inference Time
- URL: http://arxiv.org/abs/2110.03175v1
- Date: Thu, 7 Oct 2021 04:04:01 GMT
- Title: Fingerprinting Multi-exit Deep Neural Network Models via Inference Time
- Authors: Tian Dong and Han Qiu and Tianwei Zhang and Jiwei Li and Hewu Li and
Jialiang Lu
- Abstract summary: We propose a novel approach to fingerprint multi-exit models via inference time rather than inference predictions.
Specifically, we design an effective method to generate a set of fingerprint samples to craft the inference process with a unique and robust inference time cost.
- Score: 18.12409619358209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transforming large deep neural network (DNN) models into multi-exit
architectures can overcome the overthinking issue and distribute a large DNN
model across resource-constrained scenarios (e.g., IoT frontend devices and
backend servers) for inference and transmission efficiency. Nevertheless, intellectual
property (IP) protection for the multi-exit models in the wild is still an
unsolved challenge. Previous efforts to verify DNN model ownership mainly rely
on querying the model with specific samples and checking the responses, e.g.,
DNN watermarking and fingerprinting. However, they are vulnerable to
adversarial settings such as adversarial training, and are not suitable for
IP verification of multi-exit DNN models. In this paper, we propose a novel
approach to fingerprint multi-exit models via inference time rather than
inference predictions. Specifically, we design an effective method to generate
a set of fingerprint samples to craft the inference process with a unique and
robust inference time cost as the evidence for model ownership. We conduct
extensive experiments to prove the uniqueness and robustness of our method on
three structures (ResNet-56, VGG-16, and MobileNet) and three datasets
(CIFAR-10, CIFAR-100, and Tiny-ImageNet) under comprehensive adversarial
settings.
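The core idea of the abstract, that crafted inputs can induce a distinctive pattern of inference times, can be illustrated with a minimal timing-fingerprint sketch. Everything below is a hypothetical illustration, not the authors' procedure: `fake_exit_model` is a stand-in whose cost grows with the exit depth encoded in its input, and the 0.9 correlation threshold and repeat count are arbitrary choices.

```python
import time

import numpy as np


def time_profile(model_fn, samples, repeats=30):
    """Median wall-clock inference time (seconds) per fingerprint sample."""
    profile = []
    for x in samples:
        runs = []
        for _ in range(repeats):
            start = time.perf_counter()
            model_fn(x)
            runs.append(time.perf_counter() - start)
        profile.append(np.median(runs))
    return np.array(profile)


def verify_ownership(suspect_fn, samples, reference_profile, threshold=0.9):
    """Claim ownership if the suspect's timing profile correlates strongly
    with the profile recorded on the protected multi-exit model."""
    suspect_profile = time_profile(suspect_fn, samples)
    corr = np.corrcoef(reference_profile, suspect_profile)[0, 1]
    return bool(corr >= threshold)


def fake_exit_model(x):
    """Stand-in model: compute cost grows with the exit depth encoded in x."""
    s = 0.0
    for i in range(int(x) * 20000):
        s += i * 0.5
    return s
```

A deployed verifier would run the real multi-exit model; the point is only that a per-sample timing vector, shaped by which exit each crafted sample takes, can serve as an ownership signature.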
Related papers
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - A Geometrical Approach to Evaluate the Adversarial Robustness of Deep
Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z) - NaturalFinger: Generating Natural Fingerprint with Generative
Adversarial Networks [4.536351805614037]
We propose NaturalFinger, which generates natural fingerprints with generative adversarial networks (GANs).
Our approach achieves a 0.91 ARUC value on the FingerBench dataset (154 models), exceeding the best baseline (MetaV) by over 17%.
arXiv Detail & Related papers (2023-05-29T03:17:03Z) - Improving Robustness Against Adversarial Attacks with Deeply Quantized
Neural Networks [0.5849513679510833]
A disadvantage of Deep Neural Networks (DNNs) is their vulnerability to adversarial attacks, as they can be fooled by adding slight perturbations to the inputs.
This paper reports the results of devising a tiny DNN model, robust to adversarial black-box and white-box attacks, trained with an automatic quantization-aware training framework.
arXiv Detail & Related papers (2023-04-25T13:56:35Z) - Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled
Membership Inference [17.881686153284267]
Deep neural networks (DNNs) have already achieved great success in a lot of application areas and brought profound changes to our society.
How to protect the intellectual property (IP) of DNNs against infringement is one of the most important yet very challenging topics.
This paper proposes a novel technique called pooled membership inference (PMI) to protect the IP of DNN models.
arXiv Detail & Related papers (2022-09-09T04:06:29Z) - RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency
IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference via re-thinking the distribution strategy.
We formulate this methodology, as an optimization, where we establish a trade-off between the latency of co-inference and the privacy-level of data.
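The trade-off described above can be posed, in toy form, as choosing the network split point that minimizes a weighted sum of co-inference latency and feature leakage. The per-layer latency and leakage numbers below are made up for illustration; the paper formulates a richer optimization, and this sketch only shows the shape of the trade-off.

```python
# Hypothetical numbers: splitting after layer i keeps the first i+1 layers
# on-device (latency grows with depth) but sends deeper, harder-to-invert
# features upstream (leakage shrinks with depth).
LATENCY_MS = [5.0, 8.0, 12.0, 18.0]
LEAKAGE = [0.9, 0.5, 0.3, 0.1]


def best_split(latency_ms, leakage, lam=10.0):
    """Return the split index minimizing latency + lam * leakage."""
    costs = [t + lam * p for t, p in zip(latency_ms, leakage)]
    return min(range(len(costs)), key=costs.__getitem__)
```

Raising the privacy weight `lam` pushes the chosen split deeper into the network, trading latency for privacy.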
arXiv Detail & Related papers (2022-08-27T14:50:00Z) - Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training, which injects adversarial examples into model training, has proven to be the most effective defense strategy.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Firearm Detection via Convolutional Neural Networks: Comparing a
Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z) - Dynamic Sparsity Neural Networks for Automatic Speech Recognition [44.352231175123215]
We present Dynamic Sparsity Neural Networks (DSNN) that, once trained, can instantly switch to any predefined sparsity configuration at run-time.
Our trained DSNN model, therefore, can greatly ease the training process and simplify deployment in diverse scenarios with resource constraints.
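The run-time switching that DSNN describes can be mimicked with magnitude masks over a single shared weight matrix. This sketch assumes magnitude pruning over densely stored weights, which is an illustrative choice, not the paper's implementation.

```python
import numpy as np


def sparsity_mask(w, sparsity):
    """Boolean mask keeping the largest-magnitude entries of w,
    zeroing roughly the given fraction of them."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return np.ones(w.shape, dtype=bool)
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    return np.abs(w) > thresh


def forward(x, w, sparsity=0.0):
    """One linear layer whose sparsity level is chosen at call time:
    the same trained weights serve every configuration."""
    return x @ (w * sparsity_mask(w, sparsity))
```

Because the mask is recomputed per call, the same stored weights can serve a dense configuration on a server and a sparse one on a constrained device, with no retraining in between.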
arXiv Detail & Related papers (2020-05-16T22:08:54Z) - Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by
Enabling Input-Adaptive Inference [119.19779637025444]
Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images).
This paper studies multi-exit networks associated with input-adaptive inference, showing their strong promise in achieving a "sweet point" in co-optimizing model accuracy, robustness and efficiency.
arXiv Detail & Related papers (2020-02-24T00:40:22Z)
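Input-adaptive multi-exit inference, the mechanism that both the Triple Wins paper and the fingerprinting paper above rely on, reduces to a confidence-thresholded early-exit loop. The toy exit heads below are placeholders for real classifier branches; the 0.9 threshold is an assumed value.

```python
import numpy as np


def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()


def multi_exit_predict(x, exit_heads, threshold=0.9):
    """Return (predicted class, exit index): stop at the first exit whose
    top-class confidence clears the threshold, else keep the last exit."""
    for i, head in enumerate(exit_heads):
        probs = softmax(head(x))
        if probs.max() >= threshold:
            return int(probs.argmax()), i
    return int(probs.argmax()), len(exit_heads) - 1
```

Because easy inputs leave early and hard ones run deeper, the exit taken, and hence the inference time, depends on the input; this input-dependent timing is exactly the signal the fingerprinting method above measures.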
This list is automatically generated from the titles and abstracts of the papers on this site.