Model Extraction Attacks against Recurrent Neural Networks
- URL: http://arxiv.org/abs/2002.00123v1
- Date: Sat, 1 Feb 2020 01:47:50 GMT
- Title: Model Extraction Attacks against Recurrent Neural Networks
- Authors: Tatsuya Takemura and Naoto Yanai and Toru Fujiwara
- Abstract summary: We study the threats of model extraction attacks against recurrent neural networks (RNNs).
We discuss whether a model with higher accuracy can be extracted with a simple RNN from a long short-term memory (LSTM).
We then show that a model with higher accuracy can be extracted efficiently, especially by configuring the loss function and using a more complex architecture.
- Score: 1.2891210250935146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model extraction attacks are a class of attack in which an adversary obtains, via query access to a target model, a new model whose performance is equivalent to that of the target, and does so efficiently, i.e., with less data and fewer computational resources than were used to build the target. Existing works have dealt only with simple deep neural networks (DNNs), e.g., networks of only three layers, as targets of model extraction attacks, and hence overlook the effectiveness of recurrent neural networks (RNNs) in dealing with time-series data. In this work, we shed light on the threats of model extraction attacks against RNNs. We discuss whether a model with higher accuracy can be extracted with a simple RNN from a long short-term memory (LSTM), which is a more complicated and powerful RNN. Specifically, we tackle the following problems. First, for a classification problem such as image recognition, we present the extraction of an RNN model without final outputs from an LSTM model by utilizing outputs halfway through the sequence. Next, for a regression problem such as weather forecasting, we present a new attack based on newly configuring the loss function. We conduct experiments on our model extraction attacks against an RNN and an LSTM trained on publicly available academic datasets. We then show that a model with higher accuracy can be extracted efficiently, especially by configuring the loss function and using a more complex architecture that differs from the target model's.
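Read as pseudocode, the attack pipeline is: label a query set through black-box access to the target, then fit a simpler surrogate to those labels, with the regression attack turning on how the loss is configured. Below is a minimal sketch of that loop; the class and function names are hypothetical, and plain MSE stands in for the paper's reconfigured loss.

```python
# Hypothetical sketch of the extraction setting in the abstract: fit a
# simple RNN surrogate to the answers of a black-box LSTM target.
# Names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleRNNSurrogate(nn.Module):
    """Deliberately simpler architecture than the LSTM target."""
    def __init__(self, n_in, n_hidden=64, n_out=1):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_out)

    def forward(self, x):                 # x: (batch, seq_len, n_in)
        out, _ = self.rnn(x)              # (batch, seq_len, n_hidden)
        return self.head(out[:, -1, :])   # prediction from the final step;
                                          # the classification attack instead
                                          # supervises outputs mid-sequence

def extract(target, queries, epochs=100, lr=1e-3):
    """Train a surrogate from the target's answers to `queries` only."""
    surrogate = SimpleRNNSurrogate(n_in=queries.shape[-1])
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    with torch.no_grad():                 # black-box query access only
        labels = target(queries)          # assumed to return (batch, 1)
    loss_fn = nn.MSELoss()                # regression case; the paper's gains
                                          # come from reconfiguring this loss
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(surrogate(queries), labels).backward()
        opt.step()
    return surrogate

# e.g., surrogate = extract(trained_lstm, torch.randn(256, 24, 8))
```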
Related papers
- A model for multi-attack classification to improve intrusion detection performance using deep learning approaches [0.0]
The objective here is to create a reliable intrusion detection mechanism to help identify malicious attacks.
A deep learning based solution framework is developed, consisting of three approaches.
The first approach is a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) trained with seven optimizers: Adamax, SGD, Adagrad, Adam, RMSprop, Nadam, and Adadelta (a rough sketch of such a sweep follows below).
The models self-learn the features and classify the attack classes in a multi-attack classification setting.
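As a rough illustration of the optimizer sweep this entry describes, the sketch below trains one fresh LSTM classifier per optimizer. The architecture, feature and class counts, and the single shared learning rate are placeholder assumptions, not the paper's setup.

```python
# Illustrative sweep over the seven optimizers named above, applied to a
# small LSTM intrusion-detection classifier. All sizes are assumptions.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features=41, n_hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_classes)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        _, (h, _) = self.lstm(x)          # h: (1, batch, n_hidden)
        return self.head(h[-1])           # logits over attack classes

# The seven optimizers from the entry, as they exist in torch.optim.
OPTIMIZERS = {
    "adamax": torch.optim.Adamax, "SGD": torch.optim.SGD,
    "adagrad": torch.optim.Adagrad, "adam": torch.optim.Adam,
    "RMSprop": torch.optim.RMSprop, "nadam": torch.optim.NAdam,
    "adadelta": torch.optim.Adadelta,
}

def sweep(x, y, epochs=10, lr=1e-3):
    """Train one fresh model per optimizer; per-optimizer learning-rate
    tuning is deliberately omitted in this sketch."""
    loss_fn, results = nn.CrossEntropyLoss(), {}
    for name, opt_cls in OPTIMIZERS.items():
        model = LSTMClassifier(n_features=x.shape[-1],
                               n_classes=int(y.max()) + 1)
        opt = opt_cls(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        results[name] = float(loss)      # final training loss per optimizer
    return results
```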
arXiv Detail & Related papers (2023-10-25T05:38:44Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- TSFool: Crafting Highly-Imperceptible Adversarial Time Series through Multi-Objective Attack [6.243453526766042]
We propose an efficient method called TSFool to craft highly-imperceptible adversarial time series for RNN-based TSC.
The core idea is a new global optimization objective known as "Camouflage Coefficient" that captures the imperceptibility of adversarial samples from the class distribution.
Experiments on 11 UCR and UEA datasets showcase that TSFool significantly outperforms six white-box and three black-box benchmark attacks.
arXiv Detail & Related papers (2022-09-14T03:02:22Z)
- Adversarial Robustness Assessment of NeuroEvolution Approaches [1.237556184089774]
We evaluate the robustness of models found by two NeuroEvolution approaches on the CIFAR-10 image classification task.
Our results show that when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero.
Some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness.
arXiv Detail & Related papers (2022-07-12T10:40:19Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories [26.067920958354]
One of the major threats to the privacy of deep neural networks (DNNs) is the model extraction attack.
Recent studies show that hardware-based side-channel attacks can reveal internal knowledge about DNN models (e.g., model architectures).
We propose DeepSteal, an advanced model extraction attack framework that effectively steals DNN weights with the aid of a memory side-channel attack.
arXiv Detail & Related papers (2021-11-08T16:55:45Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
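Algorithm unfolding of the kind this entry builds on is often introduced via LISTA, where each ISTA iteration x_{k+1} = soft(W1 y + W2 x_k, theta) becomes one network layer with learned parameters. The sketch below illustrates plain unrolling under that textbook formulation, not REST's robust variant.

```python
# Generic LISTA-style unrolling sketch: each layer mimics one ISTA
# iteration with learned weights and thresholds. This illustrates
# algorithm unfolding in general, not REST's robust formulation.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, m, n, n_layers=8):
        super().__init__()
        self.W1 = nn.ModuleList([nn.Linear(m, n, bias=False) for _ in range(n_layers)])
        self.W2 = nn.ModuleList([nn.Linear(n, n, bias=False) for _ in range(n_layers)])
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))  # learned thresholds

    def forward(self, y):                 # y: (batch, m) measurements
        x = y.new_zeros(y.shape[0], self.W2[0].in_features)
        for k, (w1, w2) in enumerate(zip(self.W1, self.W2)):
            z = w1(y) + w2(x)             # one unrolled iteration...
            x = torch.sign(z) * torch.relu(z.abs() - self.theta[k])  # ...plus soft-thresholding
        return x                          # (batch, n) sparse estimate

# e.g., x_hat = UnrolledISTA(m=100, n=256)(torch.randn(8, 100))
```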
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Auditory Attention Decoding from EEG using Convolutional Recurrent Neural Network [20.37214453938965]
The auditory attention decoding (AAD) approach was proposed to determine the identity of the attended talker in a multi-talker scenario.
Recent models based on deep neural networks (DNN) have been proposed to solve this problem.
In this paper, we propose novel convolutional recurrent neural network (CRNN) based regression and classification models (a generic sketch of such a stack follows below).
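For a sense of what a CRNN looks like, here is a generic convolutional-recurrent stack: a Conv1d front-end over the EEG channels feeding a GRU, with a single head serving regression or classification depending on its output width. The layer sizes and input shape are assumptions, not the paper's architecture.

```python
# Generic CRNN sketch: Conv1d front-end over EEG channels feeding a GRU.
# Layer sizes and the input shape are assumptions, not the paper's model.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_channels=64, n_hidden=32, n_out=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
        )
        self.gru = nn.GRU(32, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_out)   # n_out=1: regression;
                                                 # n_out=2: attended-talker logits

    def forward(self, x):                 # x: (batch, channels, time)
        f = self.conv(x).transpose(1, 2)  # -> (batch, time, 32)
        _, h = self.gru(f)                # h: (1, batch, n_hidden)
        return self.head(h[-1])

# e.g., y = CRNN()(torch.randn(4, 64, 500))
```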
arXiv Detail & Related papers (2021-03-03T05:09:40Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
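The alignment-then-average idea behind layer-wise fusion can be sketched for a single linear layer: match the neurons of one model to the other, permute, and average. The sketch below uses a Hungarian assignment as a hard stand-in for a general optimal-transport coupling; it illustrates the idea, not the paper's full algorithm.

```python
# Simplified layer-wise fusion sketch for one linear layer: align model B's
# neurons to model A's with a hard assignment (a stand-in for an optimal-
# transport plan), then average. Not the paper's full algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layer(Wa, Wb):
    """Wa, Wb: (n_neurons, n_inputs) weights of the same layer from two
    models trained separately (e.g., on heterogeneous non-i.i.d. data)."""
    # Cost of matching neuron i of model A to neuron j of model B
    cost = np.linalg.norm(Wa[:, None, :] - Wb[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    Wb_aligned = Wb[cols]            # permute B's neurons to match A's
    # A full fusion would also permute the *next* layer's incoming
    # weights by `cols` before fusing it.
    return 0.5 * (Wa + Wb_aligned)   # "one-shot" average of aligned weights
```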
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.