A Novel Membership Inference Attack against Dynamic Neural Networks by
Utilizing Policy Networks Information
- URL: http://arxiv.org/abs/2210.08956v1
- Date: Mon, 17 Oct 2022 11:51:02 GMT
- Title: A Novel Membership Inference Attack against Dynamic Neural Networks by
Utilizing Policy Networks Information
- Authors: Pan Li, Peizhuo Lv, Shenchen Zhu, Ruigang Liang, Kai Chen
- Abstract summary: We propose a novel MI attack against dynamic NNs, leveraging the unique policy-network mechanism of dynamic NNs.
Based on backbone-finetuning and information-fusion, our method achieves better results than the baseline attack and the traditional attack.
- Score: 11.807178385292296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unlike traditional static deep neural networks (DNNs), dynamic neural
networks (NNs) adjust their structures or parameters to different inputs to
guarantee accuracy and computational efficiency, and they have recently become an
emerging research area in deep learning. Although traditional static
DNNs are vulnerable to the membership inference attack (MIA), which aims to
infer whether a particular point was used to train the model, little is known
about how such an attack performs on the dynamic NNs. In this paper, we propose
a novel MI attack against dynamic NNs, leveraging the unique policy-network
mechanism of dynamic NNs to increase the effectiveness of membership inference.
We conducted extensive experiments using two dynamic NNs, i.e., GaterNet and
BlockDrop, on four mainstream image classification tasks, i.e., CIFAR-10,
CIFAR-100, STL-10, and GTSRB. The evaluation results demonstrate that the
control-flow information can significantly promote the MIA. Based on
backbone-finetuning and information-fusion, our method achieves better results
than the baseline attack and the traditional attack that uses intermediate information.
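To make the control-flow/information-fusion idea concrete, below is a minimal sketch (in PyTorch) of an attack classifier that fuses the backbone's softmax posterior with the policy network's binary gate (control-flow) decisions for a queried sample. The module names, feature dimensions, and shadow-model training loop are illustrative assumptions, not the authors' exact pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionMIAttack(nn.Module):
    """Illustrative membership-inference attack model: it fuses two views of a
    queried sample -- the backbone's class posterior and the policy network's
    binary gate (control-flow) vector -- and predicts member vs. non-member."""

    def __init__(self, num_classes: int, num_gates: int, hidden: int = 64):
        super().__init__()
        self.posterior_branch = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU())
        self.gate_branch = nn.Sequential(nn.Linear(num_gates, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))  # logits: non-member / member

    def forward(self, posteriors: torch.Tensor, gates: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.posterior_branch(posteriors),
                           self.gate_branch(gates)], dim=1)
        return self.head(fused)

def train_attack(attack: FusionMIAttack, loader, epochs: int = 10, lr: float = 1e-3):
    """Shadow-model-style training: `loader` yields (posterior, gate_vector,
    is_member) triples collected by querying a shadow dynamic NN on data whose
    membership status is known to the attacker."""
    opt = torch.optim.Adam(attack.parameters(), lr=lr)
    for _ in range(epochs):
        for posteriors, gates, is_member in loader:
            opt.zero_grad()
            loss = F.cross_entropy(attack(posteriors, gates), is_member)
            loss.backward()
            opt.step()
    return attack
```

The intuition behind the fusion is that, for models such as BlockDrop or GaterNet, the pattern of executed blocks or channels can differ systematically between training members and non-members, so concatenating it with the usual confidence features can give the attack classifier an extra signal.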
Related papers
- Late Breaking Results: Fortifying Neural Networks: Safeguarding Against Adversarial Attacks with Stochastic Computing [1.523100574874007]
In neural network (NN) security, safeguarding model integrity and resilience against adversarial attacks has become paramount.
This study investigates the application of stochastic computing (SC) as a novel mechanism to fortify NN models.
arXiv Detail & Related papers (2024-07-05T20:49:32Z) - Problem space structural adversarial attacks for Network Intrusion Detection Systems based on Graph Neural Networks [8.629862888374243]
We propose the first formalization of adversarial attacks specifically tailored for GNNs in network intrusion detection.
We outline and model the problem space constraints that attackers need to consider to carry out feasible structural attacks in real-world scenarios.
Our findings demonstrate the increased robustness of the models against classical feature-based adversarial attacks.
arXiv Detail & Related papers (2024-03-18T14:40:33Z) - Dynamic Neural Network is All You Need: Understanding the Robustness of
Dynamic Mechanisms in Neural Networks [10.225238909616104]
We investigate the robustness of dynamic mechanisms in DyNNs and how dynamic mechanism design impacts the robustness of DyNNs.
We find that attack transferability from DyNNs to SDNNs is higher than attack transferability from SDNNs to DyNNs.
Also, we find that DyNNs can be used to generate adversarial samples more efficiently than SDNNs.
arXiv Detail & Related papers (2023-08-17T00:15:11Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z) - Interference Cancellation GAN Framework for Dynamic Channels [74.22393885274728]
We introduce an online training framework that can adapt to any changes in the channel.
Our framework significantly outperforms recent neural network models on highly dynamic channels.
arXiv Detail & Related papers (2022-08-17T02:01:18Z) - HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep
Spiking Neural Networks by Training with Crafted Input Noise [13.904091056365765]
We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard-trained direct-input SNNs, our trained models yield improved classification accuracy of up to 13.7%.
Our models also outperform inherently robust SNNs trained on rate-coded inputs with improved or similar classification performance on attack-generated images.
arXiv Detail & Related papers (2021-10-06T16:48:48Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - Evaluation of Adversarial Training on Different Types of Neural Networks
in Deep Learning-based IDSs [3.8073142980733]
We focus on investigating the effectiveness of different evasion attacks and how to train a resilient deep learning-based IDS.
We use the min-max approach to formulate the problem of training a robust IDS against adversarial examples (a generic sketch of such a min-max training loop appears after this list).
Our experiments on different deep learning algorithms and different benchmark datasets demonstrate that defense using an adversarial training-based min-max approach improves the robustness against five well-known adversarial attack methods.
arXiv Detail & Related papers (2020-07-08T23:33:30Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
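As flagged in the adversarial-training entry above, here is a generic min-max training sketch (in PyTorch): the inner maximization crafts an L-infinity PGD perturbation, and the outer minimization updates the model on the adversarial batch. The function names, hyperparameters, and the PGD inner solver are illustrative assumptions, not that paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.01, steps=20):
    """Inner maximization: search for a worst-case perturbation of x
    within an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project back into the eps-ball
    return x_adv

def robust_training_step(model, optimizer, x, y):
    """Outer minimization: update model parameters on the adversarial batch."""
    model.eval()                      # keep BN/dropout stable while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```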