Dynamics-aware Adversarial Attack of Adaptive Neural Networks
- URL: http://arxiv.org/abs/2210.08159v4
- Date: Thu, 11 Jan 2024 03:33:21 GMT
- Title: Dynamics-aware Adversarial Attack of Adaptive Neural Networks
- Authors: An Tao and Yueqi Duan and Yingqi Wang and Jiwen Lu and Jie Zhou
- Abstract summary: We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and demonstrate the significant effect of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods.
- Score: 75.50214601278455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate the dynamics-aware adversarial attack problem
of adaptive neural networks. Most existing adversarial attack algorithms are
designed under a basic assumption -- the network architecture is fixed
throughout the attack process. However, this assumption does not hold for many
recently proposed adaptive neural networks, which adaptively deactivate
unnecessary execution units based on inputs to improve computational
efficiency. This results in a serious lagged-gradient issue: the attack
learned at the current step becomes ineffective once the architecture
changes afterward. To address this issue, we propose a Leaded Gradient
Method (LGM) and demonstrate the significant effect of the lagged
gradient. More specifically, we
reformulate the gradients to be aware of the potential dynamic changes of
network architectures, so that the learned attack better "leads" the next step
than the dynamics-unaware methods when network architecture changes
dynamically. Extensive experiments on representative types of adaptive neural
networks for both 2D images and 3D point clouds show that our LGM achieves
impressive adversarial attack performance compared with dynamics-unaware
attack methods. Code is available at https://github.com/antao97/LGM.
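The lagged-gradient issue the abstract describes can be made concrete with a minimal sketch. The toy `AdaptiveNet`, its gating threshold, and the single FGSM-style step below are hypothetical illustrations of the problem setting, not the authors' networks or the released LGM code:

```python
import torch
import torch.nn as nn

class AdaptiveNet(nn.Module):
    """Toy adaptive network: a branch is skipped for 'easy' inputs,
    so the effective architecture depends on the input itself."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Linear(8, 8)
        self.branch = nn.Linear(8, 8)  # deactivated when activations are weak
        self.head = nn.Linear(8, 2)

    def forward(self, x):
        h = torch.relu(self.stem(x))
        # Input-dependent gating: the branch executes only when the
        # stem activation is strong enough (threshold is arbitrary).
        if h.norm() > 2.0:
            h = h + torch.relu(self.branch(h))
        return self.head(h)

net = AdaptiveNet()
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(1, 8, requires_grad=True)
y = torch.tensor([0])

# One FGSM-style step, computed on the architecture active for x.
loss = loss_fn(net(x), y)
loss.backward()
x_adv = x + 0.5 * x.grad.sign()

# If the perturbation flips the gate, the gradient was computed on an
# architecture that no longer executes: the lagged-gradient issue.
with torch.no_grad():
    h, h_adv = torch.relu(net.stem(x)), torch.relu(net.stem(x_adv))
    print("branch active before:", bool(h.norm() > 2.0),
          "| after:", bool(h_adv.norm() > 2.0))
```

LGM's reformulated, dynamics-aware gradients are designed to anticipate exactly this kind of architecture flip; the sketch above only exhibits the failure mode that motivates them.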
Related papers
- Are GATs Out of Balance? [73.2500577189791]
We study the Graph Attention Network (GAT) in which a node's neighborhood aggregation is weighted by parameterized attention coefficients.
Our main theorem serves as a stepping stone to studying the learning dynamics of positive homogeneous models with attention mechanisms.
arXiv Detail & Related papers (2023-10-11T06:53:05Z)
- Neuroevolution of Recurrent Architectures on Control Tasks [3.04585143845864]
We implement a massively parallel evolutionary algorithm and run experiments on all 19 OpenAI Gym state-based reinforcement learning control tasks.
We find that dynamic agents match or exceed the performance of gradient-based agents while utilizing orders of magnitude fewer parameters.
arXiv Detail & Related papers (2023-04-03T16:29:18Z)
- GradMDM: Adversarial Attack on Dynamic Networks [10.948810070861525]
We attack dynamic models with our novel algorithm GradMDM.
GradMDM adjusts the direction and the magnitude of the gradients to effectively find a small perturbation for each input.
We evaluate GradMDM on multiple datasets and dynamic models, where it outperforms previous energy-oriented attack techniques.
arXiv Detail & Related papers (2023-04-01T09:07:12Z)
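As a rough illustration of what adjusting both the direction and the magnitude of the gradient can look like, here is a generic per-input normalization sketch; it is an assumption-laden stand-in, not GradMDM's actual update rule:

```python
import torch

def adjusted_step(grad, step_size=1e-2, eps=1e-12):
    """Generic sketch (not GradMDM): keep each input's gradient
    direction but rescale its magnitude to a fixed small step,
    so every example receives a comparably small perturbation."""
    flat = grad.flatten(start_dim=1)            # per-input view
    norm = flat.norm(dim=1).clamp_min(eps)      # per-input magnitude
    shape = (-1,) + (1,) * (grad.dim() - 1)
    return step_size * grad / norm.view(shape)  # unit direction * step
```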
- Interference Cancellation GAN Framework for Dynamic Channels [74.22393885274728]
We introduce an online training framework that can adapt to any changes in the channel.
Our framework significantly outperforms recent neural network models on highly dynamic channels.
arXiv Detail & Related papers (2022-08-17T02:01:18Z)
- Dynamics-aware Adversarial Attack of 3D Sparse Convolution Network [75.1236305913734]
We investigate the dynamics-aware adversarial attack problem in deep neural networks.
Most existing adversarial attack algorithms are designed under a basic assumption -- the network architecture is fixed throughout the attack process.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
arXiv Detail & Related papers (2021-12-17T10:53:35Z)
- Improving Neural Network Robustness through Neighborhood Preserving Layers [0.751016548830037]
We demonstrate a novel neural network architecture which can incorporate such layers and also can be trained efficiently.
We empirically show that our designed network architecture is more robust against state-of-the-art gradient-descent-based attacks.
arXiv Detail & Related papers (2021-01-28T01:26:35Z)
- Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a subset of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z)
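A minimal sketch of the "map to scores and select a subset" step, under the simple assumption that entries are scored by gradient magnitude (the paper's actual ADV-ReLU scoring rule may differ):

```python
import torch

def select_gradient(grad, keep_ratio=0.3):
    """Hypothetical helper: score gradient entries by absolute value
    and keep only the top fraction, zeroing the rest before the
    attack update."""
    flat = grad.abs().flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.topk(k).values.min()  # k-th largest score
    return torch.where(grad.abs() >= threshold, grad, torch.zeros_like(grad))
```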
- Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks [7.20382137043754]
A class of adversarial attack algorithms has been proposed to generate robust physical perturbations.
In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays.
arXiv Detail & Related papers (2020-08-03T21:55:41Z)
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork carefully designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
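The side-branch idea can be sketched with a deeply-supervised-style toy model; the module and loss weight below are hypothetical illustrations in the spirit of DSN, not the paper's DHM architecture:

```python
import torch
import torch.nn as nn

class SideBranchNet(nn.Module):
    """Toy network with an auxiliary head forked from an intermediate
    layer, so intermediate features receive their own supervision."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.main_head = nn.Linear(64, num_classes)
        self.side_head = nn.Linear(64, num_classes)  # forked side branch

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        return self.main_head(h2), self.side_head(h1)

net = SideBranchNet()
x, y = torch.randn(4, 32), torch.randint(0, 10, (4,))
main_logits, side_logits = net(x)
loss_fn = nn.CrossEntropyLoss()
# Both heads are trained toward the same labels, keeping the
# optimization objectives at different depths consistent.
loss = loss_fn(main_logits, y) + 0.3 * loss_fn(side_logits, y)
loss.backward()
```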