GradMDM: Adversarial Attack on Dynamic Networks
- URL: http://arxiv.org/abs/2304.06724v1
- Date: Sat, 1 Apr 2023 09:07:12 GMT
- Title: GradMDM: Adversarial Attack on Dynamic Networks
- Authors: Jianhong Pan, Lin Geng Foo, Qichen Zheng, Zhipeng Fan, Hossein
Rahmani, Qiuhong Ke, Jun Liu
- Abstract summary: We attack dynamic models with our novel algorithm GradMDM.
GradMDM adjusts the direction and the magnitude of the gradients to effectively find a small perturbation for each input.
We evaluate GradMDM on multiple datasets and dynamic models, where it outperforms previous energy-oriented attack techniques.
- Score: 10.948810070861525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic neural networks can greatly reduce computation redundancy without
compromising accuracy by adapting their structures based on the input. In this
paper, we explore the robustness of dynamic neural networks against
energy-oriented attacks targeted at reducing their efficiency. Specifically, we
attack dynamic models with our novel algorithm GradMDM. GradMDM is a technique
that adjusts the direction and the magnitude of the gradients to effectively
find a small perturbation for each input that activates more computational
units of dynamic models during inference. We evaluate GradMDM on multiple
datasets and dynamic models, where it outperforms previous energy-oriented
attack techniques, significantly increasing computation complexity while
reducing the perceptibility of the perturbations.
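The abstract describes the core idea: optimize a small input perturbation so that more of a dynamic model's computational units fire at inference time. The following is a minimal, hypothetical sketch of such an energy-oriented attack in the spirit of GradMDM; the model interface (a network returning soft gate scores) and all names here are illustrative assumptions, not the authors' actual method or code.

```python
# Hypothetical sketch of an energy-oriented attack; not the authors' code.
import torch

class ToyDynamicNet(torch.nn.Module):
    """Stand-in for a dynamic network: emits logits plus soft gate
    scores in [0, 1] that decide which units would execute."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 3)
        self.gate = torch.nn.Linear(4, 5)

    def forward(self, x):
        return self.fc(x), torch.sigmoid(self.gate(x))

def energy_attack(model, x, steps=10, alpha=1e-2, eps=8 / 255):
    """Gradient-ascent search for a small perturbation that raises a
    differentiable proxy for the number of active units."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        _, gates = model(x + delta)
        gates.sum().backward()          # more gate mass -> more computation
        with torch.no_grad():
            # Normalize the gradient direction and take a fixed-size step,
            # loosely echoing GradMDM's direction/magnitude adjustment.
            g = delta.grad / (delta.grad.norm() + 1e-12)
            delta += alpha * g
            delta.clamp_(-eps, eps)     # keep the perturbation imperceptible
        delta.grad.zero_()
    return (x + delta).detach()
```

Note that real dynamic models typically make hard (non-differentiable) gating decisions, so a soft relaxation of the gates, as sketched above, is needed to obtain usable gradients.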
Related papers
- A Riemannian Framework for Learning Reduced-order Lagrangian Dynamics [18.151022395233152]
We propose a novel geometric network architecture to learn physically-consistent reduced-order dynamic parameters.
Our approach enables accurate long-term predictions of the high-dimensional dynamics of rigid and deformable systems.
arXiv Detail & Related papers (2024-10-24T15:53:21Z)
- Trapezoidal Gradient Descent for Effective Reinforcement Learning in Spiking Networks [10.422381897413263]
Spiking Neural Networks (SNNs), with their low energy consumption and strong performance, have garnered widespread attention.
To reduce the energy consumption of practical applications of reinforcement learning, researchers have successively proposed the Pop-SAN and MDC-SAN algorithms.
We propose a trapezoidal approximation gradient method to replace the spike network, which not only preserves the original stable learning state but also enhances the model's adaptability and response sensitivity under various signal dynamics.
arXiv Detail & Related papers (2024-06-19T13:56:22Z)
- Are GATs Out of Balance? [73.2500577189791]
We study the Graph Attention Network (GAT) in which a node's neighborhood aggregation is weighted by parameterized attention coefficients.
Our main theorem serves as a stepping stone to studying the learning dynamics of positive homogeneous models with attention mechanisms.
arXiv Detail & Related papers (2023-10-11T06:53:05Z)
- The Underlying Correlated Dynamics in Neural Training [6.385006149689549]
Training of neural networks is a computationally intensive task.
We propose a model based on the correlation of the parameters' dynamics, which dramatically reduces the dimensionality.
This representation enhances the understanding of the underlying training dynamics and can pave the way for designing better acceleration techniques.
arXiv Detail & Related papers (2022-12-18T08:34:11Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Dynamics-aware Adversarial Attack of 3D Sparse Convolution Network [75.1236305913734]
We investigate the dynamics-aware adversarial attack problem in deep neural networks.
Most existing adversarial attack algorithms are designed under a basic assumption -- the network architecture is fixed throughout the attack process.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
arXiv Detail & Related papers (2021-12-17T10:53:35Z)
- Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stable and effective training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.