GradDiv: Adversarial Robustness of Randomized Neural Networks via
Gradient Diversity Regularization
- URL: http://arxiv.org/abs/2107.02425v1
- Date: Tue, 6 Jul 2021 06:57:40 GMT
- Title: GradDiv: Adversarial Robustness of Randomized Neural Networks via
Gradient Diversity Regularization
- Authors: Sungyoon Lee, Hoki Kim, Jaewook Lee
- Abstract summary: We investigate the effect of adversarial attacks using proxy gradients on randomized neural networks.
We show that proxy gradients are less effective when the gradients are more scattered.
We propose Gradient Diversity (GradDiv) regularizations that minimize the concentration of the gradients to build a robust neural network.
- Score: 3.9157051137215504
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning is vulnerable to adversarial examples. Many defenses based on
randomized neural networks have been proposed to solve the problem, but fail to
achieve robustness against attacks using proxy gradients such as the
Expectation over Transformation (EOT) attack. We investigate the effect of
adversarial attacks using proxy gradients on randomized neural networks and
demonstrate that their effectiveness depends strongly on the directional
distribution of the network's loss gradients. We show in particular that proxy
gradients are less effective when the gradients are more scattered. To this
end, we propose Gradient Diversity (GradDiv) regularizations that minimize the
concentration of the gradients to build a robust randomized neural network. Our
experiments on MNIST, CIFAR10, and STL10 show that our proposed GradDiv
regularizations improve the adversarial robustness of randomized neural
networks against a variety of state-of-the-art attack methods. Moreover, our
method effectively reduces the transferability among sample models of
randomized neural networks.
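The mechanism in the abstract is directional: an EOT-style attack averages loss gradients over the network's randomness to form a proxy gradient, and that proxy carries less signal when the sampled gradient directions are scattered. Below is a minimal, illustrative sketch of how a gradient-diversity penalty of this flavor could be wired into training, assuming a PyTorch setup, dropout as the source of randomness, and the mean resultant length from directional statistics as the concentration measure; the names, the measure, and the weighting lam are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomizedNet(nn.Module):
    """Toy randomized classifier: dropout is kept active, so every forward
    pass samples a different sub-model."""
    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.drop = nn.Dropout(p=0.3)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

def gradient_concentration(model, x, y, n_samples=4):
    """Mean resultant length of unit-normalized input gradients over
    n_samples stochastic passes (1 = perfectly aligned, near 0 = scattered)."""
    x = x.clone().requires_grad_(True)
    directions = []
    for _ in range(n_samples):
        loss = F.cross_entropy(model(x), y)
        g, = torch.autograd.grad(loss, x, create_graph=True)
        g = g.flatten(1)
        directions.append(g / (g.norm(dim=1, keepdim=True) + 1e-12))
    # Norm of the averaged unit gradients, averaged over the batch.
    return torch.stack(directions).mean(dim=0).norm(dim=1).mean()

def graddiv_style_loss(model, x, y, lam=1.0):
    """Classification loss plus a penalty on gradient concentration;
    minimizing it scatters the per-sample input gradients."""
    return F.cross_entropy(model(x), y) + lam * gradient_concentration(model, x, y)

if __name__ == "__main__":
    model = RandomizedNet()
    x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
    loss = graddiv_style_loss(model, x, y)
    loss.backward()  # gradients for an optimizer step on the penalized loss
    print(float(loss))

An EOT attacker would use the average of such sampled gradients as its proxy; driving the mean resultant length down during training is precisely what makes that average a poor summary of any individual sampled gradient.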
Related papers
- Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
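For reference, under its standard definition an implicit (proximal-point) stochastic gradient step evaluates the gradient at the new iterate rather than the current one, $\theta_{k+1} = \theta_k - \eta \nabla L(\theta_{k+1})$, which is the optimality condition of the proximal step $\theta_{k+1} = \arg\min_\theta L(\theta) + \frac{1}{2\eta}\|\theta - \theta_k\|^2$; this implicitness is the usual source of the added stability (the precise variant used for PINNs is specified in the paper).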
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Robustness against Adversarial Attacks in Neural Networks using
Incremental Dissipativity [3.8673567847548114]
Adversarial examples can easily degrade the classification performance in neural networks.
This work proposes an incremental dissipativity-based robustness certificate for neural networks.
arXiv Detail & Related papers (2021-11-25T04:42:57Z)
- Neural Network Adversarial Attack Method Based on Improved Genetic
Algorithm [0.0]
We propose a neural network adversarial attack method based on an improved genetic algorithm.
The method does not require knowledge of the internal structure or parameters of the neural network model.
arXiv Detail & Related papers (2021-10-05T04:46:16Z)
- Differentially private training of neural networks with Langevin
dynamics for calibrated predictive uncertainty [58.730520380312676]
We show that differentially private gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
This represents a serious issue for safety-critical applications, e.g. in medical diagnosis.
arXiv Detail & Related papers (2021-07-09T08:14:45Z)
- Proxy Convexity: A Unified Framework for the Analysis of Neural Networks
Trained by Gradient Descent [95.94432031144716]
We propose a unified non-convex optimization framework for the analysis of neural network training.
We show that many existing guarantees for networks trained by gradient descent can be unified through this framework.
arXiv Detail & Related papers (2021-06-25T17:45:00Z)
- Improving Transformation-based Defenses against Adversarial Examples
with First-order Perturbations [16.346349209014182]
Studies show that neural networks are susceptible to adversarial attacks.
This exposes a potential threat to neural network-based intelligent systems.
We propose a method for counteracting adversarial perturbations to improve adversarial robustness.
arXiv Detail & Related papers (2021-03-08T06:27:24Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve
Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by the Expectation-Maximization algorithm, an alternating back-propagation training procedure is introduced to train the network and the noise parameters in turn.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
- Semi-Implicit Back Propagation [1.5533842336139065]
We propose a semi-implicit back propagation method for neural network training.
The differences on the neurons are propagated in a backward fashion, and the parameters are updated via a proximal mapping.
Experiments on both MNIST and CIFAR-10 demonstrate that the proposed algorithm leads to better performance in terms of both loss decreasing and training/validation accuracy.
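For context, the proximal mapping referred to above is the standard operator $\mathrm{prox}_{\eta f}(v) = \arg\min_w f(w) + \frac{1}{2\eta}\|w - v\|^2$, so updating parameters 'with proximal mapping' means solving this small subproblem in place of an explicit gradient step; this is the generic definition rather than the paper's exact update rule.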
arXiv Detail & Related papers (2020-02-10T03:26:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.