Adversarial robustness via stochastic regularization of neural
activation sensitivity
- URL: http://arxiv.org/abs/2009.11349v1
- Date: Wed, 23 Sep 2020 19:31:55 GMT
- Title: Adversarial robustness via stochastic regularization of neural
activation sensitivity
- Authors: Gil Fidel, Ron Bitton, Ziv Katzir, Asaf Shabtai
- Abstract summary: We suggest a novel defense mechanism that simultaneously addresses both defense goals.
We flatten the gradients of the loss surface, making adversarial examples harder to find.
In addition, we push the decision boundary away from correctly classified inputs by leveraging Jacobian regularization.
- Score: 24.02105949163359
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have shown that the input domain of any machine learning
classifier is bound to contain adversarial examples. Thus we can no longer hope
to immunize classifiers against adversarial examples and instead can only aim to
achieve the following two defense goals: 1) making adversarial examples harder
to find, or 2) weakening their adversarial nature by pushing them further away
from correctly classified data points. Most, if not all, of the previously suggested
defense mechanisms attend to just one of those two goals, and as such, could be
bypassed by adaptive attacks that take the defense mechanism into
consideration. In this work we suggest a novel defense mechanism that
simultaneously addresses both defense goals: We flatten the gradients of the
loss surface, making adversarial examples harder to find, using a novel
stochastic regularization term that explicitly decreases the sensitivity of
individual neurons to small input perturbations. In addition, we push the
decision boundary away from correctly classified inputs by leveraging Jacobian
regularization. We present a solid theoretical basis and an empirical evaluation
of our suggested approach, demonstrate its superiority over previously
suggested defense mechanisms, and show that it is effective against a wide
range of adaptive attacks.
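The core of the approach is a training loss with two regularizers: a stochastic term that decreases the sensitivity of neuron activations to small input perturbations (flattening the loss-surface gradients), and a Jacobian regularization term that pushes the decision boundary away from correctly classified inputs. The sketch below is a minimal PyTorch illustration of how such terms are commonly combined with the classification loss; the specific estimators, the noise scale sigma, and the weights lam_jac and lam_sens are illustrative assumptions, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def jacobian_penalty(model, x, n_proj=1):
    # Approximate ||d logits / d x||_F^2 with random projections
    # (vector-Jacobian products); a standard estimator, the paper's
    # exact Jacobian term may differ.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    total = x.new_zeros(())
    for _ in range(n_proj):
        v = torch.randn_like(logits)
        v = v / v.norm(dim=1, keepdim=True)        # random unit vector per sample
        (vjp,) = torch.autograd.grad(logits, x, grad_outputs=v, create_graph=True)
        total = total + vjp.flatten(1).pow(2).sum(1).mean()
    return logits.shape[1] * total / n_proj        # rescale for an unbiased estimate

def sensitivity_penalty(model, x, sigma=0.05):
    # Hypothetical stand-in for the stochastic activation-sensitivity term:
    # penalize how far the outputs move under small Gaussian input noise.
    return F.mse_loss(model(x + sigma * torch.randn_like(x)), model(x).detach())

def training_loss(model, x, y, lam_jac=0.01, lam_sens=0.1):
    # Cross-entropy plus the two regularizers; the weights are illustrative.
    ce = F.cross_entropy(model(x), y)
    return ce + lam_jac * jacobian_penalty(model, x) + lam_sens * sensitivity_penalty(model, x)
```

In practice the clean forward pass would be shared across the three terms rather than recomputed; it is repeated here only to keep each term self-contained.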
Related papers
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Towards A Conceptually Simple Defensive Approach for Few-shot
classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z) - TREATED:Towards Universal Defense against Textual Adversarial Attacks [28.454310179377302]
We propose TREATED, a universal adversarial detection method that can defend against attacks of various perturbation levels without making any assumptions.
Extensive experiments on three competitive neural networks and two widely used datasets show that our method achieves better detection performance than baselines.
arXiv Detail & Related papers (2021-09-13T03:31:20Z) - Searching for an Effective Defender: Benchmarking Defense against
Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z) - Learning to Separate Clusters of Adversarial Representations for Robust
Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we consider non-robust features to be a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Adversarial Feature Desensitization [12.401175943131268]
We propose a novel approach to adversarial robustness, which builds upon the insights from the domain adaptation field.
Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant to adversarial perturbations of the inputs.
arXiv Detail & Related papers (2020-06-08T14:20:02Z) - RAID: Randomized Adversarial-Input Detection for Neural Networks [7.37305608518763]
We propose a novel technique for adversarial-image detection, RAID, that trains a secondary classifier to identify differences in neuron activation values between benign and adversarial inputs.
RAID is more reliable and more effective than the state of the art when evaluated against six popular attacks.
arXiv Detail & Related papers (2020-02-07T13:27:29Z)