Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial
Robustness
- URL: http://arxiv.org/abs/2112.02671v1
- Date: Sun, 5 Dec 2021 20:00:10 GMT
- Title: Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial
Robustness
- Authors: Konstantinos P. Panousis, Sotirios Chatzis, Sergios Theodoridis
- Abstract summary: This work explores the potency of stochastic competition-based activations, namely Local Winner-Takes-All (LWTA), against powerful adversarial attacks.
We replace the conventional ReLU-based nonlinearities with blocks comprising locally and stochastically competing linear units.
As we experimentally show, the arising networks yield state-of-the-art robustness against powerful adversarial attacks.
- Score: 9.017401570529135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work explores the potency of stochastic competition-based activations,
namely Stochastic Local Winner-Takes-All (LWTA), against powerful
(gradient-based) white-box and black-box adversarial attacks; we especially
focus on Adversarial Training settings. In our work, we replace the
conventional ReLU-based nonlinearities with blocks comprising locally and
stochastically competing linear units. Each network layer thus yields a sparse
output, determined by the outcome of winner sampling in each block. We rely on
the Variational Bayesian framework for training and
inference; we incorporate conventional PGD-based adversarial training arguments
to increase the overall adversarial robustness. As we experimentally show, the
arising networks yield state-of-the-art robustness against powerful adversarial
attacks, while retaining a very high classification rate in the benign case.
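The core mechanism described above can be sketched in a few lines: linear units are grouped into small blocks, one "winner" per block is sampled from a softmax over the block's activations, and all losing units are zeroed out, producing the sparse layer output. This is an illustrative sketch only; the function name `stochastic_lwta`, the `block_size` parameter, and the plain-softmax sampling are assumptions, and the paper's actual model embeds this in a Variational Bayesian training scheme.

```python
import numpy as np

def stochastic_lwta(x, block_size=2, rng=None):
    """Stochastic Local Winner-Takes-All activation (illustrative sketch).

    Splits a layer's linear outputs into blocks of `block_size` units,
    samples one winner per block from a softmax over the block, and
    zeroes out the losing units, yielding a sparse output.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    blocks = x.reshape(-1, block_size)              # (num_blocks, block_size)
    # Softmax within each block gives the winner-sampling probabilities.
    z = blocks - blocks.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Draw one winner index per block (categorical sample).
    winners = np.array([rng.choice(block_size, p=p) for p in probs])
    mask = np.zeros_like(blocks)
    mask[np.arange(len(blocks)), winners] = 1.0     # keep winners only
    return (blocks * mask).reshape(x.shape)

out = stochastic_lwta([3.0, -1.0, 0.5, 2.0], block_size=2)
```

Note that, unlike ReLU, the output is stochastic: repeated forward passes can select different winners, which is precisely what makes gradient-based attacks harder to mount.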
Related papers
- General Adversarial Defense Against Black-box Attacks via Pixel Level
and Feature Level Distribution Alignments [75.58342268895564]
We use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap.
The trained DGNs align the distribution of adversarial samples with clean ones for the target DNNs by translating pixel values.
Our strategy demonstrates its unique effectiveness and generality against black-box attacks.
arXiv Detail & Related papers (2022-12-11T01:51:31Z) - Resisting Adversarial Attacks in Deep Neural Networks using Diverse
Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experiments using standard image classification datasets, namely MNIST, CIFAR-10 and CIFAR-100, against state-of-the-art adversarial attacks.
arXiv Detail & Related papers (2022-08-18T08:19:26Z) - Distributed Adversarial Training to Robustify Deep Neural Networks at
Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach, known as adversarial training (AT), has been shown to improve robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z) - Competing Mutual Information Constraints with Stochastic
Competition-based Activations for Learning Diversified Representations [5.981521556433909]
This work aims to address the long-established problem of learning diversified representations.
We combine information-theoretic arguments with competition-based activations.
As we experimentally show, the resulting networks yield significantly improved discriminative representation learning abilities.
arXiv Detail & Related papers (2022-01-10T20:12:13Z) - Defensive Tensorization [113.96183766922393]
We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network.
We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks.
We validate the versatility of our approach across domains and low-precision architectures by considering an audio task and binary networks.
arXiv Detail & Related papers (2021-10-26T17:00:16Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Combating Adversaries with Anti-Adversaries [118.70141983415445]
In particular, our layer generates an input perturbation in the opposite direction of the adversarial one.
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost on clean accuracy.
arXiv Detail & Related papers (2021-03-26T09:36:59Z) - Local Competition and Stochasticity for Adversarial Robustness in Deep
Learning [8.023314613846418]
This work addresses adversarial robustness in deep learning by considering deep networks with local winner-takes-all activations.
This type of network unit results in sparse representations from each model layer, as the units are organized in blocks where only one unit generates a non-zero output.
arXiv Detail & Related papers (2021-01-04T17:40:52Z) - Robust Reinforcement Learning using Adversarial Populations [118.73193330231163]
Reinforcement Learning (RL) is an effective tool for controller design but can struggle with issues of robustness.
We show that using a single adversary does not consistently yield robustness to dynamics variations under standard parametrizations of the adversary.
We propose a population-based augmentation to the Robust RL formulation in which we randomly initialize a population of adversaries and sample from the population uniformly during training.
arXiv Detail & Related papers (2020-08-04T20:57:32Z) - Local Competition and Uncertainty for Adversarial Robustness in Deep
Learning [6.4649419408439766]
This work attempts to address adversarial robustness of deep networks by means of novel learning arguments.
Inspired by results in neuroscience, we propose a local competition principle as a means of adversarially-robust deep learning.
Our model achieves state-of-the-art results in powerful white-box attacks, while at the same time retaining its benign accuracy to a high degree.
arXiv Detail & Related papers (2020-06-18T15:41:11Z)
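Several of the papers above, including the main one, rely on PGD-based adversarial training. The inner maximization can be sketched as follows, assuming a simple logistic-regression loss so the input gradient has a closed form; the name `pgd_attack` and the parameters `eps`, `alpha`, and `steps` are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """Projected Gradient Descent attack on a logistic-regression loss.

    Repeatedly steps in the sign of the loss's input gradient (ascent),
    then projects back onto the L-infinity ball of radius `eps` around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Gradient of the logistic loss w.r.t. the input: (p - y) * w.
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad)    # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to eps-ball
    return x_adv

# Adversarial training then simply optimizes the model on these
# perturbed inputs instead of (or mixed with) the clean ones.
x = np.array([1.0, -2.0])
w = np.array([0.5, -0.5])
x_adv = pgd_attack(x, y=1.0, w=w, b=0.0)
```

In deep networks the input gradient comes from backpropagation rather than a closed form, but the sign-step-and-project loop is the same.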
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.