Recurrent Attention Model with Log-Polar Mapping is Robust against
Adversarial Attacks
- URL: http://arxiv.org/abs/2002.05388v1
- Date: Thu, 13 Feb 2020 08:40:48 GMT
- Title: Recurrent Attention Model with Log-Polar Mapping is Robust against
Adversarial Attacks
- Authors: Taro Kiritani, Koji Ono
- Abstract summary: We develop a novel artificial neural network model that recurrently collects data with a log-polar field of view controlled by attention.
We demonstrate the effectiveness of this design as a defense against SPSA and PGD adversarial attacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks are vulnerable to small $\ell^p$ adversarial
attacks, while the human visual system is not. Inspired by neural networks in
the eye and the brain, we developed a novel artificial neural network model
that recurrently collects data with a log-polar field of view that is
controlled by attention. We demonstrate the effectiveness of this design as a
defense against SPSA and PGD adversarial attacks. It also has beneficial
properties observed in the animal visual system, such as reflex-like pathways
for low-latency inference, fixed amount of computation independent of image
size, and rotation and scale invariance. The code for experiments is available
at https://gitlab.com/exwzd-public/kiritani_ono_2020.
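For intuition, the key ingredient, sampling the image on a log-polar grid around an attention-selected fixation point, can be sketched in a few lines. This is a minimal illustration under assumed grid sizes, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def log_polar_glimpse(image, center, n_rings=32, n_wedges=64, r_min=1.0):
    """Sample `image` (H, W) on a log-polar grid around `center` (y, x).

    Rings are spaced logarithmically, so resolution is high near the
    fixation point and falls off toward the periphery, as in a fovea.
    """
    h, w = image.shape
    cy, cx = center
    radii = np.geomspace(r_min, np.hypot(h, w) / 2.0, n_rings)
    angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[ys, xs]  # shape: (n_rings, n_wedges)

# One "glimpse"; a recurrent attention module would pick `center` at each step.
img = np.random.rand(224, 224)
print(log_polar_glimpse(img, center=(112, 112)).shape)  # (32, 64)
```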
Related papers
- Finite Gaussian Neurons: Defending against adversarial attacks by making
neural networks say "I don't know" [0.0]
I introduce the Finite Gaussian Neuron (FGN), a novel neuron architecture for artificial neural networks.
My work aims to easily convert existing models to the FGN architecture, while preserving the existing model's behavior on real data and offering resistance against adversarial attacks.
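As a rough illustration of the concept, an FGN-style unit can be read as a standard neuron whose activation is damped by a Gaussian of the input's distance from a learned center, so its response fades to zero far from the data. The layer below is a hedged sketch of that reading; the paper's exact parameterization may differ.

```python
import math
import torch
import torch.nn as nn

class FiniteGaussianLayer(nn.Module):
    """Sketch of an FGN-style layer: a linear unit whose activation is
    multiplied by a Gaussian of the distance between the input and a
    learned center, so responses decay toward zero (an implicit
    "I don't know") away from the training data."""
    def __init__(self, in_features, out_features, sigma=10.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.centers = nn.Parameter(torch.zeros(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.full((out_features,), math.log(sigma)))

    def forward(self, x):
        # Squared distance from every input to every unit's center: (batch, out).
        d2 = ((x.unsqueeze(1) - self.centers.unsqueeze(0)) ** 2).sum(dim=-1)
        gauss = torch.exp(-d2 / (2.0 * torch.exp(self.log_sigma) ** 2))
        return torch.tanh(self.linear(x)) * gauss

layer = FiniteGaussianLayer(in_features=784, out_features=128)
print(layer(torch.randn(8, 784)).shape)  # torch.Size([8, 128])
```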
arXiv Detail & Related papers (2023-06-13T14:17:25Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- A New Kind of Adversarial Example [47.64219291655723]
A large enough perturbation is added to an image such that a model maintains its original decision, whereas a human will most likely make a mistake if forced to decide.
Our proposed attack, dubbed NKE, is similar in essence to the fooling images, but is more efficient since it uses gradient descent instead of evolutionary algorithms.
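One plausible gradient-descent formulation of such an attack (an illustrative sketch, not necessarily the paper's NKE objective): move the image toward a guide image that a human would read as a different class, while a penalty keeps the model's original prediction unchanged. `model`, `guide`, and the hyper-parameters are placeholders.

```python
import torch
import torch.nn.functional as F

def keep_label_attack(model, x, guide, label, steps=200, lr=0.01, lam=5.0):
    """Sketch: make `x` resemble `guide` (an image a human would call a
    different class) while a penalty keeps the model's prediction for
    the original `label` intact.  All hyper-parameters are illustrative."""
    x_adv = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        visual = F.mse_loss(x_adv, guide)            # mislead the human
        keep = F.cross_entropy(model(x_adv), label)  # keep the model's decision
        (visual + lam * keep).backward()
        opt.step()
        x_adv.data.clamp_(0.0, 1.0)                  # stay a valid image
    return x_adv.detach()
```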
arXiv Detail & Related papers (2022-08-04T03:45:44Z)
- CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks [22.900501880865658]
Backdoor attacks pose a new threat to Deep Neural Networks (DNNs).
We propose PiDAn, an algorithm based on coherence optimization that purifies the poisoned data.
Our PiDAn algorithm can detect more than 90% of infected classes and identify 95% of poisoned samples.
arXiv Detail & Related papers (2022-03-17T12:37:21Z)
- Efficient and Robust Classification for Sparse Attacks [34.48667992227529]
We consider perturbations bounded by the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.
We propose a novel defense method that consists of "truncation" and "adversarial training".
Motivated by the insights we obtain, we extend these components to neural network classifiers.
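For intuition, one way to read the "truncation" component against sparse ($\ell_0$-bounded) perturbations is to discard the k largest-magnitude coordinate contributions before summing a linear score, so a handful of corrupted coordinates cannot dominate it. The operator below is an assumption-laden sketch, not the paper's exact definition.

```python
import numpy as np

def truncated_dot(w, x, k=1):
    """Inner product that drops the k largest-magnitude coordinate
    contributions, limiting how far an attacker who corrupts at most k
    coordinates can move the score."""
    contrib = w * x
    keep = np.argsort(np.abs(contrib))[:-k] if k > 0 else slice(None)
    return contrib[keep].sum()

w = np.array([0.5, -0.2, 0.1, 0.3])
x_clean = np.array([1.0, 1.0, 1.0, 1.0])
x_adv = x_clean.copy()
x_adv[0] = 100.0                     # sparse attack: a single coordinate changed

print(w @ x_clean, w @ x_adv)        # ~0.7 vs ~50.2 -- the plain dot is hijacked
print(truncated_dot(w, x_clean), truncated_dot(w, x_adv))  # ~0.2 vs ~0.2 -- truncation holds
```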
arXiv Detail & Related papers (2022-01-23T21:18:17Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
A generative-based adversarial attack can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
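In the same discriminator-free spirit (the sketch below is generic and does not reproduce the paper's SSAE architecture or losses), a perturbation generator can be trained directly against a frozen target classifier:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Tiny encoder-decoder that maps an image to a bounded perturbation."""
    def __init__(self, channels=3, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps the perturbation inside [-eps, eps] for visual quality.
        return self.eps * torch.tanh(self.net(x))

def generator_loss(target_model, generator, x, y):
    """No discriminator: the frozen target classifier is the only adversary."""
    x_adv = (x + generator(x)).clamp(0.0, 1.0)
    logits = target_model(x_adv)
    return -F.cross_entropy(logits, y)  # maximize the target's error (untargeted)
```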
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Deep neural network loses attention to adversarial images [11.650381752104296]
Adversarial algorithms have been shown to be effective against neural networks on a variety of tasks.
We show that in the case of the Pixel Attack, perturbed pixels either draw the network's attention to themselves or divert the attention away from them.
We also show that both attacks affect the saliency map and activation maps differently.
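The attention shift described can be probed with a plain gradient saliency map, as sketched below; the paper's saliency and attention measures may differ.

```python
import torch

def saliency_map(model, x, label):
    """Vanilla gradient saliency: |d(score of `label`) / d(input)| per pixel."""
    x = x.clone().requires_grad_(True)
    model(x)[0, label].backward()
    return x.grad.abs().max(dim=1)[0]  # reduce over color channels -> (1, H, W)

# Compare where the network "looks" before and after an attack, e.g.:
# sal_clean = saliency_map(model, x_clean, y)
# sal_adv   = saliency_map(model, x_adv, y)
# attention_shift = (sal_adv - sal_clean).abs().mean()
```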
arXiv Detail & Related papers (2021-06-10T11:06:17Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations [11.334887948796611]
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks.
The most effective current defense is to train the network using adversarially perturbed examples.
In this paper, we investigate a radically different, neuro-inspired defense mechanism.
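For reference, the adversarial-training baseline mentioned above typically looks like the following PGD-based loop. This is a generic sketch of that baseline, not the paper's neuro-inspired autoencoding defense; `model` and `optimizer` are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent inside an l_inf ball of radius eps."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y):
    """One step of training on adversarially perturbed examples."""
    model.eval()
    x_adv = pgd(model, x, y)          # craft perturbations on the fly
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```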
arXiv Detail & Related papers (2020-11-21T21:03:08Z)