Extreme Value Preserving Networks
- URL: http://arxiv.org/abs/2011.08367v1
- Date: Tue, 17 Nov 2020 02:06:52 GMT
- Title: Extreme Value Preserving Networks
- Authors: Mingjie Sun, Jianguo Li, Changshui Zhang
- Abstract summary: Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, making them non-robust to adversarial perturbations of textures.
This paper aims to leverage the desirable properties of SIFT to renovate CNN architectures towards better accuracy and robustness.
- Score: 65.2037926048262
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent evidence shows that convolutional neural networks (CNNs) are biased
towards textures, so that CNNs are non-robust to adversarial perturbations of
textures, while traditional robust visual features like SIFT (scale-invariant
feature transform) are designed to be robust across a substantial range of
affine distortions, noise addition, etc., in a way that mimics human perception.
This paper aims to leverage the desirable properties of SIFT to renovate CNN
architectures towards better accuracy and robustness. We borrow the scale-space
extreme value idea from SIFT and propose extreme value preserving networks
(EVPNets). Experiments demonstrate that EVPNets can achieve similar or better
accuracy than conventional CNNs, while achieving much better robustness against a
set of adversarial attacks (FGSM, PGD, etc.) even without adversarial training.
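To make the borrowed idea concrete: in SIFT, keypoints are detected as local extrema of a difference-of-Gaussians (DoG) scale space, i.e. points whose response dominates a neighbourhood in both space and scale. The sketch below shows this classical operation; it is only an illustration of the scale-space extreme value idea, not the EVP block actually proposed in the paper (the abstract does not specify its architecture), and the function name and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema(image, sigmas=(1.0, 1.6, 2.6, 4.2), thresh=0.01):
    """SIFT-style scale-space extrema: keep pixels that are local maxima
    or minima of the DoG stack over space *and* scale.
    Assumes a single-channel image scaled to [0, 1]."""
    # Gaussian scale space: one blurred copy of the image per sigma.
    stack = np.stack([gaussian_filter(image.astype(np.float64), sigma)
                      for sigma in sigmas])
    # Difference of Gaussians between adjacent scales: shape (S-1, H, W).
    dog = stack[1:] - stack[:-1]
    # A point survives only if it equals the max (or min) of its
    # 3x3x3 neighbourhood across (scale, y, x).
    is_max = dog == maximum_filter(dog, size=(3, 3, 3))
    is_min = dog == minimum_filter(dog, size=(3, 3, 3))
    keep = (is_max | is_min) & (np.abs(dog) > thresh)
    # Coordinates (scale_index, y, x) of the surviving extrema.
    return np.argwhere(keep)
```

Responses that are not extreme within their spatio-scale neighbourhood are discarded, which is what makes the detector insensitive to small texture-level perturbations; a network layer that preserves only such extreme responses is a plausible reading of "extreme value preserving", though the paper itself should be consulted for the actual block design.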
Related papers
- Comprehensive Analysis of Network Robustness Evaluation Based on Convolutional Neural Networks with Spatial Pyramid Pooling [4.366824280429597]
Connectivity robustness, a crucial aspect for understanding, optimizing, and repairing complex networks, has traditionally been evaluated through simulations.
We address these challenges by designing a convolutional neural network (CNN) model with spatial pyramid pooling (SPP-net).
We show that the proposed CNN model consistently achieves accurate evaluations of both attack curves and robustness values across all removal scenarios.
arXiv Detail & Related papers (2023-08-10T09:54:22Z)
- From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z)
- Wiggling Weights to Improve the Robustness of Classifiers [2.1485350418225244]
We show that wiggling the weights consistently improves classification.
We conclude that wiggled transform-augmented networks acquire good robustness even to perturbations not seen during training (a hypothetical weight-perturbation sketch appears after this list).
arXiv Detail & Related papers (2021-11-18T16:20:36Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already have satisfactory accuracy.
Under a minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks (minimal FGSM and PGD sketches appear after this list).
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs), yielding the CNN-F model.
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
- The shape and simplicity biases of adversarially robust ImageNet-trained CNNs [9.707679445925516]
We study the shape bias and internal mechanisms that enable the generalizability of AlexNet, GoogLeNet, and ResNet-50 models trained via adversarial training.
Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of "robustifying" CNNs.
arXiv Detail & Related papers (2020-06-16T16:38:16Z)
- Hold me tight! Influence of discriminative features on deep network boundaries [63.627760598441796]
We propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets.
arXiv Detail & Related papers (2020-02-15T09:29:36Z)
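For reference, the attacks named in the main abstract (FGSM, PGD) and studied in several of the listed papers are standard gradient-based attacks. Below is a minimal PyTorch sketch of FGSM and its iterative variant PGD under the usual L-infinity threat model; `model` and `loss_fn` are placeholders for any differentiable classifier and loss, and the step sizes are common choices rather than values from any of these papers.

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """Fast Gradient Sign Method: one signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(x), y), x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, loss_fn, x, y, eps, alpha, steps=10):
    """Projected Gradient Descent: repeated FGSM-style steps of size alpha,
    projected back into the L-infinity eps-ball around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                 # stay a valid image
    return x_adv.detach()
```

Robust accuracy is then simply accuracy measured on, e.g., `pgd(model, torch.nn.functional.cross_entropy, x, y, eps=8/255, alpha=2/255)`, with `eps=8/255` being a common CIFAR-scale setting.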
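Similarly, the "Wiggling Weights" summary above leaves its construction unspecified; as a purely hypothetical illustration of what perturbing ("wiggling") weights at inference time could look like, the sketch below averages predictions over several randomly jittered copies of a model. This is not the authors' transform-augmented design, just a toy variant of the weight-perturbation idea; all names and the noise scale are invented for illustration.

```python
import copy
import torch

@torch.no_grad()
def predict_with_wiggled_weights(model, x, n_samples=8, noise_std=0.01):
    """Hypothetical sketch: average softmax predictions over several
    model copies whose weights are jittered with Gaussian noise."""
    probs = 0.0
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)          # leave the original untouched
        for p in noisy.parameters():
            p.add_(noise_std * torch.randn_like(p))
        probs = probs + noisy(x).softmax(dim=-1)
    return probs / n_samples
```

Averaging over perturbed weights acts like an ensemble and tends to smooth the decision function, which is one intuition for why weight perturbations can help robustness.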
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.