Robust and Efficient Interference Neural Networks for Defending Against
Adversarial Attacks in ImageNet
- URL: http://arxiv.org/abs/2310.05947v1
- Date: Sun, 3 Sep 2023 14:20:58 GMT
- Title: Robust and Efficient Interference Neural Networks for Defending Against
Adversarial Attacks in ImageNet
- Authors: Yunuo Xiong, Shujuan Liu, Hongwei Xiong
- Abstract summary: In this paper, we construct an interference neural network by applying additional background images and corresponding labels.
Compared with state-of-the-art results under the PGD attack, it achieves a better defense effect while requiring far fewer computing resources.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The existence of adversarial images has seriously affected image
recognition and the practical application of deep learning; it is also a key
scientific problem that deep learning urgently needs to solve. By far the most
effective approach is to train the neural network on a large number of
adversarial examples. However, this adversarial training method requires a huge
amount of computing resources when applied to ImageNet and has not yet achieved
satisfactory results against high-intensity adversarial attacks. In this
paper, we construct an interference neural network by applying additional
background images and corresponding labels, and we use a pre-trained ResNet-152
to complete the training efficiently. Compared with state-of-the-art results
under the PGD attack, our method achieves a better defense effect while
requiring far fewer computing resources. This work provides new ideas for
academic research and practical applications of effective defenses against
adversarial attacks.
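As a concrete reference point for the threat model, the PGD attack named above can be sketched on a toy differentiable model. The sketch below (NumPy; a hypothetical logistic-regression "model" stands in for ResNet-152, and all names are illustrative) shows the standard L-infinity PGD loop: repeated signed-gradient ascent steps, each projected back into an epsilon-ball around the clean input.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD against a toy logistic model p = sigmoid(w.x + b).

    The gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w,
    so each step moves x along the sign of that gradient, then projects
    the result back into the eps-ball around the original input.
    """
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(np.dot(w, x_adv) + b)))  # sigmoid output
        grad = (p - y) * w                                  # dLoss/dx (analytic)
        x_adv = x_adv + alpha * np.sign(grad)               # signed ascent step
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)          # project to eps-ball
    return x_adv
```

Running this on a correctly classified point drives the model's confidence in the true label down while keeping the perturbation within the epsilon budget, which is exactly the regime the defenses in this listing are evaluated under.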
Related papers
- Dual Adversarial Resilience for Collaborating Robust Underwater Image
Enhancement and Perception
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
- Deep Bayesian Image Set Classification: A Defence Approach against
Adversarial Attacks
Deep neural networks (DNNs) are susceptible to being fooled with high confidence by an adversary.
In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose a robust deep Bayesian image set classification as a defence framework against a broad range of adversarial attacks.
arXiv Detail & Related papers (2021-08-23T14:52:44Z)
- THAT: Two Head Adversarial Training for Improving Robustness at Scale
We propose Two Head Adversarial Training (THAT), a two-stream adversarial learning network that is designed to handle the large-scale many-class ImageNet dataset.
The proposed method trains a network with two heads and two loss functions; one to minimize feature-space domain shift between natural and adversarial images, and one to promote high classification accuracy.
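The two-loss structure described above can be illustrated with a small sketch (NumPy; the function names and the choice of a mean-squared feature-alignment term are assumptions for illustration, not THAT's exact formulation):

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    z = logits - np.max(logits)
    log_probs = z - np.log(np.sum(np.exp(z)))
    return -log_probs[label]

def two_head_loss(feat_nat, feat_adv, logits_adv, label, lam=1.0):
    """Hypothetical two-head objective in the spirit of THAT:
    one head penalizes feature-space shift between natural and adversarial
    views, while the other keeps classification accuracy high."""
    align = np.mean((feat_nat - feat_adv) ** 2)   # feature-shift head
    cls = cross_entropy(logits_adv, label)        # classification head
    return cls + lam * align
```

When the natural and adversarial features coincide, the objective reduces to plain classification loss; any feature-space drift adds a penalty scaled by `lam`.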
arXiv Detail & Related papers (2021-03-25T05:32:38Z)
- AdvFoolGen: Creating Persistent Troubles for Deep Classifiers
We present a new black-box attack termed AdvFoolGen, which can generate attacking images from the same feature space as that of the natural images.
We demonstrate the effectiveness and robustness of our attack in the face of state-of-the-art defense techniques.
arXiv Detail & Related papers (2020-07-20T21:27:41Z)
- Evaluating a Simple Retraining Strategy as a Defense Against Adversarial
Attacks
We show how simple algorithms like KNN can be used to determine the labels of the adversarial images needed for retraining.
We present the results on two standard datasets namely, CIFAR-10 and TinyImageNet.
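A minimal sketch of the relabeling idea, using a plain k-nearest-neighbour vote over clean training points (pure Python; the helper name and data layout are made up for illustration):

```python
import math

def knn_label(adv_point, clean_points, clean_labels, k=3):
    """Label an (assumed) adversarial example by majority vote among its
    k nearest clean training points - a sketch of using a simple algorithm
    like KNN to supply labels for retraining."""
    # Sort clean points by Euclidean distance to the adversarial point.
    dists = sorted(
        (math.dist(adv_point, p), lbl)
        for p, lbl in zip(clean_points, clean_labels)
    )
    votes = [lbl for _, lbl in dists[:k]]
    return max(set(votes), key=votes.count)
```

The retraining strategy then treats these KNN-assigned labels as ground truth for the adversarial images added to the training set.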
arXiv Detail & Related papers (2020-07-20T07:49:33Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature
Consistency Across Bit Planes
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
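Bit-plane decomposition itself is a simple operation; a minimal sketch for a single 8-bit pixel (helper names are hypothetical):

```python
def bit_plane(pixel, plane):
    """Extract bit `plane` (0 = least significant) of an 8-bit pixel value."""
    return (pixel >> plane) & 1

def split_bit_planes(pixel):
    """Decompose an 8-bit pixel into its 8 bit planes, MSB first.
    Higher planes carry the coarse image content used for the network's
    first impression; lower planes mostly carry fine detail, which is
    where small-magnitude adversarial perturbations live."""
    return [bit_plane(pixel, p) for p in range(7, -1, -1)]
```

Applied per pixel over a whole image, the higher planes give the differently quantized views across which the paper enforces representation consistency.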
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
- BP-DIP: A Backprojection based Deep Image Prior
We combine two image restoration approaches: (i) Deep Image Prior (DIP), which trains a convolutional neural network (CNN) from scratch at test time using the degraded image; and (ii) a backprojection (BP) fidelity term, an alternative to the standard least-squares loss usually used in previous DIP works.
We demonstrate the performance of the proposed method, termed BP-DIP, on the deblurring task and show its advantages over the plain DIP, with both higher PSNR values and better inference run-time.
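The difference between the standard least-squares term and the BP fidelity term can be sketched directly (NumPy; function names are illustrative, and this covers only the loss terms, not the full DIP training loop):

```python
import numpy as np

def ls_fidelity(A, x, y):
    """Standard least-squares data term ||Ax - y||^2 for a linear
    degradation model y = Ax + noise."""
    r = A @ x - y
    return float(r @ r)

def bp_fidelity(A, x, y):
    """Backprojection data term ||A+ (Ax - y)||^2, where A+ is the
    Moore-Penrose pseudo-inverse - the alternative fidelity used in
    place of plain least squares (a sketch, not the paper's exact code)."""
    r = np.linalg.pinv(A) @ (A @ x - y)
    return float(r @ r)
```

Both terms vanish at an exact solution; they differ in how they weight residual directions, which is what gives BP its different reconstruction behaviour.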
arXiv Detail & Related papers (2020-03-11T17:09:12Z)
- HYDRA: Pruning Adversarially Robust Neural Networks
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously.
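A sketch of score-based connection pruning (NumPy) makes the mechanism concrete; note that HYDRA learns the importance scores under the robust training objective, whereas this toy version passes in plain weight magnitudes as a stand-in:

```python
import numpy as np

def prune_by_score(weights, scores, sparsity):
    """Keep the top-(1 - sparsity) fraction of connections ranked by an
    importance score, zeroing out the rest. HYDRA's key idea is that the
    scores are learned so the training objective guides which connections
    survive; magnitude scores here are only an illustrative baseline."""
    k = int(round(weights.size * (1.0 - sparsity)))   # connections to keep
    flat = scores.flatten()
    keep = np.argsort(flat)[-k:] if k > 0 else np.array([], dtype=int)
    mask = np.zeros(weights.size)
    mask[keep] = 1.0                                  # binary keep-mask
    return weights * mask.reshape(weights.shape)
```

The mask is then applied during (robust) fine-tuning so the compressed network retains both benign and adversarial accuracy.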
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.