Exploiting Vulnerability of Pooling in Convolutional Neural Networks by
Strict Layer-Output Manipulation for Adversarial Attacks
- URL: http://arxiv.org/abs/2012.11413v1
- Date: Mon, 21 Dec 2020 15:18:41 GMT
- Title: Exploiting Vulnerability of Pooling in Convolutional Neural Networks by
Strict Layer-Output Manipulation for Adversarial Attacks
- Authors: Chenchen Zhao and Hao Li
- Abstract summary: Convolutional neural networks (CNNs) are increasingly applied in mobile robotics such as intelligent vehicles.
The security of CNNs in robotics applications is an important issue, and potential adversarial attacks on CNNs therefore warrant research.
In this paper, we conduct adversarial attacks on CNNs from the perspective of network structure by investigating and exploiting the vulnerability of pooling.
- Score: 7.540176446791261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) are increasingly applied in
mobile robotics, such as intelligent vehicles. The security of CNNs in robotics
applications is an important issue, and potential adversarial attacks on CNNs
therefore warrant research. Pooling is a typical dimension-reduction step in
CNNs that discards information. Such discarding may lead to the mis-deletion
and mis-preservation of data features, which strongly influences the output of
the network and may aggravate the vulnerability of CNNs to adversarial attacks.
In this paper, we conduct adversarial attacks on CNNs from the perspective of
network structure by investigating and exploiting the vulnerability of pooling.
First, a novel adversarial attack methodology named Strict Layer-Output
Manipulation (SLOM) is proposed. Then, an attack method based on Strict Pooling
Manipulation (SPM), an instantiation of the SLOM spirit, is designed to
effectively realize both type I and type II adversarial attacks on a target
CNN. The performance of SPM-based attacks at different depths is investigated
and compared, as is the performance of attack methods that instantiate the SLOM
spirit with different operation layers of CNNs. Experimental results show that
pooling tends to be more vulnerable to adversarial attacks than other
operations in CNNs.
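The abstract does not spell out how SLOM or SPM are implemented, but the general
idea of strictly manipulating the output of a chosen layer can be illustrated
with a short sketch. The following is a minimal, hypothetical PyTorch example,
not the authors' code: it hooks the output of a pooling layer and optimizes a
bounded input perturbation so that the pooled activation is driven toward a
target activation. The model, loss, step size, and budget `eps` are
illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative sketch only; not the authors' SLOM/SPM implementation.
model = models.resnet18(weights=None).eval()   # any CNN with a pooling layer
pool_layer = model.maxpool                     # pooling layer whose output we manipulate

captured = {}
def save_pool_output(module, inputs, output):
    captured["pool_out"] = output
pool_layer.register_forward_hook(save_pool_output)

def layer_output_manipulation(x, target_activation, steps=200, step_size=1e-2, eps=8 / 255):
    """Optimize a small perturbation so the pooling output approaches target_activation
    (type II spirit: a small input change causes a large internal, and hence output, change)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        model(torch.clamp(x + delta, 0.0, 1.0))     # forward pass fills captured["pool_out"]
        loss = F.mse_loss(captured["pool_out"], target_activation)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # step toward the target activation
            delta.clamp_(-eps, eps)                 # keep the perturbation budget small
            delta.grad.zero_()
    return torch.clamp(x + delta, 0.0, 1.0).detach()

# Hypothetical usage: push the pooled representation of x toward that of another image.
x = torch.rand(1, 3, 224, 224)
x_other = torch.rand(1, 3, 224, 224)
model(x_other)
target = captured["pool_out"].detach()
x_adv = layer_output_manipulation(x, target)
```

A type I attack in the paper's terminology would instead change the input
substantially while holding the chosen layer's output (and thus the prediction)
close to the original; the same hook-and-optimize pattern applies with the
objective and the perturbation constraint swapped.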
Related papers
- Impact of White-Box Adversarial Attacks on Convolutional Neural Networks [0.6138671548064356]
We investigate the susceptibility of Convolutional Neural Networks (CNNs) to white-box adversarial attacks.
Our study provides insights into the robustness of CNNs against adversarial threats.
arXiv Detail & Related papers (2024-10-02T21:24:08Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNNs) define the state-of-the-art solution for many perceptual tasks.
Current CNN approaches largely remain vulnerable to adversarial perturbations of the input that are crafted specifically to fool the system.
We propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
- Demystifying the Transferability of Adversarial Attacks in Computer Networks [23.80086861061094]
CNN-based models are subject to various adversarial attacks.
Some adversarial examples could potentially still be effective against different unknown models.
This paper assesses the robustness of CNN-based models against adversarial transferability.
arXiv Detail & Related papers (2021-10-09T07:20:44Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already have satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs [24.809185168969066]
We show that adversarial attacks against CNNs trained to identify image manipulation fail to transfer to CNNs whose only difference is in the class definitions.
This has important implications for the future design of forensic CNNs that are robust to adversarial and anti-forensic attacks.
arXiv Detail & Related papers (2021-01-26T20:59:37Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
- Transferable Perturbations of Deep Feature Distributions [102.94094966908916]
This work presents a new adversarial attack based on the modeling and exploitation of class-wise and layer-wise deep feature distributions.
We achieve state-of-the-art targeted blackbox transfer-based attack results for undefended ImageNet models.
arXiv Detail & Related papers (2020-04-27T00:32:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.