Effective and Efficient Vote Attack on Capsule Networks
- URL: http://arxiv.org/abs/2102.10055v1
- Date: Fri, 19 Feb 2021 17:35:07 GMT
- Title: Effective and Efficient Vote Attack on Capsule Networks
- Authors: Jindong Gu, Baoyuan Wu, Volker Tresp
- Abstract summary: Capsule Networks (CapsNets) are shown to be more robust to white-box attacks than CNNs under popular attack protocols.
In this work, we investigate the adversarial robustness of CapsNets, especially how the inner workings of CapsNets change when the output capsules are attacked.
We propose a novel vote attack where we attack votes of CapsNets directly.
- Score: 37.78858778236326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard Convolutional Neural Networks (CNNs) can be easily fooled by images
with small quasi-imperceptible artificial perturbations. As alternatives to
CNNs, the recently proposed Capsule Networks (CapsNets) are shown to be more
robust to white-box attacks than CNNs under popular attack protocols. Besides,
the class-conditional reconstruction part of CapsNets is also used to detect
adversarial examples. In this work, we investigate the adversarial robustness
of CapsNets, especially how the inner workings of CapsNets change when the
output capsules are attacked. The first observation is that adversarial
examples mislead CapsNets by manipulating the votes from primary capsules.
Another observation is the high computational cost when we directly apply
multi-step attack methods designed for CNNs to CapsNets, due to the
computationally expensive routing mechanism. Motivated by these two
observations, we propose a novel vote attack where we attack votes of CapsNets
directly. Our vote attack is not only effective but also efficient by
circumventing the routing process. Furthermore, we integrate our vote attack
into the detection-aware attack paradigm, which can successfully bypass the
class-conditional reconstruction based detection method. Extensive experiments
demonstrate the superior attack performance of our vote attack on CapsNets.
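The core idea in the abstract, perturbing the input so that the votes from primary capsules are manipulated while skipping the expensive routing iterations, can be illustrated with a short PGD-style sketch. This is a minimal illustration, not the authors' released code: the `primary_votes()` method, the vote-averaging surrogate for class scores, and the hyperparameter values are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def vote_attack_sketch(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Illustrative L-inf PGD attack on primary-capsule votes (no routing iterations).

    Assumes a hypothetical `model.primary_votes(x)` returning a tensor of shape
    [batch, num_primary_capsules, num_classes, capsule_dim].
    """
    # Random start inside the eps-ball, clipped to the valid image range.
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        votes = model.primary_votes(x_adv)            # [B, P, C, D] (assumed API)
        # Surrogate class scores: average the votes over primary capsules and
        # score each class by the length of the averaged vote vector.
        class_caps = votes.mean(dim=1)                # [B, C, D]
        logits = class_caps.norm(dim=-1)              # [B, C]
        loss = F.cross_entropy(logits, y)             # push votes away from the true class
        grad = torch.autograd.grad(loss, x_adv)[0]

        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```

A standard multi-step attack on a CapsNet would instead backpropagate through every routing iteration at each step, which is the computational cost the abstract points to; operating on the votes directly removes that inner loop. The loss above is only a stand-in for the paper's vote-attack objective.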
Related papers
- Cost Aware Untargeted Poisoning Attack against Graph Neural Networks [5.660584039688214]
We propose a novel attack loss framework called the Cost Aware Poisoning Attack (CA-attack) to improve the allocation of the attack budget.
Our experiments demonstrate that the proposed CA-attack significantly enhances existing attack strategies.
arXiv Detail & Related papers (2023-12-12T10:54:02Z)
- RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks [11.302789770501303]
Capsule Networks (CapsNets) are able to hierarchically preserve the pose relationships between multiple objects for image classification tasks.
In this paper, we evaluate different factors affecting the robustness of CapsNets, compared to traditional Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2023-04-08T09:58:35Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaged node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Security Analysis of Capsule Network Inference using Horizontal Collaboration [0.5459797813771499]
The capsule network (CapsNet) can encode and preserve the spatial orientation of input images.
CapsNet is vulnerable to several malicious attacks, as studied by multiple researchers in the literature.
arXiv Detail & Related papers (2021-09-22T21:04:20Z)
- Parallel Capsule Networks for Classification of White Blood Cells [1.5749416770494706]
Capsule Networks (CapsNets) are a machine learning architecture proposed to overcome some of the shortcomings of convolutional neural networks (CNNs).
We present a new architecture, parallel CapsNets, which exploits the concept of branching the network to isolate certain capsules.
arXiv Detail & Related papers (2021-08-05T14:30:44Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Cortical Features for Defense Against Adversarial Audio Attacks [55.61885805423492]
We propose using a computational model of the auditory cortex as a defense against adversarial attacks on audio.
We show that the cortical features help defend against universal adversarial examples.
arXiv Detail & Related papers (2021-01-30T21:21:46Z)
- Local Black-box Adversarial Attacks: A Query Efficient Approach [64.98246858117476]
Adversarial attacks have threatened the application of deep neural networks in security-sensitive scenarios.
We propose a novel framework to perturb the discriminative areas of clean examples only within limited queries in black-box attacks.
We conduct extensive experiments to show that our framework can significantly improve the query efficiency during black-box perturbing with a high attack success rate.
arXiv Detail & Related papers (2021-01-04T15:32:16Z)
- Interpretable Graph Capsule Networks for Object Recognition [17.62514568986647]
We propose interpretable Graph Capsule Networks (GraCapsNets), where we replace the routing part with a multi-head attention-based Graph Pooling approach.
GraCapsNets achieve better classification performance with fewer parameters and better adversarial robustness, when compared to CapsNets.
arXiv Detail & Related papers (2020-12-03T03:18:00Z)
- Backdoor Attacks to Graph Neural Networks [73.56867080030091]
We propose the first backdoor attack on graph neural networks (GNNs).
In our backdoor attack, a GNN predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph.
Our empirical results show that our backdoor attacks are effective with a small impact on a GNN's prediction accuracy for clean testing graphs.
arXiv Detail & Related papers (2020-06-19T14:51:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.