Security Analysis of Capsule Network Inference using Horizontal
Collaboration
- URL: http://arxiv.org/abs/2109.11041v1
- Date: Wed, 22 Sep 2021 21:04:20 GMT
- Title: Security Analysis of Capsule Network Inference using Horizontal
Collaboration
- Authors: Adewale Adeyemo, Faiq Khalid, Tolulope A. Odetola, and Syed Rafay
Hasan
- Abstract summary: Capsule network (CapsNet) can encode and preserve spatial orientation of input images.
CapsNet is vulnerable to several malicious attacks, as studied by several researchers in the literature.
- Score: 0.5459797813771499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional convolutional neural networks (CNNs) have several
drawbacks, such as the Picasso effect and the loss of information in the
pooling layer. The Capsule network (CapsNet) was proposed to address these
challenges because its architecture can encode and preserve the spatial
orientation of input images. Similar to traditional CNNs, CapsNet is also
vulnerable to several malicious attacks, as studied by several researchers in
the literature. However, most of these studies focus on single-device-based
inference, whereas horizontally collaborative inference in state-of-the-art
systems, like intelligent edge services in self-driving cars,
voice-controllable systems, and drones, invalidates most of these analyses.
Horizontal collaboration implies partitioning the trained CNN models or CNN
tasks across multiple end devices or edge nodes. Therefore, it is imperative
to examine the robustness of CapsNet against malicious attacks when deployed
in horizontally collaborative environments. Towards this, we examine the
robustness of CapsNet when subjected to noise-based inference attacks in a
horizontally collaborative environment. In this analysis, we perturbed the
feature maps of different layers of four DNN models, i.e., CapsNet, Mini-VGG,
LeNet, and an in-house designed CNN (ConvNet) with the same number of
parameters as CapsNet, using two types of noise-based attacks: the Gaussian
noise attack and the FGSM noise attack. The experimental results show that,
similar to traditional CNNs, the classification accuracy of CapsNet drops
significantly depending on which DNN layer the attacker can access. For
example, when the Gaussian noise attack is performed at the DigitCaps layer
of CapsNet, the maximum classification accuracy drop is approximately 97%.
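To make the attack surface concrete, the sketch below perturbs the intermediate feature map that would be exchanged between two collaborating nodes, using additive Gaussian noise and an FGSM-style gradient-sign perturbation. This is a minimal illustration under assumed settings: the toy split model (`part_a`, `part_b`), the layer sizes, `sigma`, and `epsilon` are hypothetical and not taken from the paper's implementation.

```python
# Minimal sketch (PyTorch) of noise-based attacks on an intermediate feature
# map in a horizontally partitioned model. The split, layer sizes, sigma, and
# epsilon are illustrative assumptions, not the authors' setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "horizontal collaboration": node A runs the early layers, node B the rest.
part_a = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
part_b = nn.Sequential(nn.Flatten(), nn.Linear(16 * 14 * 14, 10))

def gaussian_noise_attack(feature_map, sigma=0.5):
    """Add zero-mean Gaussian noise to the feature map in transit."""
    return feature_map + sigma * torch.randn_like(feature_map)

def fgsm_on_features(feature_map, labels, tail, epsilon=0.1):
    """FGSM-style perturbation computed w.r.t. the intermediate feature map."""
    feature_map = feature_map.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(tail(feature_map), labels)
    loss.backward()
    return (feature_map + epsilon * feature_map.grad.sign()).detach()

x = torch.randn(8, 1, 28, 28)            # dummy MNIST-sized batch
y = torch.randint(0, 10, (8,))
feats = part_a(x)                         # feature map sent from node A to node B
logits_clean = part_b(feats)
logits_gauss = part_b(gaussian_noise_attack(feats))
logits_fgsm = part_b(fgsm_on_features(feats, y, part_b))
```

In a real horizontally collaborative deployment, `part_a` and `part_b` would run on separate edge nodes, and the transmitted feature map `feats` is the surface that such noise-based attacks target.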
Related papers
- Capsule Neural Networks as Noise Stabilizer for Time Series Data [20.29049860598735]
Capsule Neural Networks utilize capsules, which bind neurons into a single vector and learn position equivariant features.
In this paper, we investigate the effectiveness of CapsNets in analyzing highly sensitive and noisy time series sensor data.
arXiv Detail & Related papers (2024-03-20T12:17:49Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks [11.302789770501303]
Capsule Networks (CapsNets) are able to hierarchically preserve the pose relationships between multiple objects for image classification tasks.
In this paper, we evaluate different factors affecting the robustness of CapsNets, compared to traditional Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2023-04-08T09:58:35Z)
- Parallel Capsule Networks for Classification of White Blood Cells [1.5749416770494706]
Capsule Networks (CapsNets) are a machine learning architecture proposed to overcome some of the shortcomings of convolutional neural networks (CNNs).
We present a new architecture, parallel CapsNets, which exploits the concept of branching the network to isolate certain capsules.
arXiv Detail & Related papers (2021-08-05T14:30:44Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Effective and Efficient Vote Attack on Capsule Networks [37.78858778236326]
Capsule Networks (CapsNets) are shown to be more robust to white-box attacks than CNNs under popular attack protocols.
In this work, we investigate the adversarial robustness of CapsNets, especially how the inner workings of CapsNets change when the output capsules are attacked.
We propose a novel vote attack where we attack votes of CapsNets directly.
arXiv Detail & Related papers (2021-02-19T17:35:07Z)
- Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
arXiv Detail & Related papers (2020-12-21T19:00:03Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)