RobCaps: Evaluating the Robustness of Capsule Networks against Affine
Transformations and Adversarial Attacks
- URL: http://arxiv.org/abs/2304.03973v2
- Date: Tue, 25 Apr 2023 10:35:37 GMT
- Title: RobCaps: Evaluating the Robustness of Capsule Networks against Affine
Transformations and Adversarial Attacks
- Authors: Alberto Marchisio and Antonio De Marco and Alessio Colucci and
Maurizio Martina and Muhammad Shafique
- Abstract summary: Capsule Networks (CapsNets) are able to hierarchically preserve the pose relationships between multiple objects for image classification tasks.
In this paper, we evaluate different factors affecting the robustness of CapsNets, compared to traditional Convolutional Neural Networks (CNNs).
- Score: 11.302789770501303
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Capsule Networks (CapsNets) are able to hierarchically preserve the pose
relationships between multiple objects for image classification tasks. Besides
achieving high accuracy, another relevant factor in deploying CapsNets in
safety-critical applications is their robustness against input transformations
and malicious adversarial attacks.
In this paper, we systematically analyze and evaluate different factors
affecting the robustness of CapsNets, compared to traditional Convolutional
Neural Networks (CNNs). Towards a comprehensive comparison, we test two CapsNet
models and two CNN models on the MNIST, GTSRB, and CIFAR10 datasets, as well as
on the affine-transformed versions of such datasets. Through a thorough analysis,
we show which properties of these architectures contribute most to increasing
robustness, as well as their limitations. Overall, CapsNets achieve better
robustness against adversarial examples and affine transformations, compared to
a traditional CNN with a similar number of parameters. Similar conclusions have
been derived for deeper versions of CapsNets and CNNs. Moreover, our results
reveal a key finding: dynamic routing does not contribute much to improving the
CapsNets' robustness. Indeed, the main generalization contribution comes from
the hierarchical feature learning through capsules.
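To make the distinction between dynamic routing and the capsule representation itself concrete, below is a minimal PyTorch sketch (not the authors' code) of a capsule layer with routing-by-agreement in the style of Sabour et al. The `num_routing_iters` argument is an illustrative knob, not a parameter from the paper: setting it to 0 keeps the coupling coefficients uniform, which is one simple way to probe whether routing itself adds robustness.

```python
# A minimal sketch, not the authors' code: a capsule layer with
# routing-by-agreement in the style of Sabour et al. Setting
# num_routing_iters=0 keeps the coupling coefficients uniform,
# approximating "capsules without dynamic routing".
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    """Squashing non-linearity: keeps the vector direction, bounds its norm in [0, 1)."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)


class CapsuleLayer(nn.Module):
    def __init__(self, in_caps, in_dim, out_caps, out_dim, num_routing_iters=3):
        super().__init__()
        self.num_routing_iters = num_routing_iters
        # One transformation matrix per (input capsule, output capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(1, in_caps, out_caps, out_dim, in_dim))

    def forward(self, u):
        # u: (batch, in_caps, in_dim) -> prediction vectors u_hat: (batch, in_caps, out_caps, out_dim)
        u_hat = (self.W @ u[:, :, None, :, None]).squeeze(-1)
        b = torch.zeros(u.size(0), u.size(1), u_hat.size(2), device=u.device)

        num_iters = max(self.num_routing_iters, 1)
        for it in range(num_iters):
            c = F.softmax(b, dim=2)                        # coupling coefficients
            s = (c.unsqueeze(-1) * u_hat).sum(dim=1)       # weighted sum over input capsules
            v = squash(s)                                  # output capsules: (batch, out_caps, out_dim)
            if it < num_iters - 1:
                b = b + (u_hat * v.unsqueeze(1)).sum(-1)   # agreement update
        return v


# Illustrative configuration, similar to the original MNIST CapsNet:
# 1152 primary capsules of dimension 8 routed to 10 class capsules of dimension 16.
digit_caps = CapsuleLayer(in_caps=1152, in_dim=8, out_caps=10, out_dim=16, num_routing_iters=3)
```

Comparing `num_routing_iters=3` against `0` on the same backbone reproduces, in spirit, the ablation behind the finding above.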
Related papers
- From Environmental Sound Representation to Robustness of 2D CNN Models
Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z) - Spiking CapsNet: A Spiking Neural Network With A Biologically Plausible
Routing Rule Between Capsules [9.658836348699161]
Spiking neural networks (SNNs) have attracted much attention due to their powerful spatio-temporal information representation ability.
CapsNets do well at assembling and coupling features at different levels.
We propose Spiking CapsNet by introducing capsules into the modelling of spiking neural networks.
arXiv Detail & Related papers (2021-11-15T14:23:15Z) - Scalable Lipschitz Residual Networks with Convex Potential Flows [120.27516256281359]
We show that using convex potentials in a residual network gradient flow provides a built-in $1$-Lipschitz transformation.
A comprehensive set of experiments on CIFAR-10 demonstrates the scalability of our architecture and the benefit of our approach for $\ell_2$ provable defenses (see the sketch after this list).
arXiv Detail & Related papers (2021-10-25T07:12:53Z) - Security Analysis of Capsule Network Inference using Horizontal
Collaboration [0.5459797813771499]
Capsule network (CapsNet) can encode and preserve the spatial orientation of input images.
However, CapsNet is vulnerable to several malicious attacks, as studied in the literature.
arXiv Detail & Related papers (2021-09-22T21:04:20Z) - CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point
Clouds [51.47100091540298]
We present Cascaded Primitive Fitting Networks (CPFN), which rely on an adaptive patch sampling network to assemble the detection results of global and local primitive detection networks.
CPFN improves the state-of-the-art SPFN performance by 13-14% on high-resolution point cloud datasets and specifically improves the detection of fine-scale primitives by 20-22%.
arXiv Detail & Related papers (2021-08-31T23:27:33Z) - Parallel Capsule Networks for Classification of White Blood Cells [1.5749416770494706]
Capsule Networks (CapsNets) are a machine learning architecture proposed to overcome some of the shortcomings of convolutional neural networks (CNNs).
We present a new architecture, parallel CapsNets, which exploits the concept of branching the network to isolate certain capsules.
arXiv Detail & Related papers (2021-08-05T14:30:44Z) - CondenseNet V2: Sparse Feature Reactivation for Deep Networks [87.38447745642479]
Reusing features in deep networks through dense connectivity is an effective way to achieve high computational efficiency.
We propose an alternative approach named sparse feature reactivation (SFR), which aims to actively increase the utility of features for reuse.
Our experiments show that the proposed models achieve promising performance on image classification (ImageNet and CIFAR) and object detection (MS COCO) in terms of both theoretical efficiency and practical speed.
arXiv Detail & Related papers (2021-04-09T14:12:43Z) - Capsule Network is Not More Robust than Convolutional Network [21.55939814377377]
We study the special designs in CapsNet that differ from those of a ConvNet commonly used for image classification.
The study reveals that some designs, which are thought critical to CapsNet, actually can harm its robustness.
We propose enhanced ConvNets simply by introducing the essential components behind the CapsNet's success.
arXiv Detail & Related papers (2021-03-29T09:47:00Z) - Interpretable Graph Capsule Networks for Object Recognition [17.62514568986647]
We propose interpretable Graph Capsule Networks (GraCapsNets), where we replace the routing part with a multi-head attention-based Graph Pooling approach.
GraCapsNets achieve better classification performance with fewer parameters and better adversarial robustness, when compared to CapsNets.
arXiv Detail & Related papers (2020-12-03T03:18:00Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using a fixed path through the network, DG-Net aggregates features dynamically at each node, which gives the network greater representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set size and the model size significantly improves robustness to distributional shift.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)
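As referenced in the Convex Potential Flows entry above, here is a minimal sketch of a 1-Lipschitz residual update. It assumes the convex-potential layer form z - (2 / ||W||_2^2) * W^T sigma(W z + b) with a slope-restricted activation such as ReLU; the class name and the exact spectral-norm computation are illustrative, not taken from that paper's code.

```python
# A minimal sketch, assuming the convex-potential residual update
# z <- z - (2 / ||W||_2^2) * W^T ReLU(W z + b), which is 1-Lipschitz by
# construction. Names and the spectral-norm computation are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvexPotentialLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, z):
        # Exact spectral norm of W for clarity; in practice a power iteration
        # would be used for efficiency.
        sigma = torch.linalg.matrix_norm(self.W, ord=2)
        return z - (2.0 / (sigma ** 2 + 1e-8)) * F.relu(z @ self.W.T + self.b) @ self.W
```

Stacking such layers keeps the end-to-end map 1-Lipschitz, which is what enables the certified (provable) robustness guarantees mentioned in that entry.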
This list is automatically generated from the titles and abstracts of the papers indexed on this site; its accuracy and completeness are not guaranteed.