Parallel Capsule Networks for Classification of White Blood Cells
- URL: http://arxiv.org/abs/2108.02644v1
- Date: Thu, 5 Aug 2021 14:30:44 GMT
- Title: Parallel Capsule Networks for Classification of White Blood Cells
- Authors: Juan P. Vigueras-Guillén, Arijit Patra, Ola Engkvist, and Frank Seeliger
- Abstract summary: Capsule Networks (CapsNets) are a machine learning architecture proposed to overcome some of the shortcomings of convolutional neural networks (CNNs).
We present a new architecture, parallel CapsNets, which exploits the concept of branching the network to isolate certain capsules.
- Score: 1.5749416770494706
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Capsule Networks (CapsNets) are a machine learning architecture proposed to
overcome some of the shortcomings of convolutional neural networks (CNNs).
However, CapsNets have mainly outperformed CNNs in datasets where images are
small and/or the objects to identify have minimal background noise. In this
work, we present a new architecture, parallel CapsNets, which exploits the
concept of branching the network to isolate certain capsules, allowing each
branch to identify different entities. We applied our concept to the two
current types of CapsNet architectures, studying the performance for networks
with different layers of capsules. We tested our design on a public, highly
unbalanced dataset of acute myeloid leukaemia images (15 classes). Our
experiments showed that conventional CapsNets achieve performance similar to our
baseline CNN (ResNeXt-50) but exhibit instability problems. In contrast,
parallel CapsNets can outperform ResNeXt-50, are more stable, and show better
rotational invariance than both conventional CapsNets and ResNeXt-50.
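
As an illustration of the branching idea described in the abstract, here is a minimal PyTorch sketch: a shared convolutional stem feeds several independent primary-capsule branches, each free to specialize on different entities. The layer sizes, the squash placement, and the simple linear head are assumptions for illustration, not the authors' exact design (which would route into class capsules).

```python
import torch
import torch.nn as nn


def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing: keeps direction, maps vector length into [0, 1)."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)


class PrimaryCapsBranch(nn.Module):
    """One isolated branch: conv features -> a small set of capsules."""

    def __init__(self, in_ch, n_caps=8, caps_dim=16):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, n_caps * caps_dim, kernel_size=3, stride=2)
        self.n_caps, self.caps_dim = n_caps, caps_dim

    def forward(self, x):
        u = self.conv(x)                                 # (B, n_caps*dim, H, W)
        u = u.view(x.size(0), self.n_caps, self.caps_dim, -1).mean(-1)
        return squash(u)                                 # (B, n_caps, dim)


class ParallelCapsNet(nn.Module):
    def __init__(self, n_branches=3, n_classes=15):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
        )
        self.branches = nn.ModuleList(
            PrimaryCapsBranch(64) for _ in range(n_branches)
        )
        # Simplified head: class scores from the concatenated capsules.
        # A real CapsNet would route these into class capsules instead.
        self.head = nn.Linear(n_branches * 8 * 16, n_classes)

    def forward(self, x):
        h = self.stem(x)
        caps = [branch(h) for branch in self.branches]   # each (B, 8, 16)
        return self.head(torch.cat([c.flatten(1) for c in caps], dim=1))


logits = ParallelCapsNet()(torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 15])
```

Keeping the branches as separate modules means the capsules below the merge point never share weights across branches, which is one way to realize the isolation the abstract describes.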
Related papers
- RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks [11.302789770501303] (2023-04-08)
Capsule Networks (CapsNets) are able to hierarchically preserve the pose relationships between multiple objects for image classification tasks.
In this paper, we evaluate different factors affecting the robustness of CapsNets, compared to traditional Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2023-04-08T09:58:35Z) - MogaNet: Multi-order Gated Aggregation Network [64.16774341908365]
We propose a new family of modern ConvNets, dubbed MogaNet, for discriminative visual representation learning.
MogaNet encapsulates conceptually simple yet effective convolutions and gated aggregation into a compact module.
MogaNet exhibits great scalability, impressive parameter efficiency, and competitive performance compared to state-of-the-art ViTs and ConvNets on ImageNet.
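
The summary above only names the mechanism; below is a hedged sketch of what a gated aggregation module can look like, with the kernel sizes and the SiLU gate being illustrative assumptions rather than MogaNet's precise multi-order design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAggregation(nn.Module):
    """Context features modulated by a learned gate, then projected."""

    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Conv2d(ch, ch, kernel_size=1)          # gating branch
        self.context = nn.Conv2d(ch, ch, kernel_size=5,
                                 padding=2, groups=ch)         # depthwise context
        self.proj = nn.Conv2d(ch, ch, kernel_size=1)

    def forward(self, x):
        # Multiply contextual features by a smooth gate before projecting.
        return self.proj(F.silu(self.gate(x)) * self.context(x))


y = GatedAggregation(32)(torch.randn(1, 32, 56, 56))
print(y.shape)  # torch.Size([1, 32, 56, 56])
```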
arXiv Detail & Related papers (2022-11-07T04:31:17Z) - Spiking CapsNet: A Spiking Neural Network With A Biologically Plausible
Routing Rule Between Capsules [9.658836348699161]
Spiking neural networks (SNNs) have attracted much attention due to their powerful spatio-temporal information representation ability.
CapsNet does well in assembling and coupling features at different levels.
We propose Spiking CapsNet by introducing the capsules into the modelling of neural networks.
arXiv Detail & Related papers (2021-11-15T14:23:15Z) - Capsule Network is Not More Robust than Convolutional Network [21.55939814377377]
We study the special designs in CapsNet that differ from those of a ConvNet commonly used for image classification.
The study reveals that some designs, which are thought critical to CapsNet, actually can harm its robustness.
We propose enhanced ConvNets simply by introducing the essential components behind the CapsNet's success.
arXiv Detail & Related papers (2021-03-29T09:47:00Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - Interpretable Graph Capsule Networks for Object Recognition [17.62514568986647]
We propose interpretable Graph Capsule Networks (GraCapsNets), where we replace the routing part with a multi-head attention-based Graph Pooling approach.
GraCapsNets achieve better classification performance with fewer parameters and better adversarial robustness, when compared to CapsNets.
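
As a rough illustration of replacing routing with multi-head attention pooling, here is a sketch: each head learns a score per primary capsule and pools a weighted sum, giving one aggregated capsule per head. The single scoring layer and the dimensions are assumptions for illustration, not the GraCapsNets formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadCapsulePooling(nn.Module):
    """Pool N primary capsules into one capsule per attention head."""

    def __init__(self, caps_dim=16, n_heads=4):
        super().__init__()
        self.score = nn.Linear(caps_dim, n_heads)    # one score per head

    def forward(self, caps):                          # caps: (B, N, caps_dim)
        attn = F.softmax(self.score(caps), dim=1)     # normalize over capsules
        # Weighted sum of capsules per head -> (B, n_heads, caps_dim)
        return torch.einsum('bnh,bnd->bhd', attn, caps)


pooled = MultiHeadCapsulePooling()(torch.randn(2, 32, 16))
print(pooled.shape)  # torch.Size([2, 4, 16])
```

Unlike iterative dynamic routing, this pooling is a single feed-forward step, which is where the parameter and robustness benefits claimed above would plausibly come from.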
arXiv Detail & Related papers (2020-12-03T03:18:00Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same path of the network, DG-Net aggregates features dynamically in each node, which allows the network to have more representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - iCapsNets: Towards Interpretable Capsule Networks for Text
Classification [95.31786902390438]
Traditional machine learning methods are easy to interpret but have low accuracies.
We propose interpretable capsule networks (iCapsNets) to bridge this gap.
iCapsNets can be interpreted both locally and globally.
arXiv Detail & Related papers (2020-05-16T04:11:44Z) - Q-CapsNets: A Specialized Framework for Quantizing Capsule Networks [12.022910298030219]
Capsule Networks (CapsNets) have superior learning capabilities in machine learning tasks, like image classification, compared to traditional CNNs.
CapsNets require extremely intense computations and are difficult to deploy in their original form on resource-constrained edge devices.
This paper makes the first attempt to quantize CapsNet models, to enable their efficient edge implementations, by developing a specialized quantization framework for CapsNets.
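
To make the edge-deployment point concrete, below is a minimal sketch of the basic ingredient such a framework builds on: symmetric uniform quantization of a capsule layer's transformation weights. The bit-width and scheme are illustrative assumptions; the summary above only states that Q-CapsNets quantizes CapsNet models.

```python
import torch


def quantize_symmetric(w: torch.Tensor, bits: int = 8):
    """Symmetric uniform quantization: int8 codes plus a single scale."""
    qmax = 2 ** (bits - 1) - 1                  # 127 for 8 bits
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q.to(torch.int8), scale              # int8 storage assumes bits <= 8


def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale


# Hypothetical example: transformation matrices mapping 8-dim primary
# capsules to 16-dim class capsules for 10 classes.
w = torch.randn(10, 8, 16)
q, scale = quantize_symmetric(w, bits=8)
print((w - dequantize(q, scale)).abs().max())   # small reconstruction error
```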
arXiv Detail & Related papers (2020-04-15T14:32:45Z) - Improved Residual Networks for Image and Video Recognition [98.10703825716142]
Residual networks (ResNets) represent a powerful type of convolutional neural network (CNN) architecture.
We show consistent improvements in accuracy and learning convergence over the baseline.
Our proposed approach allows us to train extremely deep networks, while the baseline shows severe optimization issues.
arXiv Detail & Related papers (2020-04-10T11:09:50Z) - Subspace Capsule Network [85.69796543499021]
SubSpace Capsule Network (SCN) exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity.
SCN can be applied to both discriminative and generative models without incurring computational overhead compared to CNNs at test time.
arXiv Detail & Related papers (2020-02-07T17:51:56Z)