Training Deep Capsule Networks with Residual Connections
- URL: http://arxiv.org/abs/2104.07393v1
- Date: Thu, 15 Apr 2021 11:42:44 GMT
- Title: Training Deep Capsule Networks with Residual Connections
- Authors: Josef Gugglberger, David Peer, Antonio Rodriguez-Sanchez
- Abstract summary: Capsule networks are a type of neural network that has recently gained popularity.
They consist of groups of neurons, called capsules, which encode properties of objects or object parts.
Most capsule network implementations use two to three capsule layers, which limits their applicability, since expressivity grows exponentially with depth.
We propose a methodology to train deeper capsule networks using residual connections, evaluated on four datasets and three different routing algorithms.
Our experimental results show that performance does indeed increase when training deeper capsule networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Capsule networks are a type of neural network that has recently gained
increased popularity. They consist of groups of neurons, called capsules, which
encode properties of objects or object parts. The connections between capsules
encode part-whole relationships between objects through routing algorithms,
which route the output of capsules from lower-level layers to upper-level
layers. Capsule networks can reach state-of-the-art results on many challenging
computer vision benchmarks, such as MNIST, Fashion-MNIST, and Small-NORB. However,
most capsule network implementations use two to three capsule layers, which
limits their applicability, since expressivity grows exponentially with depth. One
approach to overcoming this limitation is to train deeper network
architectures, as has been done for convolutional neural networks with great
success. In this paper, we propose a methodology to train deeper
capsule networks using residual connections, which is evaluated on four
datasets and three different routing algorithms. Our experimental results show
that performance does indeed increase when training deeper capsule networks. The
source code is available at https://github.com/moejoe95/res-capsnet.
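To make the core idea concrete, the following is a minimal NumPy sketch of a capsule layer with routing-by-agreement (Sabour et al., 2017) wrapped in a residual connection. It is illustrative only, not the authors' implementation: the function names, the placement of the skip connection (added before a final squash), and the assumption that both capsule layers share the same capsule count and dimension are assumptions made here; see the linked repository for the real code.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squashing non-linearity: keeps a vector's direction but maps its
    length into [0, 1), so capsule length can act as a probability."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Routing-by-agreement. u_hat holds the lower capsules' predictions
    for each upper capsule, shape (n_lower, n_upper, dim_upper).
    Returns the upper-capsule outputs, shape (n_upper, dim_upper)."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                 # routing logits
    for _ in range(num_iters):
        c = np.exp(b - b.max(axis=1, keepdims=True))
        c /= c.sum(axis=1, keepdims=True)            # coupling coefficients (softmax over upper caps)
        v = squash((c[..., None] * u_hat).sum(axis=0))   # (n_upper, dim_upper)
        b += (u_hat * v[None]).sum(axis=-1)          # reward predictions that agree with v
    return v

def residual_capsule_layer(u, W, num_iters=3):
    """One capsule layer wrapped with a residual connection. Assumes the
    lower and upper layers have the same number of capsules and the same
    capsule dimension, so the skip addition is shape-compatible.
    u: (n_caps, dim), W: (n_caps, n_caps, dim, dim)."""
    u_hat = np.einsum('ijkl,il->ijk', W, u)          # per-pair prediction vectors
    v = dynamic_routing(u_hat, num_iters)
    return squash(v + u)                             # skip connection, then re-squash

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
u = rng.normal(size=(8, 16))                         # 8 capsules, 16-D each
W = rng.normal(scale=0.1, size=(8, 8, 16, 16))
print(residual_capsule_layer(u, W).shape)            # (8, 16)
```

A plain residual add requires equal input and output shapes, so a deeper stack built this way would keep the capsule count and dimension constant across the residual blocks (or project the skip path); the exact variant used by the authors is described in the paper.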
Related papers
- Hierarchical Object-Centric Learning with Capsule Networks [0.0]
Capsule networks (CapsNets) were introduced to address the limitations of convolutional neural networks.
This thesis investigates the intriguing aspects of CapsNets and focuses on three key questions to unlock their full potential.
arXiv Detail & Related papers (2024-05-30T09:10:33Z)
- Deep multi-prototype capsule networks [0.3823356975862005]
Capsule networks are a type of neural network that identifies image parts and hierarchically forms the instantiation parameters of a whole.
This paper presents a multi-prototype architecture for guiding capsule networks to represent the variations in the image parts.
The experimental results on the MNIST, SVHN, C-Cube, CEDAR, MCYT, and UTSig datasets reveal that the proposed model outperforms others in image classification accuracy.
arXiv Detail & Related papers (2024-04-23T18:37:37Z)
- Active search and coverage using point-cloud reinforcement learning [50.741409008225766]
This paper presents an end-to-end deep reinforcement learning solution for target search and coverage.
We show that deep hierarchical feature learning works for RL and that using farthest point sampling (FPS) reduces the number of points.
We also show that multi-head attention for point clouds helps the agent learn faster, while converging to the same outcome.
arXiv Detail & Related papers (2023-12-18T18:16:30Z)
- Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z)
- Learning with Capsules: A Survey [73.31150426300198]
Capsule networks were proposed as an alternative approach to Convolutional Neural Networks (CNNs) for learning object-centric representations.
Unlike CNNs, capsule networks are designed to explicitly model part-whole hierarchical relationships.
arXiv Detail & Related papers (2022-06-06T15:05:36Z)
- 3DConvCaps: 3DUnet with Convolutional Capsule Encoder for Medical Image Segmentation [1.863532786702135]
We propose a 3D encoder-decoder network with a convolutional capsule encoder (called 3DConvCaps) to learn lower-level features (short-range attention) with convolutional layers.
Our experiments on multiple datasets, including iSeg-2017, Hippocampus, and Cardiac, demonstrate that our 3DConvCaps network considerably outperforms previous capsule networks and 3D-UNets.
arXiv Detail & Related papers (2022-05-19T03:00:04Z)
- Routing with Self-Attention for Multimodal Capsule Networks [108.85007719132618]
We present a new multimodal capsule network that allows us to leverage the strength of capsules in the context of a multimodal learning framework.
To adapt the capsules to large-scale input data, we propose a novel routing-by-self-attention mechanism that selects relevant capsules (a minimal sketch of attention-based routing follows this list).
This allows not only robust training with noisy video data, but also scaling up the capsule network beyond what traditional routing methods allow.
arXiv Detail & Related papers (2021-12-01T19:01:26Z)
- Efficient-CapsNet: Capsule Network with Self-Attention Routing [0.0]
Deep convolutional neural networks make extensive use of data augmentation techniques and layers with a high number of feature maps to embed object transformations.
Capsule networks are a promising solution for extending current convolutional networks, endowing artificial visual perception with a process to encode all feature affine transformations more efficiently.
In this paper, we investigate the efficiency of capsule networks and, pushing their capacity to the limit with an extreme architecture of barely 160K parameters, we prove that the proposed architecture is still able to achieve state-of-the-art results.
arXiv Detail & Related papers (2021-01-29T09:56:44Z)
- Wasserstein Routed Capsule Networks [90.16542156512405]
We propose a new parameter-efficient capsule architecture that is able to tackle complex tasks.
We show that our network is able to substantially outperform other capsule approaches, by over 1.2% on CIFAR-10.
arXiv Detail & Related papers (2020-07-22T14:38:05Z)
- Subspace Capsule Network [85.69796543499021]
SubSpace Capsule Network (SCN) exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity.
SCN can be applied to both discriminative and generative models without incurring computational overhead compared to CNNs at test time.
arXiv Detail & Related papers (2020-02-07T17:51:56Z)
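Two of the entries above, the multimodal capsule network and Efficient-CapsNet, replace iterative routing with an attention-style, single-pass computation of the coupling coefficients. The sketch below illustrates that general idea only; the score computation and normalization axes are assumptions, not the exact formulation of either paper.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Same squashing non-linearity as in the sketch above.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def self_attention_routing(u_hat):
    """One-pass routing: coupling coefficients come from how strongly each
    lower capsule's prediction agrees with the others, instead of being
    refined iteratively. u_hat: (n_lower, n_upper, dim)."""
    n_lower, n_upper, dim = u_hat.shape
    # Pairwise agreement among the predictions for each upper capsule.
    scores = np.einsum('iuk,juk->uij', u_hat, u_hat) / np.sqrt(dim)  # (n_upper, n_lower, n_lower)
    total = scores.sum(axis=-1)                      # support each prediction receives
    c = np.exp(total - total.max(axis=-1, keepdims=True))
    c /= c.sum(axis=-1, keepdims=True)               # softmax over lower capsules
    s = np.einsum('ui,iuk->uk', c, u_hat)            # weighted sum -> (n_upper, dim)
    return squash(s)
```

The appeal of this design is that it removes the routing loop, so the cost of a capsule layer becomes a fixed number of tensor contractions.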
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.