Efficient-CapsNet: Capsule Network with Self-Attention Routing
- URL: http://arxiv.org/abs/2101.12491v1
- Date: Fri, 29 Jan 2021 09:56:44 GMT
- Title: Efficient-CapsNet: Capsule Network with Self-Attention Routing
- Authors: Vittorio Mazzia, Francesco Salvetti, Marcello Chiaberge
- Abstract summary: Deep convolutional neural networks make extensive use of data augmentation techniques and layers with a high number of feature maps to embed object transformations.
Capsule networks are a promising solution to extend current convolutional networks and endow artificial visual perception with a process to encode all feature affine transformations more efficiently.
In this paper, we investigate the efficiency of capsule networks and, pushing their capacity to the limits with an extreme architecture with barely 160K parameters, we prove that the proposed architecture is still able to achieve state-of-the-art results.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep convolutional neural networks, assisted by architectural design
strategies, make extensive use of data augmentation techniques and layers with
a high number of feature maps to embed object transformations. That is highly
inefficient and, for large datasets, implies a massive redundancy of feature
detectors. Even though capsule networks are still in their infancy, they
constitute a promising solution to extend current convolutional networks and
endow artificial visual perception with a process to encode more efficiently
all feature affine transformations. Indeed, a properly working capsule network
should theoretically achieve better results with a considerably lower parameter
count, owing to its intrinsic capability to generalize to novel viewpoints.
Nevertheless, little attention has been given to this relevant aspect. In this
paper, we investigate the efficiency of capsule networks and, pushing their
capacity to the limits with an extreme architecture with barely 160K
parameters, we prove that the proposed architecture is still able to achieve
state-of-the-art results on three different datasets with only 2% of the
original CapsNet parameters. Moreover, we replace dynamic routing with a novel
non-iterative, highly parallelizable routing algorithm that can easily cope
with a reduced number of capsules. Extensive experimentation with other capsule
implementations has proved the effectiveness of our methodology and the
capability of capsule networks to efficiently embed visual representations that are more
prone to generalization.
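The abstract does not detail the routing step itself. Purely as an illustration, a non-iterative, attention-style routing pass between two capsule layers could be sketched as below in PyTorch; the tensor shapes, the learned log-prior `b`, the sqrt(d) scaling, and the agreement score are assumptions made for this example, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, eps=1e-8):
    """Standard capsule squashing non-linearity."""
    norm = s.norm(dim=-1, keepdim=True)
    return (norm ** 2 / (1 + norm ** 2)) * s / (norm + eps)

class SelfAttentionRouting(nn.Module):
    """Sketch of a single-pass, attention-style routing step: couplings are
    computed once from the agreement between votes, with no iterative loop."""

    def __init__(self, num_lower, num_higher, d_lower, d_higher):
        super().__init__()
        # One transformation matrix per (lower, higher) capsule pair.
        self.W = nn.Parameter(0.01 * torch.randn(num_lower, num_higher, d_lower, d_higher))
        # Learned log-prior per (lower, higher) pair (an assumption of this sketch).
        self.b = nn.Parameter(torch.zeros(num_lower, num_higher))

    def forward(self, u):  # u: (batch, num_lower, d_lower)
        # Votes of every lower capsule for every higher capsule.
        u_hat = torch.einsum('bnd,nmde->bnme', u, self.W)     # (batch, n, m, d_higher)
        d = u_hat.shape[-1]
        # Agreement of each vote with all votes for the same higher capsule,
        # scaled as in dot-product attention.
        scores = torch.einsum('bnme,bkme->bnm', u_hat, u_hat) / d ** 0.5
        c = F.softmax(scores + self.b, dim=-1)                 # couplings over higher capsules
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)               # (batch, num_higher, d_higher)
        return squash(s)
```

Because the couplings come from a single matrix product over the votes, the whole step is one forward pass and parallelizes trivially, which is the property the abstract highlights.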
Related papers
- Hierarchical Object-Centric Learning with Capsule Networks [0.0]
Capsule networks (CapsNets) were introduced to address the limitations of convolutional neural networks.
This thesis investigates the intriguing aspects of CapsNets and focuses on three key questions to unlock their full potential.
arXiv Detail & Related papers (2024-05-30T09:10:33Z)
- ProtoCaps: A Fast and Non-Iterative Capsule Network Routing Method [6.028175460199198]
We introduce a novel, non-iterative routing mechanism for Capsule Networks.
We harness a shared capsule subspace, obviating the need to project each lower-level capsule to each higher-level capsule.
Our findings underscore the potential of our proposed methodology in enhancing the operational efficiency and performance of Capsule Networks.
arXiv Detail & Related papers (2023-07-19T12:39:40Z)
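The shared-subspace idea in the ProtoCaps summary above can be illustrated with a rough parameter count; the layer sizes below are invented for the example and the snippet is not drawn from the ProtoCaps implementation.

```python
import torch
import torch.nn as nn

num_lower, num_higher, d_in, d_out = 1152, 10, 8, 16

# Per-pair projections (dynamic-routing style): one matrix for every
# (lower capsule, higher capsule) pair.
W_pairwise = nn.Parameter(torch.randn(num_lower, num_higher, d_in, d_out))

# A single projection into a shared capsule subspace, in the spirit of the
# summary above: parameters no longer scale with num_lower * num_higher.
W_shared = nn.Parameter(torch.randn(d_in, d_out))

print(W_pairwise.numel())  # 1,474,560 parameters
print(W_shared.numel())    # 128 parameters
```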
- ME-CapsNet: A Multi-Enhanced Capsule Networks with Routing Mechanism [0.0]
This research introduces a novel solution that uses sophisticated optimization to enhance both the spatial and channel components within each layer's receptive field.
We propose ME-CapsNet, introducing deeper convolutional layers to extract important features before strategically passing them through modules of capsule layers.
The deeper convolutional layers include blocks of Squeeze-and-Excitation networks, which use a sampling approach to reconstruct channel interdependencies without much loss of important feature information.
arXiv Detail & Related papers (2022-03-29T13:29:38Z)
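The Squeeze-and-Excitation blocks mentioned in the ME-CapsNet summary above are a standard building block; a minimal PyTorch version is sketched below, with the reduction ratio and layer layout following common convention rather than that paper's exact configuration.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Minimal Squeeze-and-Excitation block: global-average-pool the spatial
    dimensions ("squeeze"), pass through a small bottleneck MLP, and rescale
    each channel with the resulting gates ("excitation")."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                       # x: (batch, channels, H, W)
        s = x.mean(dim=(2, 3))                  # squeeze to (batch, channels)
        s = torch.relu(self.fc1(s))
        s = torch.sigmoid(self.fc2(s))          # per-channel gates in (0, 1)
        return x * s[:, :, None, None]          # excitation: rescale channels
```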
- Deformable Capsules for Object Detection [3.702343116848637]
We introduce a new family of capsule networks, deformable capsules (DeformCaps), to address a very important problem in computer vision: object detection.
We demonstrate that the proposed methods efficiently scale up to create the first-ever capsule network for object detection in the literature.
arXiv Detail & Related papers (2021-04-11T15:36:30Z)
- CondenseNet V2: Sparse Feature Reactivation for Deep Networks [87.38447745642479]
Reusing features in deep networks through dense connectivity is an effective way to achieve high computational efficiency.
We propose an alternative approach named sparse feature reactivation (SFR), aiming to actively increase the utility of features for reuse.
Our experiments show that the proposed models achieve promising performance on image classification (ImageNet and CIFAR) and object detection (MS COCO) in terms of both theoretical efficiency and practical speed.
arXiv Detail & Related papers (2021-04-09T14:12:43Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same fixed path through the network, DG-Net aggregates features dynamically at each node, which gives the network greater representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Structured Convolutions for Efficient Neural Network Design [65.36569572213027]
We tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks.
We show how this decomposition can be applied to 2D and 3D kernels as well as the fully-connected layers.
arXiv Detail & Related papers (2020-08-06T04:38:38Z)
- Wasserstein Routed Capsule Networks [90.16542156512405]
We propose a new parameter-efficient capsule architecture that is able to tackle complex tasks.
We show that our network is able to substantially outperform other capsule approaches by over 1.2 % on CIFAR-10.
arXiv Detail & Related papers (2020-07-22T14:38:05Z)
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
- Convolutional Networks with Dense Connectivity [59.30634544498946]
We introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.
For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers.
We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks.
arXiv Detail & Related papers (2020-01-08T06:54:53Z)
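The dense connectivity described above amounts to feeding every layer the concatenation of all earlier feature maps; a minimal dense block in that spirit is sketched below, with the growth rate and layer composition chosen for illustration rather than matching the exact DenseNet configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: each layer receives the concatenation of the block
    input and every previous layer's output, and contributes growth_rate new
    feature maps."""

    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            )
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))  # reuse all earlier maps
        return torch.cat(features, dim=1)
```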