OrthCaps: An Orthogonal CapsNet with Sparse Attention Routing and Pruning
- URL: http://arxiv.org/abs/2403.13351v1
- Date: Wed, 20 Mar 2024 07:25:24 GMT
- Title: OrthCaps: An Orthogonal CapsNet with Sparse Attention Routing and Pruning
- Authors: Xinyu Geng, Jiaming Wang, Jiawei Gong, Yuerong Xue, Jun Xu, Fanglin Chen, Xiaolin Huang
- Abstract summary: Redundancy is a persistent challenge in Capsule Networks (CapsNet).
We propose an Orthogonal Capsule Network (OrthCaps) to reduce redundancy, improve routing performance, and decrease parameter counts.
- Score: 21.5857226735951
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Redundancy is a persistent challenge in Capsule Networks (CapsNet), leading to high computational costs and parameter counts. Although previous works have introduced pruning after the initial capsule layer, dynamic routing's fully connected nature and non-orthogonal weight matrices reintroduce redundancy in deeper layers. Moreover, dynamic routing requires iterating to converge, further increasing computational demands. In this paper, we propose an Orthogonal Capsule Network (OrthCaps) to reduce redundancy, improve routing performance, and decrease parameter counts. Firstly, an efficient pruned capsule layer is introduced to discard redundant capsules. Secondly, dynamic routing is replaced with orthogonal sparse attention routing, eliminating the need for iterations and fully connected structures. Lastly, weight matrices during routing are orthogonalized to sustain low capsule similarity; to the best of our knowledge, this is the first approach to introduce orthogonality into CapsNet. Our experiments on baseline datasets affirm the efficiency and robustness of OrthCaps in classification tasks, and ablation studies validate the criticality of each component. Remarkably, OrthCaps-Shallow outperforms other Capsule Network benchmarks on four datasets while utilizing only 110k parameters, a mere 1.25% of a standard Capsule Network's total. To the best of our knowledge, it achieves the smallest parameter count among existing Capsule Networks. Similarly, OrthCaps-Deep demonstrates competitive performance across four datasets while utilizing only 1.2% of the parameters required by its counterparts.
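As a rough, hedged illustration of the routing described above, the sketch below combines a single-pass attention step with QR-orthogonalized projection weights and top-k sparsification of the attention logits. The class name, shapes, the QR parametrization, and the top-k sparse softmax are assumptions made here for illustration; the paper's actual orthogonalization and sparse attention functions may differ.

```python
# Illustrative sketch only -- NOT the authors' OrthCaps code. QR-based
# orthogonalization and a top-k sparse softmax stand in for whatever
# orthogonalization and sparse attention functions the paper actually uses;
# the class name, shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrthSparseAttentionRouting(nn.Module):
    def __init__(self, num_out: int, dim: int, topk: int = 4):
        super().__init__()
        # Unconstrained parameter; an orthogonal factor is extracted via QR so
        # each output capsule's projection stays orthogonal (low capsule similarity).
        self.weight = nn.Parameter(torch.randn(num_out, dim, dim) * 0.1)
        self.topk = topk

    def forward(self, u: torch.Tensor) -> torch.Tensor:   # u: (B, num_in, dim)
        Q, _ = torch.linalg.qr(self.weight)                # (num_out, dim, dim), orthogonal
        # Predict every output capsule from every input capsule.
        u_hat = torch.einsum('bid,odk->boik', u, Q)        # (B, num_out, num_in, dim)
        # Single pass: no iterative agreement loop as in dynamic routing.
        q = u_hat.mean(dim=2, keepdim=True)                # crude per-output query
        logits = (q * u_hat).sum(-1) / u.shape[-1] ** 0.5  # (B, num_out, num_in)
        # Sparsify by keeping only the top-k contributing input capsules.
        masked = torch.full_like(logits, float('-inf'))
        topv, topi = logits.topk(self.topk, dim=-1)
        masked.scatter_(-1, topi, topv)
        attn = F.softmax(masked, dim=-1)                   # exactly zero off the top-k
        return (attn.unsqueeze(-1) * u_hat).sum(dim=2)     # (B, num_out, dim)

v = OrthSparseAttentionRouting(num_out=10, dim=16)(torch.randn(2, 32, 16))
print(v.shape)  # torch.Size([2, 10, 16])
```

Keeping the projections (near-)orthogonal is what, per the abstract, sustains low capsule similarity; the top-k mask above merely mimics the effect of a sparse attention function.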
Related papers
- Hierarchical Object-Centric Learning with Capsule Networks [0.0]
Capsule networks (CapsNets) were introduced to address the limitations of convolutional neural networks.
This thesis investigates the intriguing aspects of CapsNets and focuses on three key questions to unlock their full potential.
arXiv Detail & Related papers (2024-05-30T09:10:33Z) - ProtoCaps: A Fast and Non-Iterative Capsule Network Routing Method [6.028175460199198]
We introduce a novel, non-iterative routing mechanism for Capsule Networks.
We harness a shared capsule subspace, negating the need to project each lower-level capsule to each higher-level capsule.
Our findings underscore the potential of our proposed methodology in enhancing the operational efficiency and performance of Capsule Networks.
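One plausible reading of the shared-subspace idea, sketched under assumptions: a single projection maps every lower-level capsule into one common subspace, and learned per-output prototype vectors replace the per-pair transformation matrices of dynamic routing. The module name, shapes, and the prototype formulation are not taken from the paper.

```python
# One plausible reading of a shared capsule subspace, for illustration only:
# a single projection (instead of per-pair transform matrices) plus learned
# per-output prototype vectors. Names, shapes and the prototype formulation
# are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSubspaceRouting(nn.Module):
    def __init__(self, in_dim: int, sub_dim: int, num_out: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, sub_dim, bias=False)      # one shared projection
        self.prototypes = nn.Parameter(torch.randn(num_out, sub_dim))

    def forward(self, u: torch.Tensor) -> torch.Tensor:         # u: (B, num_in, in_dim)
        z = self.proj(u)                                         # all inputs share one subspace
        sim = torch.einsum('bis,os->bio', z, self.prototypes)    # input/prototype agreement
        attn = F.softmax(sim, dim=1)                             # weight inputs per output capsule
        return torch.einsum('bio,bis->bos', attn, z)             # (B, num_out, sub_dim)

out = SharedSubspaceRouting(in_dim=8, sub_dim=16, num_out=10)(torch.randn(2, 32, 8))
print(out.shape)  # torch.Size([2, 10, 16])
```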
arXiv Detail & Related papers (2023-07-19T12:39:40Z) - PointCaps: Raw Point Cloud Processing using Capsule Networks with Euclidean Distance Routing [2.916675178729016]
Raw point cloud processing using capsule networks is widely adopted in classification, reconstruction, and segmentation.
Most existing capsule-based network approaches are computationally heavy and fail to represent the entire point cloud as a single capsule.
We propose PointCaps, a novel convolutional capsule architecture with parameter sharing.
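The Euclidean-distance routing named in the title can be sketched, under assumptions, as a softmax over negative distances between capsule predictions and a running output estimate; the function below is illustrative only and omits PointCaps' convolutional parameter sharing.

```python
# Hedged sketch of Euclidean-distance routing: coupling coefficients come from
# a softmax over negative distances between predictions and the running output
# estimate. Iteration count and shapes are assumptions, and PointCaps'
# convolutional parameter sharing is omitted.
import torch
import torch.nn.functional as F

def euclidean_distance_routing(u_hat: torch.Tensor, iters: int = 3) -> torch.Tensor:
    # u_hat: (B, num_out, num_in, dim) predictions from lower-level capsules
    v = u_hat.mean(dim=2)                                    # initial output estimate
    for _ in range(iters):
        dist = torch.norm(u_hat - v.unsqueeze(2), dim=-1)    # (B, num_out, num_in)
        c = F.softmax(-dist, dim=-1)                          # closer predictions weigh more
        v = (c.unsqueeze(-1) * u_hat).sum(dim=2)              # re-estimate output capsules
    return v                                                  # (B, num_out, dim)

v = euclidean_distance_routing(torch.randn(2, 10, 32, 16))
print(v.shape)  # torch.Size([2, 10, 16])
```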
arXiv Detail & Related papers (2021-12-21T14:34:39Z) - Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitive as the baseline.
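For orientation, the sketch below shows the two named ingredients generically (magnitude pruning followed by uniform quantization); it is not the paper's storage format or source-coding scheme, and the sparsity and bit-width values are arbitrary.

```python
# Generic illustration of magnitude pruning followed by uniform quantization --
# not the paper's storage format or source-coding scheme. Sparsity and
# bit-width values are arbitrary assumptions.
import torch

def prune_and_quantize(w: torch.Tensor, sparsity: float = 0.9, bits: int = 8):
    # 1) Magnitude pruning: zero out the smallest |w| until `sparsity` is reached.
    k = int(w.numel() * sparsity)
    threshold = w.abs().flatten().kthvalue(k).values if k > 0 else torch.tensor(0.0)
    mask = w.abs() > threshold
    w_pruned = w * mask
    # 2) Uniform quantization of the surviving weights to roughly `bits` levels.
    levels = 2 ** bits - 1
    scale = w_pruned.abs().max() / (levels / 2) + 1e-12
    w_q = torch.round(w_pruned / scale).clamp(-(levels // 2) - 1, levels // 2)
    return w_q * scale, mask   # dequantized weights plus the sparsity mask

w_hat, mask = prune_and_quantize(torch.randn(256, 512))
print(f"kept {mask.float().mean().item():.1%} of the weights")
```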
arXiv Detail & Related papers (2021-08-28T20:39:54Z) - Efficient-CapsNet: Capsule Network with Self-Attention Routing [0.0]
Deep convolutional neural networks make extensive use of data augmentation techniques and layers with a high number of feature maps to embed object transformations.
Capsule networks are a promising solution to extend current convolutional networks and endow artificial visual perception with a process to encode all feature affine transformations more efficiently.
In this paper, we investigate the efficiency of capsule networks and, pushing their capacity to the limits with an extreme architecture with barely 160K parameters, we prove that the proposed architecture is still able to achieve state-of-the-art results.
arXiv Detail & Related papers (2021-01-29T09:56:44Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same fixed path through the network, DG-Net aggregates features dynamically at each node, giving the network greater representational capacity.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - Wasserstein Routed Capsule Networks [90.16542156512405]
We propose a new parameter-efficient capsule architecture that is able to tackle complex tasks.
We show that our network is able to substantially outperform other capsule approaches by over 1.2% on CIFAR-10.
arXiv Detail & Related papers (2020-07-22T14:38:05Z) - Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning [83.59005356327103]
A common limitation of most existing pruning techniques is that they require pre-training of the network at least once before pruning.
We propose STAMP, which task-adaptively prunes a network pretrained on a large reference dataset by generating a pruning mask on it as a function of the target dataset.
We validate STAMP against recent advanced pruning methods on benchmark datasets.
arXiv Detail & Related papers (2020-06-22T10:57:43Z) - Progressive Skeletonization: Trimming more fat from a network at initialization [76.11947969140608]
We propose an objective to find a skeletonized network with maximum connection sensitivity.
We then propose two approximate procedures to maximize our objective.
Our approach provides remarkably improved performance at higher pruning levels.
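A minimal, hedged sketch of connection-sensitivity scoring at initialization (SNIP-style |w * dL/dw| saliency) follows; the paper's contribution is a progressive, approximate maximization of such an objective, which this one-shot version does not reproduce.

```python
# Minimal sketch of connection-sensitivity scoring at initialization (SNIP-style
# |w * dL/dw| saliency). The paper's progressive, approximate maximization of
# such an objective is not reproduced here; this one-shot version is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def connection_sensitivity_masks(model: nn.Module, x, y, keep_ratio: float = 0.1):
    weights = [p for p in model.parameters() if p.dim() > 1]   # weight matrices, skip biases
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, weights)
    scores = torch.cat([(w * g).abs().flatten() for w, g in zip(weights, grads)])
    threshold = scores.topk(int(len(scores) * keep_ratio)).values.min()
    return [((w * g).abs() >= threshold).float() for w, g in zip(weights, grads)]

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
masks = connection_sensitivity_masks(model, torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,)))
print([round(m.mean().item(), 3) for m in masks])  # fraction of connections kept per layer
```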
arXiv Detail & Related papers (2020-06-16T11:32:47Z) - Subspace Capsule Network [85.69796543499021]
Subspace Capsule Network (SCN) exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity.
SCN can be applied to both discriminative and generative models without incurring computational overhead compared to CNNs at test time.
arXiv Detail & Related papers (2020-02-07T17:51:56Z) - No Routing Needed Between Capsules [0.0]
Homogeneous Vector Capsules (HVCs) use element-wise multiplication rather than matrix multiplication.
We show that a simple convolutional neural network using HVCs performs as well as the prior best-performing capsule network on MNIST.
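The contrast can be sketched in a few lines: a classic capsule transform applies one matrix per capsule, whereas an HVC-style transform multiplies element-wise with a single weight vector, so parameters scale with the capsule dimension rather than its square. Dimensions and names below are assumptions.

```python
# Tiny sketch contrasting a matrix-multiplied capsule transform with the
# element-wise (Hadamard) transform that HVCs use; dimensions and names
# are assumptions for illustration.
import torch

num_caps, dim = 10, 16
u = torch.randn(num_caps, dim)                   # lower-level capsule vectors

# Classic capsule transform: one (dim x dim) matrix per capsule.
W_matrix = torch.randn(num_caps, dim, dim)
u_hat_matrix = torch.einsum('nij,nj->ni', W_matrix, u)

# Homogeneous Vector Capsules: one weight vector per capsule, applied
# element-wise, so parameters scale with dim instead of dim^2 and no
# routing between capsule layers is required.
W_hvc = torch.randn(num_caps, dim)
u_hat_hvc = W_hvc * u                            # element-wise multiplication

print(u_hat_matrix.shape, u_hat_hvc.shape)       # both torch.Size([10, 16])
```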
arXiv Detail & Related papers (2020-01-24T18:37:41Z)