DL-CapsNet: A Deep and Light Capsule Network
- URL: http://arxiv.org/abs/2512.00061v1
- Date: Sun, 23 Nov 2025 05:45:11 GMT
- Title: DL-CapsNet: A Deep and Light Capsule Network
- Authors: Pouya Shiri, Amirali Baniasadi
- Abstract summary: We propose a deep variant of CapsNet consisting of several capsule layers. DL-CapsNet, while being highly accurate, employs a small number of parameters and delivers faster training and inference.
- Score: 0.07161783472741746
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Capsule Network (CapsNet) is among the promising classifiers and a possible successor to classifiers built on Convolutional Neural Networks (CNNs). CapsNet is more accurate than CNNs at classifying images with overlapping categories and images with applied affine transformations. In this work, we propose a deep variant of CapsNet consisting of several capsule layers. In addition, we design the Capsule Summarization layer to reduce complexity by reducing the number of parameters. DL-CapsNet, while being highly accurate, employs a small number of parameters and delivers faster training and inference. DL-CapsNet can process complex datasets with a high number of categories.
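The abstract does not spell out DL-CapsNet's layer internals, but capsule layers in this family are typically built on the squash nonlinearity from the original CapsNet (Sabour et al., 2017), which shrinks a capsule vector's norm into [0, 1) while preserving its direction. A minimal NumPy sketch, assuming that standard formulation:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Standard CapsNet squash nonlinearity (Sabour et al., 2017):
    scales a capsule vector so its norm lies in [0, 1) while its
    direction is preserved."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

# A capsule with norm 5 is squashed to norm 25/26 ~ 0.96,
# keeping direction (0.6, 0.8).
v = squash(np.array([[3.0, 4.0]]))
```

Because the squashed norm stays below 1, a capsule's length can be read directly as an existence probability for the entity it represents.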
Related papers
- PrunedCaps: A Case For Primary Capsules Discrimination [0.06372261626436675]
We show that a pruned version of CapsNet performs up to 9.90 times faster than the conventional architecture. Our pruned architecture eliminates more than 95.36 percent of the floating-point operations in the dynamic routing stage of the architecture.
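For context, the dynamic routing stage being pruned here is the routing-by-agreement loop of the original CapsNet. A self-contained NumPy sketch of that standard procedure (the pruning strategy itself is specific to the paper and not shown):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, axis=-1, eps=1e-8):
    n2 = np.sum(s * s, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, iters=3):
    """Routing-by-agreement over prediction vectors u_hat of shape
    (num_in, num_out, dim), as in Sabour et al. (2017)."""
    b = np.zeros(u_hat.shape[:2])            # routing logits
    for _ in range(iters):
        c = softmax(b, axis=1)               # coupling coefficients per input capsule
        s = (c[..., None] * u_hat).sum(0)    # weighted sum per output capsule
        v = squash(s)                        # (num_out, dim) output capsules
        b = b + (u_hat * v[None]).sum(-1)    # agreement raises matching couplings
    return v

rng = np.random.default_rng(0)
v = dynamic_routing(rng.normal(size=(6, 4, 8)))  # 6 input -> 4 output capsules
```

The inner loop is all dense vector arithmetic over every (input, output) capsule pair, which is why pruning primary capsules removes such a large share of the FLOPs.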
arXiv Detail & Related papers (2025-12-02T04:31:58Z) - Convolutional Fully-Connected Capsule Network (CFC-CapsNet): A Novel and Fast Capsule Network [0.07161783472741746]
We introduce Convolutional Fully-Connected Capsule Network (CFC-CapsNet) to address the shortcomings of CapsNet. CFC-CapsNet produces fewer, yet more powerful capsules, resulting in higher network accuracy. Our experiments show that CFC-CapsNet achieves competitive accuracy with faster training and inference.
arXiv Detail & Related papers (2025-11-06T19:27:15Z) - Quick-CapsNet (QCN): A fast alternative to Capsule Networks [0.06372261626436675]
We introduce Quick-CapsNet (QCN) as a fast alternative to CapsNet. QCN produces fewer capsules, which results in a faster network. Inference is 5x faster on the MNIST, F-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2025-10-08T22:41:28Z) - RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks [11.302789770501303]
Capsule Networks (CapsNets) are able to hierarchically preserve the pose relationships between multiple objects for image classification tasks.
In this paper, we evaluate different factors affecting the robustness of CapsNets compared to traditional Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2023-04-08T09:58:35Z) - ASPCNet: A Deep Adaptive Spatial Pattern Capsule Network for Hyperspectral Image Classification [47.541691093680406]
This paper proposes an adaptive spatial pattern capsule network (ASPCNet) architecture.
It can rotate the sampling location of convolutional kernels on the basis of an enlarged receptive field.
Experiments on three public datasets demonstrate that ASPCNet can yield competitive performance with higher accuracies than state-of-the-art methods.
arXiv Detail & Related papers (2021-04-25T07:10:55Z) - Scalable Visual Transformers with Hierarchical Pooling [61.05787583247392]
We propose a Hierarchical Visual Transformer (HVT) which progressively pools visual tokens to shrink the sequence length.
It brings a great benefit by scaling dimensions of depth/width/resolution/patch size without introducing extra computational complexity.
Our HVT outperforms the competitive baselines on ImageNet and CIFAR-100 datasets.
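The key operation in this abstract, progressively shrinking the token sequence by pooling, can be illustrated with a toy average-pooling step over a token matrix. This is a sketch only: the real HVT interleaves pooling with transformer blocks, and `pool_tokens`/`stride` are illustrative names, not the paper's API.

```python
import numpy as np

def pool_tokens(tokens, stride=2):
    """Shrink a (num_tokens, dim) sequence by average-pooling groups of
    `stride` neighbouring tokens; stride=2 halves the sequence length."""
    n, d = tokens.shape
    n_keep = (n // stride) * stride              # drop a ragged tail, if any
    grouped = tokens[:n_keep].reshape(n_keep // stride, stride, d)
    return grouped.mean(axis=1)

tokens = np.arange(12.0).reshape(6, 2)           # 6 tokens of dimension 2
pooled = pool_tokens(tokens)                     # 3 tokens of dimension 2
```

Since self-attention cost grows quadratically with sequence length, each such pooling stage cuts the cost of all subsequent transformer blocks roughly fourfold.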
arXiv Detail & Related papers (2021-03-19T03:55:58Z) - Interpretable Graph Capsule Networks for Object Recognition [17.62514568986647]
We propose interpretable Graph Capsule Networks (GraCapsNets), where we replace the routing part with a multi-head attention-based Graph Pooling approach.
GraCapsNets achieve better classification performance with fewer parameters and better adversarial robustness, when compared to CapsNets.
arXiv Detail & Related papers (2020-12-03T03:18:00Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same path of the network, DG-Net aggregates features dynamically in each node, which allows the network to have more representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer [76.44375136492827]
Convolutional Neural Networks (CNNs) are often scale-sensitive.
We address this shortcoming by exploiting multi-scale features at a finer granularity.
The proposed convolution operation, named Poly-Scale Convolution (PSConv), mixes up a spectrum of dilation rates.
arXiv Detail & Related papers (2020-07-13T05:14:11Z) - Subspace Capsule Network [85.69796543499021]
SubSpace Capsule Network (SCN) exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity.
SCN can be applied to both discriminative and generative models without incurring computational overhead compared to CNN during test time.
arXiv Detail & Related papers (2020-02-07T17:51:56Z) - Convolutional Networks with Dense Connectivity [59.30634544498946]
We introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.
For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers.
We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks.
arXiv Detail & Related papers (2020-01-08T06:54:53Z)
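The DenseNet connectivity pattern described above is concrete enough to sketch. In this toy NumPy dense block, random linear maps stand in for the paper's BN-ReLU-Conv composite; `growth_rate` follows the paper's terminology for the number of feature channels each layer adds.

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Toy dense connectivity: each layer's input is the concatenation of
    ALL preceding feature maps, and its output is appended so that all
    subsequent layers receive it too. A random linear map + ReLU stands
    in for the BN-ReLU-Conv composite of the real DenseNet."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)          # all earlier outputs
        w = rng.normal(size=(inp.shape[-1], growth_rate))
        features.append(np.maximum(inp @ w, 0.0))        # ReLU activation
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
out = dense_block(np.ones((2, 16)), num_layers=3, growth_rate=12, rng=rng)
# channel count grows linearly: 16 + 3 * 12 = 52
```

Note the channel count grows only linearly in depth (input channels plus `num_layers * growth_rate`), even though the number of layer-to-layer connections grows quadratically.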
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.