LE-CapsNet: A Light and Enhanced Capsule Network
- URL: http://arxiv.org/abs/2511.11708v1
- Date: Wed, 12 Nov 2025 15:45:48 GMT
- Title: LE-CapsNet: A Light and Enhanced Capsule Network
- Authors: Pouya Shiri, Amirali Baniasadi
- Abstract summary: Capsule Network (CapsNet) has several advantages over CNNs. CapsNet is slow due to its different structure. We propose LE-CapsNet as a light, enhanced and more accurate variant of CapsNet.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The Capsule Network (CapsNet) classifier has several advantages over CNNs, including better detection of images containing overlapping categories and higher accuracy on transformed images. Despite these advantages, CapsNet is slow due to its different structure. In addition, CapsNet is resource-hungry, includes many parameters, and lags behind CNNs in accuracy. In this work, we propose LE-CapsNet as a light, enhanced and more accurate variant of CapsNet. Using 3.8M weights, LE-CapsNet obtains 76.73% accuracy on the CIFAR-10 dataset while performing inference 4x faster than CapsNet. In addition, our proposed network is more robust at detecting images with affine transformations compared to CapsNet. We achieve 94.3% accuracy on the AffNIST dataset (compared to 90.52% for CapsNet).
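Much of the speed and resource cost attributed to CapsNet above comes from its iterative dynamic routing between capsule layers. As a rough illustration only (not the LE-CapsNet architecture itself), here is a minimal NumPy sketch of the squash non-linearity and routing-by-agreement loop from the original CapsNet formulation; the capsule counts and dimensions are arbitrary toy values:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash non-linearity: short vectors shrink toward zero,
    # long vectors approach (but never reach) unit length.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    # u_hat: predictions from lower capsules, shape (n_in, n_out, dim_out)
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))  # routing logits
    for _ in range(n_iters):
        # coupling coefficients: softmax over output capsules
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted sum of predictions
        v = squash(s)                            # output capsule vectors
        b = b + (u_hat * v[None]).sum(axis=-1)   # agreement update
    return v

# toy example: 8 input capsules routing to 10 output capsules of dim 16
v = dynamic_routing(np.random.randn(8, 10, 16))
```

Each routing iteration recomputes the coupling coefficients over all capsule pairs, which is why reducing the number of capsules or routing iterations (as the lighter variants listed below do) directly cuts inference time.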
Related papers
- PrunedCaps: A Case For Primary Capsules Discrimination
We show that a pruned version of CapsNet performs up to 9.90 times faster than the conventional architecture. Our pruned architecture saves more than 95.36% of the floating-point operations in the dynamic routing stage of the architecture.
arXiv Detail & Related papers (2025-12-02T04:31:58Z)
- DL-CapsNet: A Deep and Light Capsule Network
We propose a deep variant of CapsNet consisting of several capsule layers. DL-CapsNet, while being highly accurate, employs a small number of parameters and delivers faster training and inference.
arXiv Detail & Related papers (2025-11-23T05:45:11Z)
- Convolutional Fully-Connected Capsule Network (CFC-CapsNet): A Novel and Fast Capsule Network
We introduce the Convolutional Fully-Connected Capsule Network (CFC-CapsNet) to address the shortcomings of CapsNet. CFC-CapsNet produces fewer, yet more powerful capsules, resulting in higher network accuracy. Our experiments show that CFC-CapsNet achieves competitive accuracy and faster training and inference.
arXiv Detail & Related papers (2025-11-06T19:27:15Z)
- Quick-CapsNet (QCN): A fast alternative to Capsule Networks
We introduce Quick-CapsNet (QCN) as a fast alternative to CapsNet. QCN produces fewer capsules, which results in a faster network. Inference is 5x faster on the MNIST, F-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2025-10-08T22:41:28Z)
- VeCLIP: Improving CLIP Training via Visual-enriched Captions
This study introduces a scalable pipeline for noisy caption rewriting.
We emphasize the incorporation of visual concepts into captions, termed Visual-enriched Captions (VeCap).
We showcase the adaptation of this method for training CLIP on large-scale web-crawled datasets, termed VeCLIP.
arXiv Detail & Related papers (2023-10-11T17:49:13Z)
- Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement
We propose a strategy to improve a dataset once such that the accuracy of any model architecture trained on the reinforced dataset is improved at no additional training cost for users.
We create a reinforced version of the ImageNet training dataset, called ImageNet+, as well as reinforced datasets CIFAR-100+, Flowers-102+, and Food-101+.
Models trained with ImageNet+ are more accurate, robust, and calibrated, and transfer well to downstream tasks.
arXiv Detail & Related papers (2023-03-15T23:10:17Z)
- MogaNet: Multi-order Gated Aggregation Network
We propose a new family of modern ConvNets, dubbed MogaNet, for discriminative visual representation learning. MogaNet encapsulates conceptually simple yet effective convolutions and gated aggregation into a compact module. MogaNet exhibits great scalability, impressive parameter efficiency, and competitive performance compared to state-of-the-art ViTs and ConvNets on ImageNet.
arXiv Detail & Related papers (2022-11-07T04:31:17Z)
- Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs
We revisit large kernel design in modern convolutional neural networks (CNNs).
Inspired by recent advances of vision transformers (ViTs), in this paper, we demonstrate that using a few large convolutional kernels instead of a stack of small kernels could be a more powerful paradigm.
We propose RepLKNet, a pure CNN architecture whose kernel size is as large as 31x31, in contrast to commonly used 3x3.
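A quick back-of-envelope calculation shows why kernels this large are only affordable as depthwise convolutions, which is the design RepLKNet builds on. This is an illustrative sketch, not code from the paper; the channel count of 256 is an arbitrary assumption:

```python
def depthwise_params(k, channels):
    # weight count of one depthwise k x k convolution (one filter per channel)
    return k * k * channels

def n_stacked_3x3(target_rf):
    # stacked 3x3 convolutions needed to reach a target receptive field:
    # the first layer covers 3 pixels, each further layer adds 2
    return (target_rf - 1) // 2

channels = 256                                            # assumed width
large_kernel = depthwise_params(31, channels)             # one 31x31 layer
small_stack = n_stacked_3x3(31) * depthwise_params(3, channels)  # 15 layers
```

A single depthwise 31x31 layer carries more weights than the equivalent-receptive-field stack of 3x3 layers, but a *dense* 31x31 convolution would multiply that cost by the channel count again, which is why the depthwise form is the practical choice.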
arXiv Detail & Related papers (2022-03-13T17:22:44Z)
- Parallel Capsule Networks for Classification of White Blood Cells
Capsule Networks (CapsNets) are a machine learning architecture proposed to overcome some of the shortcomings of convolutional neural networks (CNNs).
We present a new architecture, parallel CapsNets, which exploits the concept of branching the network to isolate certain capsules.
arXiv Detail & Related papers (2021-08-05T14:30:44Z)
- Interpretable Graph Capsule Networks for Object Recognition
We propose interpretable Graph Capsule Networks (GraCapsNets), where we replace the routing part with a multi-head attention-based Graph Pooling approach.
GraCapsNets achieve better classification performance with fewer parameters and better adversarial robustness, when compared to CapsNets.
arXiv Detail & Related papers (2020-12-03T03:18:00Z)
- iCapsNets: Towards Interpretable Capsule Networks for Text Classification
Traditional machine learning methods are easy to interpret but achieve low accuracy.
We propose interpretable capsule networks (iCapsNets) to bridge this gap.
iCapsNets can be interpreted both locally and globally.
arXiv Detail & Related papers (2020-05-16T04:11:44Z)
- Q-CapsNets: A Specialized Framework for Quantizing Capsule Networks
Capsule Networks (CapsNets) have superior learning capabilities in machine learning tasks, like image classification, compared to the traditional CNNs.
CapsNets require extremely intense computations and are difficult to deploy in their original form on resource-constrained edge devices.
This paper makes the first attempt to quantize CapsNet models, to enable their efficient edge implementations, by developing a specialized quantization framework for CapsNets.
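The basic idea behind weight quantization can be illustrated with generic symmetric uniform quantization; note this is a standard textbook scheme, not the specialized Q-CapsNets framework itself:

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    # Symmetric uniform post-training quantization: map weights to signed
    # n_bits integers and back, returning dequantized weights and the scale.
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w)) / q_max
    q = np.clip(np.round(w / scale), -q_max, q_max)
    return q * scale, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16))        # a toy capsule weight matrix
w_q, scale = quantize_uniform(w, n_bits=8)
```

Each weight is then representable by an 8-bit integer plus one shared scale per tensor, which is what enables the memory and compute savings on edge devices; the per-element rounding error is bounded by half the scale.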
arXiv Detail & Related papers (2020-04-15T14:32:45Z)
- Improved Residual Networks for Image and Video Recognition
Residual networks (ResNets) represent a powerful type of convolutional neural network (CNN) architecture.
We show consistent improvements in accuracy and learning convergence over the baseline.
Our proposed approach allows us to train extremely deep networks, while the baseline shows severe optimization issues.
arXiv Detail & Related papers (2020-04-10T11:09:50Z)
- Fixing the train-test resolution discrepancy: FixEfficientNet
This paper provides an analysis of the performance of the EfficientNet image classifiers with several recent training procedures.
The resulting network, called FixEfficientNet, significantly outperforms the initial architecture with the same number of parameters.
arXiv Detail & Related papers (2020-03-18T14:22:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.