Wasserstein Routed Capsule Networks
- URL: http://arxiv.org/abs/2007.11465v1
- Date: Wed, 22 Jul 2020 14:38:05 GMT
- Title: Wasserstein Routed Capsule Networks
- Authors: Alexander Fuchs, Franz Pernkopf
- Abstract summary: We propose a new parameter-efficient capsule architecture that is able to tackle complex tasks.
We show that our network is able to substantially outperform other capsule approaches by over 1.2% on CIFAR-10.
- Score: 90.16542156512405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Capsule networks offer interesting properties and provide an alternative to
today's deep neural network architectures. However, recent approaches have
failed to consistently achieve competitive results across different image
datasets. We propose a new parameter-efficient capsule architecture that is
able to tackle complex tasks by using neural networks trained with an
approximate Wasserstein objective to dynamically select capsules throughout the
entire architecture. This approach focuses on implementing a robust routing
scheme, which can deliver improved results using little overhead. We perform
several ablation studies verifying the proposed concepts and show that our
network is able to substantially outperform other capsule approaches by over
1.2% on CIFAR-10, using fewer parameters.
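The abstract does not spell out the routing mechanics, so the following is only a loose illustration: a minimal sketch of capsule routing driven by an entropy-regularized (Sinkhorn) approximation of an optimal-transport coupling between lower- and higher-level capsules. Function names, tensor shapes, and the Sinkhorn choice are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch, NOT the paper's code: routing weights obtained from an
# entropy-regularized optimal-transport (Sinkhorn) coupling between capsules.
import torch


def sinkhorn_coupling(cost, n_iters=20, eps=0.1):
    """Approximate OT coupling between lower (rows) and higher (cols) capsules."""
    K = torch.exp(-cost / eps)                                  # (B, n_low, n_high)
    u = torch.ones(K.shape[0], K.shape[1], device=K.device)     # row scalings
    v = torch.ones(K.shape[0], K.shape[2], device=K.device)     # column scalings
    for _ in range(n_iters):
        u = 1.0 / (K @ v.unsqueeze(-1)).squeeze(-1).clamp_min(1e-9)
        v = 1.0 / (K.transpose(1, 2) @ u.unsqueeze(-1)).squeeze(-1).clamp_min(1e-9)
    return u.unsqueeze(-1) * K * v.unsqueeze(1)                 # coupling matrix


def wasserstein_style_routing(votes):
    """votes: (B, n_low, n_high, dim), lower-level predictions for higher capsules."""
    higher = votes.mean(dim=1, keepdim=True)                    # crude initial estimate
    cost = ((votes - higher) ** 2).sum(-1)                      # disagreement cost (B, n_low, n_high)
    coupling = sinkhorn_coupling(cost)
    weights = coupling / coupling.sum(dim=1, keepdim=True).clamp_min(1e-9)
    return (weights.unsqueeze(-1) * votes).sum(dim=1)           # higher capsules (B, n_high, dim)
```

In such a scheme the coupling plays the role of the routing coefficients, and the associated transport cost could also serve as the approximate Wasserstein training objective mentioned in the abstract.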
Related papers
- Efficient and Accurate Hyperspectral Image Demosaicing with Neural Network Architectures [3.386560551295746]
This study investigates the effectiveness of neural network architectures in hyperspectral image demosaicing.
We introduce a range of network models and modifications, and compare them with classical methods and existing reference network approaches.
Results indicate that our networks outperform or match reference models on both datasets, demonstrating exceptional performance.
arXiv Detail & Related papers (2023-12-21T08:02:49Z)
- ProtoCaps: A Fast and Non-Iterative Capsule Network Routing Method [6.028175460199198]
We introduce a novel, non-iterative routing mechanism for Capsule Networks.
We harness a shared Capsule subspace, negating the need to project each lower-level Capsule to each higher-level Capsule.
Our findings underscore the potential of our proposed methodology in enhancing the operational efficiency and performance of Capsule Networks.
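As a loose illustration of the shared-subspace idea (the class name, shapes, and prototype-based scoring below are assumptions, not the ProtoCaps implementation), a single shared projection can replace the usual per-pair transformation matrices:

```python
# Hypothetical sketch, NOT the ProtoCaps code: one shared projection replaces
# the per-(lower, higher) transform matrices, routed in a single pass.
import torch
import torch.nn as nn


class SharedSubspaceRouting(nn.Module):
    def __init__(self, in_dim, shared_dim, n_high, out_dim):
        super().__init__()
        self.to_shared = nn.Linear(in_dim, shared_dim)        # one projection for all capsule pairs
        self.prototypes = nn.Parameter(torch.randn(n_high, shared_dim))  # one prototype per higher capsule
        self.to_out = nn.Linear(shared_dim, out_dim)

    def forward(self, lower):                                 # lower: (B, n_low, in_dim)
        shared = self.to_shared(lower)                        # (B, n_low, shared_dim)
        scores = torch.einsum('bls,hs->blh', shared, self.prototypes)  # non-iterative agreement
        weights = scores.softmax(dim=1)                       # weight over lower-level capsules
        higher = torch.einsum('blh,bls->bhs', weights, shared)
        return self.to_out(higher)                            # (B, n_high, out_dim)
```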
arXiv Detail & Related papers (2023-07-19T12:39:40Z)
- Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization [50.50023451369742]
Pruning-as-Search (PaS) is an end-to-end channel pruning method to search out desired sub-network automatically and efficiently.
Our proposed architecture outperforms prior art by around 1.0% top-1 accuracy on the ImageNet-1000 classification task.
arXiv Detail & Related papers (2022-06-02T17:58:54Z)
- SIRe-Networks: Skip Connections over Interlaced Multi-Task Learning and Residual Connections for Structure Preserving Object Classification [28.02302915971059]
In this paper, we introduce an interlaced multi-task learning strategy, named SIRe, to reduce the vanishing gradient problem in relation to the object classification task.
The presented methodology directly improves a convolutional neural network (CNN) by enforcing the input image structure preservation through auto-encoders.
To validate the presented methodology, a simple CNN and various implementations of famous networks are extended via the SIRe strategy and extensively tested on the CIFAR100 dataset.
arXiv Detail & Related papers (2021-10-06T13:54:49Z)
- DAAS: Differentiable Architecture and Augmentation Policy Search [107.53318939844422]
This work considers the possible coupling between neural architectures and data augmentation and proposes an effective algorithm jointly searching for them.
Our approach achieves 97.91% accuracy on CIFAR-10 and 76.6% Top-1 accuracy on ImageNet dataset, showing the outstanding performance of our search algorithm.
arXiv Detail & Related papers (2021-09-30T17:15:17Z)
- D-DARTS: Distributed Differentiable Architecture Search [75.12821786565318]
Differentiable ARchiTecture Search (DARTS) is one of the most trending Neural Architecture Search (NAS) methods.
We propose D-DARTS, a novel solution that addresses this problem by nesting several neural networks at cell-level.
arXiv Detail & Related papers (2021-08-20T09:07:01Z)
- Efficient-CapsNet: Capsule Network with Self-Attention Routing [0.0]
Deep convolutional neural networks make extensive use of data augmentation techniques and layers with a high number of feature maps to embed object transformations.
Capsule networks are a promising solution to extend current convolutional networks and endow artificial visual perception with a process to encode all feature affine transformations more efficiently.
In this paper, we investigate the efficiency of capsule networks and, pushing their capacity to the limit with an extreme architecture of barely 160K parameters, show that the proposed architecture is still able to achieve state-of-the-art results.
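As an illustration only (the attention form below is an assumption, not the Efficient-CapsNet code), self-attention routing can produce the coupling coefficients in a single pass from the agreement between votes:

```python
# Hypothetical sketch, NOT the Efficient-CapsNet code: non-iterative routing
# where coupling coefficients come from self-attention over vote agreement.
import torch


def self_attention_routing(votes):
    """votes: (B, n_low, n_high, dim) -- predictions for each higher capsule."""
    d = votes.shape[-1]
    v = votes.permute(0, 2, 1, 3)                             # (B, n_high, n_low, dim)
    agreement = v @ v.transpose(-1, -2) / d ** 0.5            # (B, n_high, n_low, n_low)
    logits = agreement.sum(dim=-1)                            # (B, n_high, n_low)
    coupling = logits.softmax(dim=-1).unsqueeze(-1)           # weights over lower capsules
    return (coupling * v).sum(dim=2)                          # higher capsules (B, n_high, dim)
```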
arXiv Detail & Related papers (2021-01-29T09:56:44Z)
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
- Neural Inheritance Relation Guided One-Shot Layer Assignment Search [44.82474044430184]
We investigate the impact of different layer assignments to the network performance by building an architecture dataset of layer assignment on CIFAR-100.
We find a neural inheritance relation among the networks with different layer assignments, that is, the optimal layer assignments for deeper networks always inherit from those for shallow networks.
Inspired by this neural inheritance relation, we propose an efficient one-shot layer assignment search approach via inherited sampling.
arXiv Detail & Related papers (2020-02-28T07:40:48Z)