Subspace Capsule Network
- URL: http://arxiv.org/abs/2002.02924v1
- Date: Fri, 7 Feb 2020 17:51:56 GMT
- Title: Subspace Capsule Network
- Authors: Marzieh Edraki, Nazanin Rahnavard, Mubarak Shah
- Abstract summary: SubSpace Capsule Network (SCN) exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity.
SCN can be applied to both discriminative and generative models without incurring computational overhead compared to CNNs at test time.
- Score: 85.69796543499021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) have become a key asset in most
fields of AI. Despite their successful performance, CNNs suffer from a major
drawback: they fail to capture the hierarchy of spatial relations among
different parts of an entity. As a remedy to this problem, the idea of capsules
was proposed by Hinton. In this paper, we propose the SubSpace Capsule Network
(SCN) that exploits the idea of capsule networks to model possible variations
in the appearance or implicitly defined properties of an entity through a group
of capsule subspaces instead of simply grouping neurons to create capsules. A
capsule is created by projecting an input feature vector from a lower layer
onto the capsule subspace using a learnable transformation. This transformation
finds the degree of alignment of the input with the properties modeled by the
capsule subspace. We show that SCN is a general capsule network that can
successfully be applied to both discriminative and generative models without
incurring computational overhead compared to CNNs at test time.
Effectiveness of SCN is evaluated through a comprehensive set of experiments on
supervised image classification, semi-supervised image classification and
high-resolution image generation tasks using the generative adversarial network
(GAN) framework. SCN significantly improves the performance of the baseline
models in all 3 tasks.
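The abstract's core operation can be made concrete: a capsule is formed by orthogonally projecting an input feature vector onto a learned subspace, and the capsule's norm measures how well the input aligns with the properties that subspace models. Below is a minimal NumPy sketch of that projection; the function name, the random stand-in for the learnable basis W, and the dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def capsule_projection(x, W):
    """Project feature vector x (shape (d,)) onto the capsule subspace
    spanned by the columns of W (shape (d, c)). The norm of the result
    indicates the degree of alignment of x with the subspace."""
    # Orthogonal projector onto span(W): P = W (W^T W)^{-1} W^T
    P = W @ np.linalg.inv(W.T @ W) @ W.T
    return P @ x

rng = np.random.default_rng(0)
d, c = 8, 2
W = rng.standard_normal((d, c))  # learnable basis (random stand-in here)
x = rng.standard_normal(d)       # input feature vector from a lower layer

v = capsule_projection(x, W)     # capsule vector in the subspace
alignment = np.linalg.norm(v)    # degree of alignment with the subspace
```

In training, W would be a learnable parameter updated by backpropagation; projecting with the closed-form projector is what lets inference cost match an ordinary linear layer.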
Related papers
- Hierarchical Object-Centric Learning with Capsule Networks [0.0]
Capsule networks (CapsNets) were introduced to address limitations of convolutional neural networks.
This thesis investigates the intriguing aspects of CapsNets and focuses on three key questions to unlock their full potential.
arXiv Detail & Related papers (2024-05-30T09:10:33Z)
- Deep multi-prototype capsule networks [0.3823356975862005]
Capsule networks are a type of neural network that identify image parts and form the instantiation parameters of a whole hierarchically.
This paper presents a multi-prototype architecture for guiding capsule networks to represent the variations in the image parts.
The experimental results on MNIST, SVHN, C-Cube, CEDAR, MCYT, and UTSig datasets reveal that the proposed model outperforms others regarding image classification accuracy.
arXiv Detail & Related papers (2024-04-23T18:37:37Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies to control.
Most existing spike-based RL methods take the firing rate as the output of SNNs, and convert it to represent continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Learning with Capsules: A Survey [73.31150426300198]
Capsule networks were proposed as an alternative approach to Convolutional Neural Networks (CNNs) for learning object-centric representations.
Unlike CNNs, capsule networks are designed to explicitly model part-whole hierarchical relationships.
arXiv Detail & Related papers (2022-06-06T15:05:36Z)
- 3DConvCaps: 3DUnet with Convolutional Capsule Encoder for Medical Image Segmentation [1.863532786702135]
We propose a 3D encoder-decoder network with Convolutional Capsule (called 3DConvCaps) to learn lower-level features (short-range attention) with convolutional layers.
Our experiments on multiple datasets including iSeg-2017, Hippocampus, and Cardiac demonstrate that our 3DConvCaps network considerably outperforms previous capsule networks and 3D-UNets.
arXiv Detail & Related papers (2022-05-19T03:00:04Z)
- HP-Capsule: Unsupervised Face Part Discovery by Hierarchical Parsing Capsule Network [76.92310948325847]
We propose a Hierarchical Parsing Capsule Network (HP-Capsule) for unsupervised face subpart-part discovery.
HP-Capsule extends the application of capsule networks from digits to human faces and takes a step forward in showing how neural networks understand objects without human intervention.
arXiv Detail & Related papers (2022-03-21T01:39:41Z)
- ASPCNet: A Deep Adaptive Spatial Pattern Capsule Network for Hyperspectral Image Classification [47.541691093680406]
This paper proposes an adaptive spatial pattern capsule network (ASPCNet) architecture.
It can rotate the sampling location of convolutional kernels on the basis of an enlarged receptive field.
Experiments on three public datasets demonstrate that ASPCNet can yield competitive performance with higher accuracies than state-of-the-art methods.
arXiv Detail & Related papers (2021-04-25T07:10:55Z)
- Training Deep Capsule Networks with Residual Connections [0.0]
Capsule networks are a type of neural network that have recently gained increased popularity.
They consist of groups of neurons, called capsules, which encode properties of objects or object parts.
Most capsule network implementations use two to three capsule layers, which limits their applicability as expressivity grows exponentially with depth.
We propose a methodology to train deeper capsule networks using residual connections, which is evaluated on four datasets and three different routing algorithms.
Our experimental results show that in fact, performance increases when training deeper capsule networks.
arXiv Detail & Related papers (2021-04-15T11:42:44Z)
- Spatial Dependency Networks: Neural Layers for Improved Generative Image Modeling [79.15521784128102]
We introduce a novel neural network for building image generators (decoders) and apply it to variational autoencoders (VAEs).
In our spatial dependency networks (SDNs), feature maps at each level of a deep neural net are computed in a spatially coherent way.
We show that augmenting the decoder of a hierarchical VAE with spatial dependency layers considerably improves density estimation.
arXiv Detail & Related papers (2021-03-16T07:01:08Z)
- Examining the Benefits of Capsule Neural Networks [9.658250977094562]
Capsule networks are a newly developed class of neural networks that potentially address some of the deficiencies with traditional convolutional neural networks.
By replacing the standard scalar activations with vectors, capsule networks aim to be the next great development for computer vision applications.
arXiv Detail & Related papers (2020-01-29T17:18:43Z)
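One recurring idea in the list above, from "Training Deep Capsule Networks with Residual Connections", is to wrap a capsule layer in a ResNet-style identity shortcut so that deeper capsule stacks remain trainable. A minimal sketch of that pattern, using a toy linear-map-plus-squash capsule transform (the function names and the squash formulation are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

def squash(s, eps=1e-8):
    # standard capsule nonlinearity: keeps the direction of s,
    # maps its norm into [0, 1)
    n = np.linalg.norm(s)
    return (n**2 / (1.0 + n**2)) * s / (n + eps)

def capsule_layer(x, W):
    # toy capsule transform: learnable linear map followed by squash
    return squash(W @ x)

def residual_capsule_layer(x, W):
    # identity shortcut around the capsule transform, ResNet-style,
    # so signal and gradients can pass through many stacked layers
    return x + capsule_layer(x, W)

rng = np.random.default_rng(1)
d = 8
x = rng.standard_normal(d)
W = rng.standard_normal((d, d))  # learnable weights (random stand-in)
y = residual_capsule_layer(x, W)
```

The shortcut means the layer defaults to the identity when the transform contributes nothing, which is the property that makes depth practical.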
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.