Variational Capsule Encoder
- URL: http://arxiv.org/abs/2010.09102v1
- Date: Sun, 18 Oct 2020 20:52:16 GMT
- Title: Variational Capsule Encoder
- Authors: Harish RaviPrakash, Syed Muhammad Anwar, Ulas Bagci
- Abstract summary: We propose a novel capsule-network-based variational encoder architecture, called Bayesian capsules (B-Caps), to modulate the mean and standard deviation of the sampling distribution in the latent space.
We hypothesized that this approach can learn a better representation of features in the latent space than traditional approaches.
Our results indicate the strength of capsule networks in representation learning, which had not previously been examined in the VAE setting.
- Score: 6.244396213953519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel capsule-network-based variational encoder
architecture, called Bayesian capsules (B-Caps), to modulate the mean and
standard deviation of the sampling distribution in the latent space. We
hypothesized that this approach can learn a better representation of features
in the latent space than traditional approaches. We tested this hypothesis by
using the learned latent variables for an image reconstruction task, where
different classes of the MNIST and Fashion-MNIST datasets were successfully
separated in the latent space by our proposed model. Our experimental results
show improved reconstruction and classification performance on both datasets,
adding credence to our hypothesis. We also showed that, as the latent space
dimension increases, the proposed B-Caps learns a better representation than
traditional variational auto-encoders (VAEs). Hence our results indicate the
strength of capsule networks in representation learning, which had not been
examined in the VAE setting before.
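The core idea of the abstract can be sketched in code: capsule pose vectors, rather than a plain dense layer, feed the heads that produce the mean and standard deviation used in the VAE reparameterization trick. This is a minimal illustrative sketch, not the authors' exact B-Caps architecture; the class name, layer sizes, and the toy single-layer "capsule" are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class BCapsEncoderSketch(nn.Module):
    """Hypothetical sketch of a capsule-style variational encoder.

    Assumption: a small capsule layer produces squashed pose vectors,
    and two linear heads map them to the mean and log-variance of the
    latent sampling distribution (the mechanism the abstract describes).
    """

    def __init__(self, in_dim=784, num_caps=16, caps_dim=8, latent_dim=32):
        super().__init__()
        # Toy "capsule" layer: one linear map reshaped into pose vectors.
        self.caps = nn.Linear(in_dim, num_caps * caps_dim)
        self.num_caps, self.caps_dim = num_caps, caps_dim
        self.to_mu = nn.Linear(num_caps * caps_dim, latent_dim)
        self.to_logvar = nn.Linear(num_caps * caps_dim, latent_dim)

    @staticmethod
    def squash(v, dim=-1, eps=1e-8):
        # Standard capsule squashing nonlinearity (Sabour et al., 2017):
        # shrinks short vectors toward 0 and long vectors toward unit length.
        sq = (v ** 2).sum(dim=dim, keepdim=True)
        return (sq / (1.0 + sq)) * v / (sq.sqrt() + eps)

    def forward(self, x):
        poses = self.caps(x).view(-1, self.num_caps, self.caps_dim)
        poses = self.squash(poses).flatten(1)
        mu, logvar = self.to_mu(poses), self.to_logvar(poses)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

enc = BCapsEncoderSketch()
z, mu, logvar = enc(torch.randn(4, 784))
```

Training would pair this encoder with a decoder and the usual ELBO loss (reconstruction term plus KL divergence to the standard normal prior), exactly as in a conventional VAE.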
Related papers
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z) - Vector Quantized Wasserstein Auto-Encoder [57.29764749855623]
We study learning deep discrete representations from the generative viewpoint.
We endow discrete distributions over sequences of codewords and learn a deterministic decoder that transports the distribution over the sequences of codewords to the data distribution.
We develop further theories to connect it with the clustering viewpoint of WS distance, allowing us to have a better and more controllable clustering solution.
arXiv Detail & Related papers (2023-02-12T13:51:36Z) - Trading Information between Latents in Hierarchical Variational
Autoencoders [8.122270502556374]
Variational Autoencoders (VAEs) were originally motivated as probabilistic generative models in which one performs approximate Bayesian inference.
The proposal of $\beta$-VAEs breaks this interpretation and generalizes VAEs to application domains beyond generative modeling.
We identify a general class of inference models for which one can split the rate into contributions from each layer, which can then be tuned independently.
arXiv Detail & Related papers (2023-02-09T18:56:11Z) - SAR Image Change Detection Based on Multiscale Capsule Network [33.524488071386415]
Traditional synthetic aperture radar image change detection methods face the challenges of speckle noise and deformation sensitivity.
We propose a Multiscale Capsule Network (Ms-CapsNet) to extract the discriminative information between the changed and unchanged pixels.
The effectiveness of the proposed Ms-CapsNet is verified on three real SAR datasets.
arXiv Detail & Related papers (2022-01-22T01:30:36Z) - Regularizing Variational Autoencoder with Diversity and Uncertainty
Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z) - ASPCNet: A Deep Adaptive Spatial Pattern Capsule Network for
Hyperspectral Image Classification [47.541691093680406]
This paper proposes an adaptive spatial pattern capsule network (ASPCNet) architecture.
It can rotate the sampling location of convolutional kernels on the basis of an enlarged receptive field.
Experiments on three public datasets demonstrate that ASPCNet can yield competitive performance with higher accuracies than state-of-the-art methods.
arXiv Detail & Related papers (2021-04-25T07:10:55Z) - Subspace Capsule Network [85.69796543499021]
SubSpace Capsule Network (SCN) exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity.
SCN can be applied to both discriminative and generative models without incurring computational overhead compared to CNN during test time.
arXiv Detail & Related papers (2020-02-07T17:51:56Z) - On the Replicability of Combining Word Embeddings and Retrieval Models [71.18271398274513]
We replicate recent experiments attempting to demonstrate an attractive hypothesis about the use of the Fisher kernel framework.
Specifically, the hypothesis was that the use of a mixture model of von Mises-Fisher (VMF) distributions would be beneficial because of the focus on cosine distances of both VMF and the vector space model.
arXiv Detail & Related papers (2020-01-13T19:01:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.