Improved Training of Sparse Coding Variational Autoencoder via Weight
Normalization
- URL: http://arxiv.org/abs/2101.09453v1
- Date: Sat, 23 Jan 2021 08:07:20 GMT
- Title: Improved Training of Sparse Coding Variational Autoencoder via Weight
Normalization
- Authors: Linxing Preston Jiang, Luciano de la Iglesia
- Abstract summary: We focus on a recently proposed model, the sparse coding variational autoencoder (SVAE).
We show that projection of the filters onto unit norm drastically increases the number of active filters.
Our results highlight the importance of weight normalization for learning sparse representation from data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning a generative model of visual information with sparse and
compositional features has been a challenge for both theoretical neuroscience
and machine learning communities. Sparse coding models have achieved great
success in explaining the receptive fields of mammalian primary visual cortex
with sparsely activated latent representation. In this paper, we focus on a
recently proposed model, the sparse coding variational autoencoder (SVAE) (Barello
et al., 2018), and show that the end-to-end training scheme of SVAE leads to a
large group of decoding filters not fully optimized with noise-like receptive
fields. We propose a few heuristics to improve the training of SVAE and show
that a unit $L_2$ norm constraint on the decoder is critical to produce sparse
coding filters. Such normalization can be interpreted as a form of local lateral
inhibition in the cortex. We verify this claim empirically on both natural
image patches and the MNIST dataset and show that projection of the filters onto
unit norm drastically increases the number of active filters. Our results
highlight the importance of weight normalization for learning sparse
representation from data and suggest a new way of reducing the number of
inactive latent components in VAE learning.
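As a rough illustration of the unit $L_2$ norm constraint described above, the sketch below (not the authors' released code; the layer sizes, names, and training-loop placement are assumptions) projects each decoding filter back onto the unit sphere after every optimizer step:

```python
import torch

# Hypothetical sizes: 128 latent components decoding 16x16 image patches.
latent_dim, patch_dim = 128, 256
decoder = torch.nn.Linear(latent_dim, patch_dim, bias=False)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def project_filters_to_unit_norm(layer: torch.nn.Linear) -> None:
    """Renormalize each decoding filter (weight column) to unit L2 norm."""
    with torch.no_grad():
        # weight has shape (patch_dim, latent_dim); each column is one filter.
        norms = layer.weight.norm(dim=0, keepdim=True).clamp_min(1e-8)
        layer.weight.div_(norms)

# Inside the training loop, after the usual VAE update:
#     loss.backward()
#     optimizer.step()
#     project_filters_to_unit_norm(decoder)
```

Applying the projection after the gradient step, rather than reparametrizing the weights, leaves the optimizer untouched and corresponds to the "projection of the filters onto unit norm" described in the abstract.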
Related papers
- Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for external training data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z) - Variational Bayes image restoration with compressive autoencoders [4.879530644978008]
Regularization of inverse problems is of paramount importance in computational imaging.
In this work, we first propose to use compressive autoencoders instead of state-of-the-art generative models.
As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm.
arXiv Detail & Related papers (2023-11-29T15:49:31Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - Vector Quantized Wasserstein Auto-Encoder [57.29764749855623]
We study learning deep discrete representations from the generative viewpoint.
We place discrete distributions over sequences of codewords and learn a deterministic decoder that transports these distributions to the data distribution.
We develop further theory to connect it with the clustering viewpoint of the Wasserstein (WS) distance, yielding a better and more controllable clustering solution.
arXiv Detail & Related papers (2023-02-12T13:51:36Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Consistency Regularization for Variational Auto-Encoders [14.423556966548544]
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning.
We propose a regularization method to enforce consistency in VAEs.
arXiv Detail & Related papers (2021-05-31T10:26:32Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of the VAE's lack of self-consistency for the learned representations, and the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - A New Modal Autoencoder for Functionally Independent Feature Extraction [6.690183908967779]
A new modal autoencoder (MAE) is proposed by orthogonalising the columns of the readout weight matrix.
The results were validated on the MNIST variations and USPS classification benchmark suite.
The new MAE introduces a very simple training principle for autoencoders and could be promising for the pre-training of deep neural networks.
arXiv Detail & Related papers (2020-06-25T13:25:10Z) - Simple and Effective VAE Training with Calibrated Decoders [123.08908889310258]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions.
We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution.
We propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically (a sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-06-23T17:57:47Z) - A generic ensemble based deep convolutional neural network for
semi-supervised medical image segmentation [7.141405427125369]
We propose a generic semi-supervised learning framework for image segmentation based on a deep convolutional neural network (DCNN).
Our method significantly improves on fully supervised learning by incorporating unlabeled data.
arXiv Detail & Related papers (2020-04-16T23:41:50Z)