A New Modal Autoencoder for Functionally Independent Feature Extraction
- URL: http://arxiv.org/abs/2006.14390v1
- Date: Thu, 25 Jun 2020 13:25:10 GMT
- Title: A New Modal Autoencoder for Functionally Independent Feature Extraction
- Authors: Yuzhu Guo, Kang Pan, Simeng Li, Zongchang Han, Kexin Wang and Li Li
- Abstract summary: A new modal autoencoder (MAE) is proposed by orthogonalising the columns of the readout weight matrix.
The results were validated on the MNIST variations and USPS classification benchmark suite.
The new MAE introduces a very simple training principle for autoencoders and could be promising for the pre-training of deep neural networks.
- Score: 6.690183908967779
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autoencoders have been widely used for dimensionality reduction and feature
extraction. Various types of autoencoders have been proposed by introducing
regularization terms. Most of these regularizations improve representation
learning by constraining the weights in the encoder, which maps the input into
hidden nodes and thereby shapes the generated features. In this study, we show
that a constraint on the decoder can also significantly improve performance
because the decoder determines how the latent variables contribute to the
reconstruction of input. Inspired by the structural modal analysis method in
mechanical engineering, a new modal autoencoder (MAE) is proposed by
orthogonalising the columns of the readout weight matrix. The new regularization
helps to disentangle the explanatory factors of variation and forces the MAE to
extract the fundamental modes of the data. The learned representations are functionally
independent in the reconstruction of the input and perform better in downstream
classification tasks. The results were validated on the MNIST variations and
USPS classification benchmark suite. Comparative experiments show that the new
algorithm has a clear advantage over the alternatives considered. The new MAE introduces a very
simple training principle for autoencoders and could be promising for the
pre-training of deep neural networks.
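The core idea lends itself to a short sketch. Below is a minimal PyTorch illustration of a decoder-side orthogonality regularizer, assuming the constraint is imposed as a soft Frobenius-norm penalty on the Gram matrix of the readout columns; the architecture, activation, and penalty weight `lam` are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalAutoencoder(nn.Module):
    """One-hidden-layer autoencoder with an orthogonality penalty on the decoder."""
    def __init__(self, in_dim=784, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_dim, in_dim)  # readout weight: (in_dim, hidden_dim)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

    def orthogonality_penalty(self):
        # Column j of the readout weight is the "mode" written out by latent j;
        # penalize deviations of the columns' Gram matrix from the identity.
        W = self.decoder.weight                       # (in_dim, hidden_dim)
        gram = W.t() @ W                              # (hidden_dim, hidden_dim)
        eye = torch.eye(gram.size(0), device=W.device)
        return ((gram - eye) ** 2).sum()              # ||W^T W - I||_F^2

model = ModalAutoencoder()
x = torch.rand(32, 784)                               # e.g. flattened MNIST digits
recon, _ = model(x)
lam = 1e-3                                            # illustrative penalty weight
loss = F.mse_loss(recon, x) + lam * model.orthogonality_penalty()
loss.backward()
```

Note that the Frobenius penalty also pushes the columns towards unit norm; if only pairwise orthogonality were wanted, the penalty could be restricted to the off-diagonal entries of the Gram matrix.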
Related papers
- Linear Recursive Feature Machines provably recover low-rank matrices [17.530511273384786]
We develop the first theoretical guarantees for how RFM performs dimensionality reduction.
We generalize the Iteratively Reweighted Least Squares (IRLS) algorithm (the classical iteration is sketched below).
Our results shed light on the connection between feature learning in neural networks and classical sparse recovery algorithms.
arXiv Detail & Related papers (2024-01-09T13:44:12Z)
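For orientation, here is the classical IRLS iteration that the paper generalizes, sketched in NumPy for the standard sparse-recovery problem min ||x||_1 subject to Ax = b. This is background only, not the paper's generalized, low-rank variant.

```python
import numpy as np

def irls_sparse_recovery(A, b, num_iters=50, eps=1e-6):
    """Classical IRLS for min ||x||_1 subject to Ax = b (underdetermined A)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # minimum-norm starting point
    for _ in range(num_iters):
        w = np.abs(x) + eps                      # coordinate weights from current iterate
        AW = A * w                               # equals A @ diag(w)
        # Weighted minimum-norm solution: x = diag(w) A^T (A diag(w) A^T)^{-1} b
        x = AW.T @ np.linalg.solve(AW @ A.T, b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = irls_sparse_recovery(A, A @ x_true)      # recovers the sparse x_true
```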
- Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning [18.10704604275133]
Masked Autoencoders (MAE) have demonstrated promising performance in self-supervised learning for 2D and 3D computer vision.
We propose Point Regress AutoEncoder (Point-RAE), a new scheme for regressive autoencoders for point cloud self-supervised learning.
Our approach is efficient during pre-training and generalizes well on various downstream tasks.
arXiv Detail & Related papers (2023-09-25T17:23:33Z)
- Uncovering mesa-optimization algorithms in Transformers [61.06055590704677]
Some autoregressive models can learn as an input sequence is processed, without undergoing any parameter changes, and without being explicitly trained to do so.
We show that standard next-token prediction error minimization gives rise to a subsidiary learning algorithm that adjusts the model as new inputs are revealed.
Our findings explain in-context learning as a product of autoregressive loss minimization and inform the design of new optimization-based Transformer layers.
arXiv Detail & Related papers (2023-09-11T22:42:50Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models. A sketch of the quantization step follows this entry.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
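One way to realize such an organized latent space is sketched below, under the assumption that each latent dimension is snapped to its own small learnable scalar codebook with a straight-through gradient; the paper's accompanying losses and regularizers are omitted.

```python
import torch
import torch.nn as nn

class LatentQuantizer(nn.Module):
    """Snap each latent dimension to its own small learnable scalar codebook."""
    def __init__(self, latent_dim=16, codes_per_dim=10):
        super().__init__()
        # One codebook of scalar values per latent dimension: (latent_dim, codes_per_dim).
        self.codebooks = nn.Parameter(
            torch.linspace(-1.0, 1.0, codes_per_dim).repeat(latent_dim, 1))

    def forward(self, z):                          # z: (batch, latent_dim)
        dist = (z.unsqueeze(-1) - self.codebooks.unsqueeze(0)).abs()  # (B, D, C)
        idx = dist.argmin(dim=-1, keepdim=True)                       # nearest code
        codes = self.codebooks.expand(z.size(0), -1, -1)              # (B, D, C)
        z_q = torch.gather(codes, 2, idx).squeeze(-1)                 # (B, D)
        # Straight-through: gradients flow to z; training the codebook values
        # would require an additional quantization loss, omitted here.
        return z + (z_q - z).detach()

z_q = LatentQuantizer()(torch.randn(8, 16))        # quantized latents, same shape
```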
- Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods [91.54785981649228]
This paper focuses on non-linear two-layer autoencoders trained in the challenging proportional regime.
Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods.
For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders.
arXiv Detail & Related papers (2022-12-27T12:37:34Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAE for the task.
In our experiments, the proposed VAE model particularly performs well for generating a sample from out-of-domain distribution.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- Consistency Regularization for Variational Auto-Encoders [14.423556966548544]
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning.
We propose a regularization method to enforce consistency in VAEs (see the sketch below).
arXiv Detail & Related papers (2021-05-31T10:26:32Z)
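A plausible form of such a consistency term, assuming it compares the approximate posteriors of an input and a semantics-preserving augmentation via a KL divergence; `encode` and `augment` are assumed hooks, and detaching the clean posterior is an illustrative choice rather than the paper's prescription.

```python
import torch

def gaussian_kl(mu_p, logvar_p, mu_q, logvar_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) for diagonal Gaussians, summed over dims."""
    var_p, var_q = logvar_p.exp(), logvar_q.exp()
    return 0.5 * (logvar_q - logvar_p + (var_p + (mu_p - mu_q) ** 2) / var_q - 1).sum(-1)

def consistency_loss(encode, augment, x):
    # `encode` returns the posterior parameters (mu, logvar); `augment` is any
    # semantics-preserving transformation. Both are assumed hooks, not a fixed API.
    mu, logvar = encode(x)
    mu_aug, logvar_aug = encode(augment(x))
    # Pull the augmented posterior towards the (detached) clean posterior.
    return gaussian_kl(mu_aug, logvar_aug, mu.detach(), logvar.detach()).mean()
```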
- Improved Training of Sparse Coding Variational Autoencoder via Weight Normalization [0.0]
We focus on a recently proposed model, the sparse coding variational autoencoder (SVAE).
We show that projecting the filters onto the unit norm drastically increases the number of active filters (sketched below).
Our results highlight the importance of weight normalization for learning sparse representation from data.
arXiv Detail & Related papers (2021-01-23T08:07:20Z)
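The projection step itself is simple. A minimal sketch, assuming the filters are the columns of a linear readout layer and the projection is applied after each optimizer step:

```python
import torch

@torch.no_grad()
def project_to_unit_norm(linear):
    """Renormalize each filter (a column of the readout weight) to unit L2 norm."""
    w = linear.weight                               # shape: (out_features, in_features)
    w.div_(w.norm(dim=0, keepdim=True).clamp_min(1e-12))

# Assumed usage inside the training loop:
#   optimizer.step()
#   project_to_unit_norm(model.decoder)
```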
- Autoencoding Variational Autoencoder [56.05008520271406]
A nominally trained VAE does not necessarily encode typical samples generated from its own decoder. We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Simple and Effective VAE Training with Calibrated Decoders [123.08908889310258]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions.
We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution.
We propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically (sketched below).
arXiv Detail & Related papers (2020-06-23T17:57:47Z)
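A sketch of an analytically calibrated Gaussian decoder in this spirit, assuming a single shared variance whose maximum-likelihood value is the reconstruction MSE; detaching the variance is a simplification (at the optimum the gradient through it vanishes anyway).

```python
import torch

def calibrated_gaussian_nll(x, recon):
    """Mean Gaussian reconstruction NLL with an analytically calibrated shared variance."""
    mse = ((x - recon) ** 2).mean()            # MLE of the shared variance is the MSE
    sigma2 = mse.detach().clamp_min(1e-8)      # treat the calibrated variance as a constant
    d = x[0].numel()                           # data dimensions per example
    # Mean per-example NLL of N(recon, sigma2 * I), up to the 0.5*d*log(2*pi) constant.
    return 0.5 * d * (mse / sigma2 + sigma2.log())

x = torch.rand(32, 784)
recon = torch.rand(32, 784, requires_grad=True)
loss = calibrated_gaussian_nll(x, recon)       # balances against the KL term in a VAE
loss.backward()
```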
This list is automatically generated from the titles and abstracts of the papers on this site.