On a Mechanism Framework of Autoencoders
- URL: http://arxiv.org/abs/2208.06995v1
- Date: Mon, 15 Aug 2022 03:51:40 GMT
- Title: On a Mechanism Framework of Autoencoders
- Authors: Changcun Huang
- Abstract summary: This paper proposes a theoretical framework on the mechanism of autoencoders.
Results of ReLU autoencoders are generalized to some non-ReLU cases.
Compared to PCA and decision trees, the advantages of (generalized) autoencoders on dimensionality reduction and classification are demonstrated.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a theoretical framework on the mechanism of autoencoders.
For the encoder, whose main use is dimensionality reduction, we
investigate its two fundamental properties: bijective maps and data
disentangling. General methods for constructing an encoder that satisfies
either or both of these properties are given. For the decoder, as a
consequence of the encoder constructions, we present a new basic principle of
the solution that does not use affine transforms. The generalization mechanism of
autoencoders is modeled. The results of ReLU autoencoders are generalized to
some non-ReLU cases, particularly for the sigmoid-unit autoencoder. Based on
the theoretical framework above, we explain some experimental results of
variational autoencoders, denoising autoencoders, and linear-unit autoencoders,
with emphasis on the interpretation of the lower-dimensional representation of
data via encoders; the mechanism of image restoration through autoencoders
then follows naturally from these explanations. Compared to PCA and decision
trees, the advantages of (generalized) autoencoders in dimensionality reduction
and classification, respectively, are demonstrated. Convolutional neural
networks and randomly weighted neural networks are also interpreted by this
framework.
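To make the encoder/decoder roles above concrete, here is a minimal NumPy sketch of a ReLU autoencoder used for dimensionality reduction. The dimensions and the untrained random weights are illustrative assumptions, not constructions from the paper.

```python
# Minimal ReLU autoencoder sketch (NumPy): an encoder maps data to a
# lower-dimensional code and a decoder reconstructs the input.
# All names and dimensions here are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_code = 8, 2                      # ambient and code dimensions

# Randomly initialized weights; a real autoencoder would train these
# to minimize reconstruction error.
W_enc = rng.normal(scale=0.5, size=(d_code, d_in))
b_enc = np.zeros(d_code)
W_dec = rng.normal(scale=0.5, size=(d_in, d_code))
b_dec = np.zeros(d_in)

def encode(x):
    return np.maximum(0.0, W_enc @ x + b_enc)   # ReLU units

def decode(z):
    return W_dec @ z + b_dec                    # linear readout

x = rng.normal(size=d_in)
z = encode(x)            # lower-dimensional representation
x_hat = decode(z)        # reconstruction
print(z.shape, np.linalg.norm(x - x_hat))
```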
Related papers
- $ε$-VAE: Denoising as Visual Decoding [61.29255979767292]
In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space.
Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations, and the decoder reconstructs the original input.
We propose denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder.
We evaluate our approach by assessing both reconstruction (rFID) and generation quality.
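As a rough illustration of decoding-by-denoising, the toy loop below starts from noise and iteratively refines it toward a latent-conditioned target. The linear `denoise_step` and the conditioning map `U` are hypothetical stand-ins for a learned diffusion model.

```python
# Toy sketch of "denoising as decoding": instead of one decoder pass,
# start from noise and iteratively refine it, conditioned on the latent z.
# The linear `denoise_step` is a stand-in for a learned diffusion model.
import numpy as np

rng = np.random.default_rng(0)
d_img, d_code, n_steps = 16, 4, 10

U = rng.normal(scale=0.3, size=(d_img, d_code))  # hypothetical conditioning map

def denoise_step(x_t, z, t):
    # Pull the current iterate toward a latent-conditioned target;
    # a real model would predict the noise with a trained network.
    target = U @ z
    alpha = (t + 1) / n_steps                    # refinement schedule
    return (1 - alpha) * x_t + alpha * target

z = rng.normal(size=d_code)          # latent provided by the encoder
x = rng.normal(size=d_img)           # start from pure noise
for t in range(n_steps):
    x = denoise_step(x, z, t)        # iterative refinement
print(np.linalg.norm(x - U @ z))     # converges toward the conditioned target
```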
arXiv Detail & Related papers (2024-10-05T08:27:53Z)
- Generalization Bounds for Neural Belief Propagation Decoders [10.96453955114324]
In this paper, we investigate the generalization capabilities of neural network based decoders.
Specifically, the generalization gap of a decoder is the difference between its empirical and expected bit-error-rates.
Results are presented for both regular and irregular parity-check matrices.
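To pin down the quantity being bounded, a small sketch: the empirical bit-error-rate on a finite sample minus the expected bit-error-rate, here Monte-Carlo estimated on fresh samples. The hard-threshold `decoder` is a placeholder, not a neural belief propagation decoder.

```python
# Sketch of the generalization gap of a decoder: empirical bit-error-rate
# on the training set minus the (estimated) expected bit-error-rate on
# fresh samples. The "decoder" below is a trivial hard-threshold stand-in.
import numpy as np

rng = np.random.default_rng(0)

def decoder(y):
    return (y > 0.5).astype(int)     # placeholder, not a neural BP decoder

def bit_error_rate(bits, y):
    return np.mean(decoder(y) != bits)

n_bits, n_train, n_test = 32, 100, 100_000
bits_tr = rng.integers(0, 2, size=(n_train, n_bits))
bits_te = rng.integers(0, 2, size=(n_test, n_bits))
noise = lambda b: b + rng.normal(scale=0.4, size=b.shape)  # noisy channel

empirical_ber = bit_error_rate(bits_tr, noise(bits_tr))
expected_ber = bit_error_rate(bits_te, noise(bits_te))     # Monte-Carlo estimate
print("generalization gap ≈", empirical_ber - expected_ber)
```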
arXiv Detail & Related papers (2023-05-17T19:56:04Z)
- Decoder-Only or Encoder-Decoder? Interpreting Language Model as a Regularized Encoder-Decoder [75.03283861464365]
The seq2seq task aims at generating the target sequence based on the given input source sequence.
Traditionally, the seq2seq task is addressed with an encoder that encodes the source sequence and a decoder that generates the target text.
Recently, a number of new approaches have emerged that apply decoder-only language models directly to the seq2seq task.
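The two layouts can be contrasted with a toy token-stream sketch; the token IDs and the `SEP` separator below are illustrative assumptions.

```python
# Sketch of the two seq2seq layouts. An encoder-decoder model consumes the
# source and target separately; a decoder-only LM sees one concatenated
# stream and predicts the target continuation. IDs are illustrative.
SEP = 0                                   # hypothetical separator token

source = [5, 7, 9]                        # source token sequence
target = [2, 4]                           # target token sequence

# Encoder-decoder: encoder reads `source`, decoder generates `target`.
enc_input, dec_input = source, [SEP] + target[:-1]

# Decoder-only: a single causal stream; the loss is taken only on the
# positions that continue past the separator.
stream = source + [SEP] + target
loss_positions = range(len(source) + 1, len(stream))
print(stream, list(loss_positions))
```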
arXiv Detail & Related papers (2023-04-08T15:44:29Z)
- Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods [91.54785981649228]
This paper focuses on non-linear two-layer autoencoders trained in the challenging proportional regime.
Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods.
For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders.
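For intuition on the sign-activation case, here is a sketch of compressing a Gaussian source into one-bit codes with a shallow random-weight autoencoder. These random weights are an assumption for illustration, not the population-risk minimizers the paper characterizes.

```python
# Sketch of lossy compression of a Gaussian source with a shallow
# sign-activation autoencoder: the code is a vector of signs, so each
# code coordinate costs one bit. Weights are random, not risk minimizers.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 64, 16, 1000                       # input dim, code bits, samples

W = rng.normal(size=(k, d)) / np.sqrt(d)     # random encoder weights
X = rng.normal(size=(n, d))                  # Gaussian source

Z = np.sign(X @ W.T)                         # sign activation: 1 bit per unit
recon = Z @ W                                # naive linear decoding
c = np.sum(X * recon) / np.sum(recon ** 2)   # best scalar readout gain
mse = np.mean((X - c * recon) ** 2)
print(f"{k} bits per {d}-dim sample, per-coordinate MSE ≈ {mse:.3f}")
```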
arXiv Detail & Related papers (2022-12-27T12:37:34Z)
- Disentangling Autoencoders (DAE) [0.0]
We propose a novel framework for autoencoders based on the principles of symmetry transformations in group-theory.
We believe that this model opens a new field of disentanglement learning based on autoencoders without regularizers.
arXiv Detail & Related papers (2022-02-20T22:59:13Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
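This summary does not spell out the exact penalty, so the sketch below shows one plausible decorrelation-style redundancy penalty on the bottleneck: drive the off-diagonal entries of the code's covariance matrix to zero. The loss form is an assumption, not necessarily the paper's scheme.

```python
# Illustrative redundancy penalty for bottleneck activations: duplicated
# or correlated features inflate the off-diagonal covariance entries.
import numpy as np

def redundancy_penalty(Z):
    """Z: (batch, code_dim) bottleneck activations."""
    Zc = Z - Z.mean(axis=0)                       # center each feature
    cov = (Zc.T @ Zc) / (len(Z) - 1)              # feature covariance
    off_diag = cov - np.diag(np.diag(cov))
    return np.sum(off_diag ** 2)                  # redundancy measure

rng = np.random.default_rng(0)
Z = rng.normal(size=(128, 8))
Z_redundant = np.hstack([Z[:, :4], Z[:, :4]])     # duplicated features
print(redundancy_penalty(Z), redundancy_penalty(Z_redundant))
```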
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation [98.05643473345474]
We propose a novel decoder, termed the dynamic neural representational decoder (NRD).
As each location on the encoder's output corresponds to a local patch of the semantic labels, in this work, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
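A hedged sketch of the idea: each encoder location emits the parameters of a tiny MLP, and that MLP renders its local label patch from pixel coordinates. The weight layout, shapes, and random parameter vector below are assumptions for illustration.

```python
# Sketch of a dynamic decoder: per-location predicted weights pack a tiny
# MLP that maps patch coordinates to class logits for that label patch.
import numpy as np

rng = np.random.default_rng(0)
patch, hidden, n_cls = 4, 8, 3            # patch size, tiny-MLP width, classes

# Per-location weight vector emitted by the encoder (random stand-in here).
n_params = 2 * hidden + hidden + hidden * n_cls + n_cls
theta = rng.normal(scale=0.5, size=n_params)

def render_patch(theta):
    """Decode one patch of labels with the tiny MLP packed in `theta`."""
    i = 0
    W1 = theta[i:i + 2 * hidden].reshape(hidden, 2); i += 2 * hidden
    b1 = theta[i:i + hidden]; i += hidden
    W2 = theta[i:i + hidden * n_cls].reshape(n_cls, hidden); i += hidden * n_cls
    b2 = theta[i:]
    ys, xs = np.meshgrid(np.linspace(0, 1, patch), np.linspace(0, 1, patch))
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)   # (patch*patch, 2)
    h = np.maximum(0.0, coords @ W1.T + b1)               # tiny ReLU layer
    logits = h @ W2.T + b2
    return logits.argmax(axis=1).reshape(patch, patch)    # label patch

print(render_patch(theta))
```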
arXiv Detail & Related papers (2021-07-30T04:50:56Z)
- Cascade Decoders-Based Autoencoders for Image Reconstruction [2.924868086534434]
This paper addresses image reconstruction with autoencoders, employing cascade decoder-based architectures.
The proposed serial-decoder autoencoders comprise multi-level decoder architectures and the related optimization algorithms.
Experimental results show that the proposed autoencoders outperform classical autoencoders in image reconstruction.
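A minimal sketch of the cascade idea, assuming each later decoder refines the previous stage's reconstruction from the shared code; the refinement rule and the exact multi-level design here are assumptions, not the paper's architecture.

```python
# Sketch of a cascade-decoder autoencoder: one encoder feeds a chain of
# decoders, each stage refining the previous reconstruction.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_code, n_decoders = 16, 4, 3

W_enc = rng.normal(scale=0.3, size=(d_code, d_in))
decoders = [rng.normal(scale=0.3, size=(d_in, d_code)) for _ in range(n_decoders)]
refiners = [rng.normal(scale=0.1, size=(d_in, d_in)) for _ in range(n_decoders - 1)]

x = rng.normal(size=d_in)
z = np.maximum(0.0, W_enc @ x)               # shared code
x_hat = decoders[0] @ z                      # first-level reconstruction
for W_dec, R in zip(decoders[1:], refiners):
    x_hat = x_hat + R @ (W_dec @ z - x_hat)  # each stage refines the last
print(np.linalg.norm(x - x_hat))
```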
arXiv Detail & Related papers (2021-06-29T23:40:54Z)
- A New Modal Autoencoder for Functionally Independent Feature Extraction [6.690183908967779]
A new modal autoencoder (MAE) is proposed by orthogonalising the columns of the readout weight matrix.
The results were validated on the MNIST variations and USPS classification benchmark suite.
The new MAE introduces a very simple training principle for autoencoders and could be promising for the pre-training of deep neural networks.
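One reading of the orthogonalisation step, sketched below: penalize non-zero inner products between columns of the readout weight matrix, so each column carries a functionally independent feature. This penalty form is an assumption, not necessarily the MAE's exact procedure.

```python
# Illustrative column-orthogonality penalty on a readout weight matrix W:
# off-diagonal Gram entries measure overlap between extracted features.
import numpy as np

def column_orthogonality_penalty(W):
    G = W.T @ W                                  # Gram matrix of columns
    return np.sum((G - np.diag(np.diag(G))) ** 2)

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 8))
Q, _ = np.linalg.qr(W)                           # orthonormal columns
print(column_orthogonality_penalty(W), column_orthogonality_penalty(Q))
```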
arXiv Detail & Related papers (2020-06-25T13:25:10Z)
- Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions, with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
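A hedged sketch of what a relational regularizer can look like: match the pairwise-distance structure of the latent codes to that of the inputs, so relations between samples survive encoding. This is an illustrative loss, not necessarily the paper's exact formulation.

```python
# Toy "relational" regularizer: compare normalized pairwise-distance
# matrices of the inputs X and their latent codes Z.
import numpy as np

def pairwise_dists(A):
    sq = np.sum(A ** 2, axis=1)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * A @ A.T, 0.0))

def relational_loss(X, Z):
    Dx, Dz = pairwise_dists(X), pairwise_dists(Z)
    Dx, Dz = Dx / Dx.mean(), Dz / Dz.mean()      # scale-invariant comparison
    return np.mean((Dx - Dz) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
Z = X @ rng.normal(size=(10, 3))                 # toy linear encoder
print(relational_loss(X, Z))
```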
arXiv Detail & Related papers (2020-02-07T17:27:30Z)