Is an encoder within reach?
- URL: http://arxiv.org/abs/2206.01552v1
- Date: Fri, 3 Jun 2022 13:06:22 GMT
- Title: Is an encoder within reach?
- Authors: Helene Hauschultz, Rasmus Berg Palm, Pablo Moreno-Muñoz, Nicki
Skafte Detlefsen, Andrew Allan du Plessis, Søren Hauberg
- Abstract summary: We introduce the idea of using the reach of the manifold spanned by the decoder to determine if an optimal encoder exists for a given dataset and decoder.
We demonstrate that this allows us to determine which observations can be expected to have a unique, and thereby trustworthy, latent representation.
- Score: 3.9548535445908928
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The encoder network of an autoencoder is an approximation of the nearest
point projection onto the manifold spanned by the decoder. A concern with this
approximation is that, while the output of the encoder is always unique, the
projection can possibly have infinitely many values. This implies that the
latent representations learned by the autoencoder can be misleading. Borrowing
from geometric measure theory, we introduce the idea of using the reach of the
manifold spanned by the decoder to determine if an optimal encoder exists for a
given dataset and decoder. We develop a local generalization of this reach and
propose a numerical estimator thereof. We demonstrate that this allows us to
determine which observations can be expected to have a unique, and thereby
trustworthy, latent representation. As our local reach estimator is
differentiable, we investigate its usage as a regularizer and show that this
leads to learned manifolds for which projections are more often unique than
without regularization.
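The abstract's central objects, the decoder manifold, the nearest-point projection, and the reach, can be made concrete with a toy decoder. The sketch below is an illustration under assumed names, not the paper's estimator: it uses a hypothetical one-dimensional decoder whose image is a circle of radius 2 in R^2. The circle's reach equals its radius, so observations closer than 2 to the manifold have a unique projection, while the centre is equidistant from every manifold point and has none.

```python
import numpy as np

# Hypothetical decoder: maps a 1-D latent z to a circle of radius 2 in R^2.
# The reach of this manifold is the radius: points within distance 2 of the
# circle project uniquely onto it; the centre does not.
def decoder(z):
    return np.array([2.0 * np.cos(z), 2.0 * np.sin(z)])

def project(x, n_starts=32, n_steps=200, lr=0.05):
    """Approximate the nearest-point projection of x onto the decoded
    manifold by gradient descent on z from several restarts."""
    best_z, best_d = 0.0, np.inf
    for z0 in np.linspace(0.0, 2.0 * np.pi, n_starts, endpoint=False):
        z = z0
        for _ in range(n_steps):
            # d/dz ||decoder(z) - x||^2 for the circle decoder above
            grad = 2.0 * (-2.0 * np.sin(z) * (2.0 * np.cos(z) - x[0])
                          + 2.0 * np.cos(z) * (2.0 * np.sin(z) - x[1]))
            z -= lr * grad
        d = np.linalg.norm(decoder(z) - x)
        if d < best_d:
            best_z, best_d = z, d
    return best_z, best_d
```

Calling `project` on the point (3, 0) recovers the unique nearest manifold point (2, 0) at distance 1. Calling it on the centre (0, 0) returns distance 2 from every restart, since all latent codes are equally close: exactly the kind of ambiguous observation that the paper's local reach estimator is designed to flag.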
Related papers
- Almost Linear Decoder for Optimal Geometrically Local Quantum Codes [8.837439668920288]
We show how to achieve geometrically local codes that maximize the dimension and the distance, as well as the energy barrier of the code.
This provides the first decoder for an optimal 3D geometrically local code.
arXiv Detail & Related papers (2024-11-05T09:15:06Z)
- Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving [74.28510044056706]
Existing methods usually adopt the decoupled encoder-decoder paradigm.
In this work, we aim to alleviate the problem by two principles.
We first predict a coarse-grained future position and action based on the encoder features.
Then, conditioned on the position and action, the future scene is imagined to check the ramification if we drive accordingly.
arXiv Detail & Related papers (2023-05-10T15:22:02Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAEs for the task.
In our experiments, the proposed VAE model performs particularly well at generating samples from an out-of-domain distribution.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- When Counting Meets HMER: Counting-Aware Network for Handwritten Mathematical Expression Recognition [57.51793420986745]
We propose an unconventional network for handwritten mathematical expression recognition (HMER) named Counting-Aware Network (CAN).
We design a weakly-supervised counting module that can predict the number of each symbol class without the symbol-level position annotations.
Experiments on the benchmark datasets for HMER validate that both joint optimization and counting results are beneficial for correcting the prediction errors of encoder-decoder models.
arXiv Detail & Related papers (2022-07-23T08:39:32Z)
- StolenEncoder: Stealing Pre-trained Encoders [62.02156378126672]
We propose the first attack called StolenEncoder to steal pre-trained image encoders.
Our results show that the encoders stolen by StolenEncoder have similar functionality to the target encoders.
arXiv Detail & Related papers (2022-01-15T17:04:38Z)
- Dense Coding with Locality Restriction for Decoder: Quantum Encoders vs. Super-Quantum Encoders [67.12391801199688]
We investigate dense coding by imposing various locality restrictions to our decoder.
In this task, the sender Alice and the receiver Bob share an entangled state.
arXiv Detail & Related papers (2021-09-26T07:29:54Z)
- Pulling back information geometry [3.0273878903284266]
We show that we can achieve meaningful latent geometries for a wide range of decoder distributions.
arXiv Detail & Related papers (2021-06-09T20:16:28Z)
- Variational Autoencoder-Based Vehicle Trajectory Prediction with an Interpretable Latent Space [0.0]
This paper introduces the Descriptive Variational Autoencoder (DVAE), an unsupervised and end-to-end trainable neural network for predicting vehicle trajectories.
The proposed model provides a similar prediction accuracy but with the great advantage of having an interpretable latent space.
arXiv Detail & Related papers (2021-03-25T10:15:53Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.