Supervising the Decoder of Variational Autoencoders to Improve Scientific Utility
- URL: http://arxiv.org/abs/2109.04561v1
- Date: Thu, 9 Sep 2021 20:55:38 GMT
- Authors: Liyun Tu, Austin Talbot, Neil Gallagher, David Carlson
- Abstract summary: Probabilistic generative models are attractive for scientific modeling because their inferred parameters can be used to generate hypotheses and design experiments.
Supervised Variational Autoencoders (SVAEs) have previously been used for this purpose.
We develop a second order supervision framework (SOS-VAE) that influences the decoder to induce a predictive latent representation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probabilistic generative models are attractive for scientific modeling
because their inferred parameters can be used to generate hypotheses and design
experiments. This requires that the learned model provide an accurate
representation of the input data and yield a latent space that effectively
predicts outcomes relevant to the scientific question. Supervised Variational
Autoencoders (SVAEs) have previously been used for this purpose, where a
carefully designed decoder can be used as an interpretable generative model
while the supervised objective ensures a predictive latent representation.
Unfortunately, the supervised objective forces the encoder to learn a biased
approximation to the generative posterior distribution, which renders the
generative parameters unreliable when used in scientific models. This issue has
remained undetected as reconstruction losses commonly used to evaluate model
performance do not detect bias in the encoder. We address this
previously-unreported issue by developing a second order supervision framework
(SOS-VAE) that influences the decoder to induce a predictive latent
representation. This ensures that the associated encoder maintains a reliable
generative interpretation. We extend this technique to allow the user to
trade-off some bias in the generative parameters for improved predictive
performance, acting as an intermediate option between SVAEs and our new
SOS-VAE. We also use this methodology to address missing data issues that often
arise when combining recordings from multiple scientific experiments. We
demonstrate the effectiveness of these developments using synthetic data and
electrophysiological recordings with an emphasis on how our learned
representations can be used to design scientific experiments.
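The central distinction in the abstract, namely where the supervision signal attaches, can be illustrated with a deliberately simplified linear-Gaussian sketch. Everything below (the linear encoder and decoder, the linear supervision head `w_s`, and the exact objectives) is an illustrative assumption, not the paper's actual SOS-VAE implementation: in an SVAE the supervised term acts on the encoder's latent code and so biases the posterior approximation, whereas in a second-order scheme the supervised term acts on a latent implied by the generative model alone, so the predictive pressure lands on the decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian model: x = W_d z + eps, z ~ N(0, I), eps ~ N(0, I).
# The encoder mean z = W_e x approximates the generative posterior.
# All names and objectives are illustrative simplifications.
W_e = rng.normal(size=(2, 5)) * 0.1   # encoder weights (x -> z)
W_d = rng.normal(size=(5, 2)) * 0.1   # decoder (generative) weights (z -> x)
w_s = rng.normal(size=(2,))           # linear supervision head (z -> y)
x = rng.normal(size=5)
y = 1.0

def neg_elbo(W_e, W_d, x):
    z = W_e @ x                       # encoder's latent mean
    recon = np.sum((x - W_d @ z) ** 2)
    kl = 0.5 * np.sum(z ** 2)         # crude KL toward N(0, I)
    return recon + kl

def svae_loss(W_e, W_d, x, y, lam=1.0):
    # SVAE-style: supervision attaches to the encoder's latent, so its
    # gradient reshapes the encoder and can bias it away from the
    # generative posterior.
    z = W_e @ x
    return neg_elbo(W_e, W_D := W_d, x) + lam * (w_s @ z - y) ** 2

def sos_vae_loss(W_e, W_d, x, y, lam=1.0):
    # SOS-VAE (schematic): supervision is evaluated on the latent implied
    # by the generative model itself, here the exact posterior mean of the
    # linear-Gaussian decoder, so predictive pressure lands on W_d while
    # the encoder keeps minimizing only the ELBO.
    z_gen = np.linalg.solve(W_d.T @ W_d + np.eye(2), W_d.T @ x)
    return neg_elbo(W_e, W_d, x) + lam * (w_s @ z_gen - y) ** 2
```

In this toy setting the supervised term of `sos_vae_loss` depends only on the decoder weights `W_d` and the data, not on `W_e`, which is the schematic point: the encoder is never pulled away from approximating the generative posterior.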
Related papers
- Predictive variational autoencoder for learning robust representations of time-series data [0.0]
We propose a VAE architecture that predicts the next point in time and show that it mitigates the learning of spurious features.
We show that these two constraints, which together encourage the VAE representation to be smooth over time, produce robust latent representations and faithfully recover latent factors on synthetic datasets.
arXiv Detail & Related papers (2023-12-12T02:06:50Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
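The linear conditional prior described above can be sketched as follows; the operator `A`, the noise scale, and the latent dimension are illustrative assumptions rather than KoVAE's actual parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic Koopman-style latent prior: the conditional prior mean evolves
# by a single linear map A, i.e. z_{t+1} ~ N(A z_t, sigma^2 I). In the real
# model A would be learned jointly with the VAE; here it is random.
A = rng.normal(size=(3, 3)) * 0.3    # stand-in linear Koopman operator
sigma = 0.1                          # prior noise scale

def sample_prior_path(z0, steps):
    # Roll the linear dynamics forward, adding Gaussian prior noise.
    zs = [z0]
    for _ in range(steps):
        zs.append(A @ zs[-1] + sigma * rng.normal(size=z0.shape))
    return np.stack(zs)              # shape: (steps + 1, latent_dim)

path = sample_prior_path(np.ones(3), steps=10)
```

The appeal of such a prior is that linear latent dynamics are easy to analyze (stability, spectra) while the nonlinear encoder and decoder still allow complex observed dynamics.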
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
- Koopman Invertible Autoencoder: Leveraging Forward and Backward Dynamics for Temporal Modeling [13.38194491846739]
We propose a novel machine learning model based on Koopman operator theory, which we call Koopman Invertible Autoencoders (KIA).
KIA captures the inherent characteristic of the system by modeling both forward and backward dynamics in the infinite-dimensional Hilbert space.
This enables us to efficiently learn low-dimensional representations, resulting in more accurate predictions of long-term system behavior.
arXiv Detail & Related papers (2023-09-19T03:42:55Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Entropy optimized semi-supervised decomposed vector-quantized variational autoencoder model based on transfer learning for multiclass text classification and generation [3.9318191265352196]
We propose a semi-supervised discrete latent variable model for multi-class text classification and text generation.
The proposed model uses transfer learning to train a quantized transformer model.
Experimental results indicate that the proposed model substantially outperforms state-of-the-art models.
arXiv Detail & Related papers (2021-11-10T07:07:54Z)
- DAE : Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It builds on a standard LSTM-based auto-encoder but uses several decoders, each receiving data from a specific flight phase.
Results show that the DAE performs better in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
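The self-consistency notion can be sketched as a latent cycle penalty: re-encode a decoded sample and penalize the distance to the original code. The linear maps below are toy stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
W_e = rng.normal(size=(2, 5)) * 0.2   # toy linear encoder (x -> z)
W_d = rng.normal(size=(5, 2)) * 0.2   # toy linear decoder (z -> x)

def self_consistency_penalty(z):
    # Decode z, then re-encode the result. A self-consistent encoder should
    # map dec(z) back to (approximately) the same latent code z.
    z_cycle = W_e @ (W_d @ z)
    return np.sum((z_cycle - z) ** 2)

z = rng.normal(size=2)
penalty = self_consistency_penalty(z)
```

Minimizing such a penalty ties the encoder to the decoder's generative behavior rather than only to the training inputs, which is one intuition for why it could help against input perturbations.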
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.