Supervising the Decoder of Variational Autoencoders to Improve
Scientific Utility
- URL: http://arxiv.org/abs/2109.04561v1
- Date: Thu, 9 Sep 2021 20:55:38 GMT
- Title: Supervising the Decoder of Variational Autoencoders to Improve
Scientific Utility
- Authors: Liyun Tu, Austin Talbot, Neil Gallagher, David Carlson
- Abstract summary: Probabilistic generative models are attractive for scientific modeling because their inferred parameters can be used to generate hypotheses and design experiments.
Supervised Variational Autoencoders (SVAEs) have previously been used for this purpose.
We develop a second-order supervision framework (SOS-VAE) that influences the decoder to induce a predictive latent representation.
- Score: 1.014927488137914
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probabilistic generative models are attractive for scientific modeling
because their inferred parameters can be used to generate hypotheses and design
experiments. This requires that the learned model provide an accurate
representation of the input data and yield a latent space that effectively
predicts outcomes relevant to the scientific question. Supervised Variational
Autoencoders (SVAEs) have previously been used for this purpose, where a
carefully designed decoder can be used as an interpretable generative model
while the supervised objective ensures a predictive latent representation.
Unfortunately, the supervised objective forces the encoder to learn a biased
approximation to the generative posterior distribution, which renders the
generative parameters unreliable when used in scientific models. This issue has
remained undetected as reconstruction losses commonly used to evaluate model
performance do not detect bias in the encoder. We address this
previously unreported issue by developing a second-order supervision framework
(SOS-VAE) that influences the decoder to induce a predictive latent
representation. This ensures that the associated encoder maintains a reliable
generative interpretation. We extend this technique to allow the user to
trade off some bias in the generative parameters for improved predictive
performance, acting as an intermediate option between SVAEs and our new
SOS-VAE. We also use this methodology to address missing data issues that often
arise when combining recordings from multiple scientific experiments. We
demonstrate the effectiveness of these developments using synthetic data and
electrophysiological recordings with an emphasis on how our learned
representations can be used to design scientific experiments.
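To make the failure mode concrete, below is a minimal PyTorch sketch contrasting the classic SVAE objective, in which the supervised loss backpropagates through the encoder and biases q(z|x), with a stop-gradient variant in the spirit of keeping the encoder a pure posterior approximation. The actual SOS-VAE routes supervision into the decoder through a second-order construction not reproduced here; the names (SupervisedVAE, svae_step, sos_style_step) and architecture sizes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SupervisedVAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8, y_dim=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))
        self.pred_head = nn.Linear(z_dim, y_dim)  # predicts outcome y from z

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar, z

def elbo_loss(x, x_hat, mu, logvar):
    rec = ((x - x_hat) ** 2).sum(dim=1).mean()  # reconstruction term
    kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)).mean()
    return rec + kl

def svae_step(model, x, y):
    # Classic SVAE: the supervised loss backpropagates into the encoder,
    # biasing q(z|x) away from the generative posterior (the issue above).
    x_hat, mu, logvar, z = model(x)
    return elbo_loss(x, x_hat, mu, logvar) + ((model.pred_head(z) - y) ** 2).mean()

def sos_style_step(model, x, y):
    # Stop-gradient variant: supervision no longer reshapes the encoder, so
    # the encoder is trained purely as a posterior approximation. (SOS-VAE
    # instead pushes supervision into the decoder via second-order terms.)
    x_hat, mu, logvar, z = model(x)
    sup = ((model.pred_head(z.detach()) - y) ** 2).mean()
    return elbo_loss(x, x_hat, mu, logvar) + sup
```

Note that the detach only severs the supervision path; the ELBO still trains the encoder and decoder jointly, which is consistent with the abstract's observation that a reconstruction loss alone cannot reveal the encoder bias.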
Related papers
- DINAMO: Dynamic and INterpretable Anomaly MOnitoring for Large-Scale Particle Physics Experiments [0.0]
We present novel, interpretable, robust, and scalable data quality monitoring (DQM) algorithms designed to automate anomaly detection.
Our approach constructs evolving histogram templates with built-in uncertainties, featuring a statistical variant; a minimal sketch of the template idea follows this entry.
Experiments on synthetic datasets demonstrate the high accuracy, adaptability, and interpretability of these methods.
arXiv Detail & Related papers (2025-01-31T15:51:41Z)
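As a rough illustration of an evolving template with built-in uncertainty, here is a hedged Python sketch: an exponentially weighted per-bin mean and variance, with a chi-square-style score for flagging deviating runs. DINAMO's statistical variant differs in detail; the class name, decay rate, and threshold are illustrative assumptions.

```python
import numpy as np

class EvolvingTemplate:
    def __init__(self, n_bins, decay=0.9, eps=1e-6):
        self.mean = np.zeros(n_bins)  # per-bin template mean
        self.var = np.ones(n_bins)    # per-bin template variance
        self.decay, self.eps = decay, eps

    def score(self, hist):
        # Reduced chi-square distance between a new histogram and the template.
        return float(np.mean((hist - self.mean) ** 2 / (self.var + self.eps)))

    def update(self, hist):
        # Exponentially weighted update lets the template track slow drift.
        delta = hist - self.mean
        self.mean += (1 - self.decay) * delta
        self.var = self.decay * (self.var + (1 - self.decay) * delta ** 2)

# Usage: score each incoming run, then fold runs deemed normal into the template.
template = EvolvingTemplate(n_bins=50)
for hist in np.random.poisson(100, size=(20, 50)) / 100.0:
    if template.score(hist) < 5.0:  # illustrative anomaly threshold
        template.update(hist)

print(template.score(np.full(50, 3.0)))  # a shifted run should score much higher
```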
- Geometry-Preserving Encoder/Decoder in Latent Generative Models [13.703752179071333]
We introduce a novel encoder/decoder framework with theoretical properties distinct from those of the VAE.
We demonstrate the significant advantages of this geometry-preserving encoder in the training process of both the encoder and decoder.
arXiv Detail & Related papers (2025-01-16T23:14:34Z)
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigate whether this data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained on the augmented data; the workflow is sketched after this entry.
arXiv Detail & Related papers (2024-10-24T18:15:48Z)
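The augmentation workflow is simple enough to sketch. The following hedged Python snippet assumes a trained VAE exposing a `dec` attribute (as in the earlier sketch) and a user-supplied labeling function `label_fn`; both names are illustrative, not from the paper.

```python
import torch

def augment_dataset(vae, x_real, y_real, label_fn, n_synth=1000, z_dim=8):
    # Draw latents from the VAE prior and decode them into synthetic inputs.
    with torch.no_grad():
        z = torch.randn(n_synth, z_dim)
        x_synth = vae.dec(z)
        y_synth = label_fn(x_synth)  # e.g., a simulator or surrogate model
    # Train the downstream DNN on the union of real and synthetic data.
    return torch.cat([x_real, x_synth]), torch.cat([y_real, y_synth])
```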
- Protect Before Generate: Error Correcting Codes within Discrete Deep Generative Models [3.053842954605396]
We introduce a novel method that enhances variational inference in discrete latent variable models.
We leverage Error Correcting Codes (ECCs) to introduce redundancy in the latent representations.
This redundancy is then exploited by the variational posterior to yield more accurate estimates; a repetition-code sketch follows this entry.
arXiv Detail & Related papers (2024-10-10T11:59:58Z)
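To illustrate the redundancy idea with the simplest possible ECC, here is a hedged Python sketch using a repetition code: each latent bit is replicated r times, and the posterior pools the per-copy evidence (summing logits acts as a soft majority vote). The paper's construction is more general; r=3 and the pooling rule are illustrative assumptions.

```python
import torch

def ecc_encode(bits, r=3):
    # bits: (batch, k) in {0, 1} -> (batch, k*r) redundant code word.
    return bits.repeat_interleave(r, dim=1)

def ecc_posterior_decode(code_logits, r=3):
    # code_logits: (batch, k*r) posterior logits for each redundant bit.
    # Summing logits over the r copies is a soft majority vote, so noisy
    # per-copy estimates are corrected before reading out the k bits.
    b, n = code_logits.shape
    pooled = code_logits.view(b, n // r, r).sum(dim=2)
    return torch.sigmoid(pooled)  # posterior over the k information bits

# Usage: corrupt the code word's logits, then recover the original bits.
z = torch.randint(0, 2, (4, 5)).float()
noisy = 4.0 * (2 * ecc_encode(z) - 1) + torch.randn(4, 15)
recovered = (ecc_posterior_decode(noisy) > 0.5).float()
print((recovered == z).float().mean())  # close to 1.0 with high probability
```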
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map; a minimal sketch follows this entry.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
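The linear latent prior is easy to sketch. The snippet below models p(z_t | z_{t-1}) as a learned linear map plus unit Gaussian noise; KoVAE itself contains more machinery, and the class name and fixed noise scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LinearLatentPrior(nn.Module):
    def __init__(self, z_dim=8):
        super().__init__()
        self.A = nn.Linear(z_dim, z_dim, bias=False)  # Koopman-style operator

    def log_prob(self, z):
        # z: (batch, T, z_dim); score each transition under N(A z_{t-1}, I),
        # up to an additive constant.
        pred = self.A(z[:, :-1])
        return -0.5 * ((z[:, 1:] - pred) ** 2).sum(dim=(1, 2))

    def sample(self, z0, steps):
        zs = [z0]
        for _ in range(steps):
            zs.append(self.A(zs[-1]) + torch.randn_like(z0))
        return torch.stack(zs, dim=1)  # (batch, steps+1, z_dim)

prior = LinearLatentPrior()
traj = prior.sample(torch.randn(2, 8), steps=10)
print(prior.log_prob(traj).shape)  # torch.Size([2])
```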
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models; the quantization step is sketched after this entry.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
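A hedged sketch of per-dimension latent quantization: each latent coordinate snaps to the nearest value in a small learned codebook for that dimension, with a straight-through estimator so gradients still reach the encoder. The codebook size and the straight-through trick are standard choices and may differ from the published method.

```python
import torch
import torch.nn as nn

class LatentQuantizer(nn.Module):
    def __init__(self, z_dim=8, n_values=10):
        super().__init__()
        # One small scalar codebook per latent dimension.
        self.codebook = nn.Parameter(torch.linspace(-2, 2, n_values).repeat(z_dim, 1))

    def forward(self, z):
        # z: (batch, z_dim); squared distance of each coordinate to its codebook.
        dist = (z.unsqueeze(-1) - self.codebook.unsqueeze(0)) ** 2
        idx = dist.argmin(dim=-1)  # index of the nearest codebook value
        z_q = torch.gather(self.codebook.expand(z.shape[0], -1, -1), 2,
                           idx.unsqueeze(-1)).squeeze(-1)
        return z + (z_q - z).detach()  # straight-through gradient estimator

q = LatentQuantizer()
print(q(torch.randn(4, 8)).shape)  # torch.Size([4, 8])
```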
- DAE: Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE)
It uses a regular LSTM-based auto-encoder as its baseline, but with several decoders, each receiving data from a specific flight phase; a minimal sketch follows this entry.
Results show that the DAE performs better in both detection accuracy and detection speed.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
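The multi-decoder layout can be sketched directly: one shared LSTM encoder and a separate LSTM decoder per flight phase, with reconstruction error as the anomaly score. The phase names, dimensions, and class name below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

PHASES = ["climb", "cruise", "descent"]  # assumed flight-phase partition

class MultiDecoderAE(nn.Module):
    def __init__(self, n_feat=16, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_feat, hidden, batch_first=True)
        self.decoders = nn.ModuleDict({
            p: nn.LSTM(hidden, n_feat, batch_first=True) for p in PHASES
        })

    def forward(self, x, phase):
        h, _ = self.encoder(x)              # (batch, T, hidden), shared
        x_hat, _ = self.decoders[phase](h)  # phase-specific decoder
        return x_hat

    def anomaly_score(self, x, phase):
        # Mean squared reconstruction error per window.
        return ((self.forward(x, phase) - x) ** 2).mean(dim=(1, 2))

model = MultiDecoderAE()
print(model.anomaly_score(torch.randn(4, 50, 16), "cruise").shape)  # [4]
```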
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of VAEs failing to consistently encode samples generated by their own decoders, and the consequences of fixing this by introducing a notion of self-consistency; a sketch of such a penalty follows this entry.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
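One way to express a self-consistency penalty, reusing the attribute names from the SVAE sketch near the top (enc, mu, dec, all assumptions): latents drawn from the prior should be recovered when the decoder's output is re-encoded. The paper's formulation differs in detail; the squared-error form is illustrative.

```python
import torch

def self_consistency_loss(model, batch_size=32, z_dim=8):
    z = torch.randn(batch_size, z_dim)        # sample latents from the prior
    x_gen = model.dec(z)                      # generate from the decoder
    mu = model.mu(model.enc(x_gen))           # re-encode the generated data
    return ((mu - z) ** 2).sum(dim=1).mean()  # encoder should recover z
```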
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of the Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.