DE-VAE: Revealing Uncertainty in Parametric and Inverse Projections with Variational Autoencoders using Differential Entropy
- URL: http://arxiv.org/abs/2508.12145v3
- Date: Fri, 12 Sep 2025 12:37:11 GMT
- Title: DE-VAE: Revealing Uncertainty in Parametric and Inverse Projections with Variational Autoencoders using Differential Entropy
- Authors: Frederik L. Dennig, Daniel A. Keim
- Abstract summary: We propose DE-VAE, an uncertainty-aware variational AE to improve the learned parametric and invertible projections. Given a fixed projection, we train DE-VAE to learn a mapping into 2D space and an inverse mapping back to the original space. Our findings show that DE-VAE can create parametric and inverse projections with comparable accuracy to other current AE-based approaches.
- Score: 7.23728913278294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, autoencoders (AEs) have gained interest for creating parametric and invertible projections of multidimensional data. Parametric projections make it possible to embed new, unseen samples without recalculating the entire projection, while invertible projections allow the synthesis of new data instances. However, existing methods perform poorly when dealing with out-of-distribution samples in either the data or embedding space. Thus, we propose DE-VAE, an uncertainty-aware variational AE using differential entropy (DE) to improve the learned parametric and invertible projections. Given a fixed projection, we train DE-VAE to learn a mapping into 2D space and an inverse mapping back to the original space. We conduct quantitative and qualitative evaluations on four well-known datasets, using UMAP and t-SNE as baseline projection methods. Our findings show that DE-VAE can create parametric and inverse projections with comparable accuracy to other current AE-based approaches while enabling the analysis of embedding uncertainty.
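The abstract gives the idea but no implementation details, so the following is a minimal PyTorch sketch of an uncertainty-aware VAE in the spirit of DE-VAE, not the authors' implementation: the layer sizes, loss weights, and the squared-error term that pins the latent mean to the fixed UMAP/t-SNE coordinates are all assumptions, and the names (`DEVAE`, `differential_entropy`, `de_vae_loss`) are illustrative.

```python
# Minimal sketch in the spirit of DE-VAE (assumptions, not the paper's code):
# a VAE with a 2D latent whose mean is pinned to a fixed projection, plus the
# differential entropy of the Gaussian posterior as a per-point uncertainty score.
import math
import torch
import torch.nn as nn

class DEVAE(nn.Module):
    def __init__(self, dim_data: int, dim_latent: int = 2, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(               # parametric projection
            nn.Linear(dim_data, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * dim_latent),      # outputs (mu, log-variance)
        )
        self.decoder = nn.Sequential(               # inverse projection
            nn.Linear(dim_latent, hidden), nn.ReLU(),
            nn.Linear(hidden, dim_data),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

def differential_entropy(logvar):
    """DE of a diagonal Gaussian: 0.5 * sum_i log(2*pi*e*sigma_i^2)."""
    k = logvar.shape[-1]
    return 0.5 * (k * math.log(2 * math.pi * math.e) + logvar.sum(dim=-1))

def de_vae_loss(model, x, y_proj, beta=1e-3, lam=1.0):
    """Reconstruction + matching the fixed 2D projection y_proj + KL regularizer."""
    x_hat, mu, logvar = model(x)
    rec = ((x_hat - x) ** 2).sum(dim=-1).mean()
    proj = ((mu - y_proj) ** 2).sum(dim=-1).mean()  # pin mu to UMAP/t-SNE coords
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1).mean()
    return rec + lam * proj + beta * kl
```

After training, `differential_entropy(logvar)` yields a per-sample score that can be visualized over the 2D embedding; high values would flag regions, such as out-of-distribution samples, where the parametric or inverse mapping is least reliable, which is the kind of embedding-uncertainty analysis the abstract describes.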
Related papers
- Evaluating Autoencoders for Parametric and Invertible Multidimensional Projections [5.605792949080119]
We evaluate three autoencoder architectures for creating parametric and invertible projections. Based on a given projection, we train AEs to learn a mapping into 2D space and an inverse mapping into the original space. Our results indicate that AEs with a customized loss function can create smoother parametric and inverse projections than feed-forward neural networks.
arXiv Detail & Related papers (2025-04-23T15:47:20Z)
- Nonparametric Automatic Differentiation Variational Inference with Spline Approximation [7.5620760132717795]
We develop a nonparametric approximation approach that enables flexible posterior approximation for distributions with complicated structures.
Compared with widely used nonparametric inference methods, the proposed method is easy to implement and adaptive to various data structures.
Experiments demonstrate the efficiency of the proposed method in approximating complex posterior distributions and improving the performance of generative models with incomplete data.
arXiv Detail & Related papers (2024-03-10T20:22:06Z)
- Automatic Parameterization for Aerodynamic Shape Optimization via Deep Geometric Learning [60.69217130006758]
We propose two deep learning models that fully automate shape parameterization for aerodynamic shape optimization.
Both models are optimized to parameterize via deep geometric learning to embed human prior knowledge into learned geometric patterns.
We perform shape optimization experiments on 2D airfoils and discuss the applicable scenarios for the two models.
arXiv Detail & Related papers (2023-05-03T13:45:40Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, resulting in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z)
- DPVIm: Differentially Private Variational Inference Improved [13.761202518891329]
Differentially private (DP) release of multidimensional statistics typically considers an aggregate sensitivity, such as the norm of the released vector. Different dimensions of that vector might have widely different magnitudes, so DP perturbation disproportionately affects the signal across dimensions.
We observe this problem in the gradient release of the DP-SGD algorithm when using it for variational inference (VI); a small sketch of the effect follows.
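To make the magnitude-mismatch problem concrete, here is a minimal sketch (not from the paper): Gaussian-mechanism noise calibrated to an aggregate L2 sensitivity is isotropic, so it barely perturbs a large gradient coordinate while overwhelming the small ones. The gradient values, clipping bound, and noise multiplier are arbitrary illustrative choices.

```python
# Illustration only (not the DPVIm algorithm): isotropic Gaussian noise scaled
# to the aggregate L2 sensitivity drowns the small-magnitude dimensions.
import numpy as np

rng = np.random.default_rng(0)
grad = np.array([50.0, 0.5, 0.05])   # coordinates with widely different magnitudes
clip_norm = 60.0                     # assumed clipping bound = aggregate L2 sensitivity
noise_multiplier = 1.0               # assumed sigma of the Gaussian mechanism

# Standard DP-SGD-style clipping, then isotropic Gaussian perturbation.
clipped = grad * min(1.0, clip_norm / np.linalg.norm(grad))
noisy = clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)

# Per-dimension relative error: ~1 for the large coordinate, ~10^3 for the smallest.
print(np.abs(noisy - grad) / np.abs(grad))
```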
arXiv Detail & Related papers (2022-10-28T07:41:32Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding [52.48298164494608]
Variational autoencoders (VAEs) estimate the posterior parameters of the latent variables corresponding to each input sample. This paper provides a quantitative understanding of VAE properties through differential-geometric and information-theoretic interpretations of the VAE; the standard objective these interpretations build on is recalled below.
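For reference, the textbook VAE objective (not specific to this paper) is the evidence lower bound with a diagonal-Gaussian approximate posterior, whose parameters mu_phi(x) and sigma_phi^2(x) are the per-input posterior parameters mentioned above:

```latex
\mathcal{L}(x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  - \mathrm{KL}\big(q_\phi(z \mid x) \,\Vert\, p(z)\big),
\qquad
q_\phi(z \mid x) = \mathcal{N}\big(z;\ \mu_\phi(x),\ \operatorname{diag}(\sigma_\phi^2(x))\big)
```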
arXiv Detail & Related papers (2020-07-30T02:37:46Z)
- Two-Dimensional Semi-Nonnegative Matrix Factorization for Clustering [50.43424130281065]
We propose a new Semi-Nonnegative Matrix Factorization method for 2-dimensional (2D) data, named TS-NMF.
It overcomes a drawback of existing methods, which seriously damage the spatial information of the data by converting 2D data to vectors in a preprocessing step.
arXiv Detail & Related papers (2020-05-19T05:54:14Z)