Decoder ensembling for learned latent geometries
- URL: http://arxiv.org/abs/2408.07507v1
- Date: Wed, 14 Aug 2024 12:35:41 GMT
- Title: Decoder ensembling for learned latent geometries
- Authors: Stas Syrota, Pablo Moreno-Muñoz, Søren Hauberg
- Abstract summary: We show how to easily compute geodesics on the associated expected manifold.
We find this simple and reliable, thereby coming one step closer to easy-to-use latent geometries.
- Score: 15.484595752241122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent space geometry provides a rigorous and empirically valuable framework for interacting with the latent variables of deep generative models. This approach reinterprets Euclidean latent spaces as Riemannian through a pull-back metric, allowing for a standard differential geometric analysis of the latent space. Unfortunately, data manifolds are generally compact and easily disconnected or filled with holes, suggesting a topological mismatch to the Euclidean latent space. The most established solution to this mismatch is to let uncertainty be a proxy for topology, but in neural network models, this is often realized through crude heuristics that lack principle and generally do not scale to high-dimensional representations. We propose using ensembles of decoders to capture model uncertainty and show how to easily compute geodesics on the associated expected manifold. Empirically, we find this simple and reliable, thereby coming one step closer to easy-to-use latent geometries.
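The abstract's recipe is compact enough to sketch. Below is a minimal PyTorch illustration, not the authors' implementation, of the two ingredients described: an expected pull-back metric averaged over an ensemble of decoders, and a geodesic found by minimizing the discretized curve energy. The averaging form E[JᵀJ], the helper names, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch, not the paper's code. Assumes each decoder is a
# torch.nn.Module mapping a latent vector of shape (d,) to an output (D,).
import torch

def expected_metric(decoders, z):
    """One plausible expected pull-back metric: G(z) ~ E_k[J_k(z)^T J_k(z)]."""
    G = 0.0
    for dec in decoders:
        # create_graph=True so the energy below can backpropagate through G.
        J = torch.autograd.functional.jacobian(dec, z, create_graph=True)
        G = G + J.T @ J
    return G / len(decoders)

def geodesic(decoders, z0, z1, n_points=16, steps=200, lr=1e-2):
    """Minimize the discrete curve energy sum_i dz_i^T G(z_i) dz_i.

    z0, z1: fixed 1-D latent endpoints (no grad); interior points are free.
    """
    t = torch.linspace(0.0, 1.0, n_points)[1:-1, None]
    inner = ((1 - t) * z0 + t * z1).clone().requires_grad_(True)
    opt = torch.optim.Adam([inner], lr=lr)
    for _ in range(steps):
        curve = torch.cat([z0[None], inner, z1[None]], dim=0)
        dz = curve[1:] - curve[:-1]
        # Metric evaluated at the left endpoint of each segment (crude rule).
        energy = sum(d @ expected_metric(decoders, z) @ d
                     for z, d in zip(curve[:-1], dz))
        opt.zero_grad()
        energy.backward()
        opt.step()
    return torch.cat([z0[None], inner.detach(), z1[None]], dim=0)
```

A midpoint quadrature or spline parametrization of the curve would be natural refinements of this left-endpoint discretization.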
Related papers
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
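As orientation for "quadratic optimal transport" here: the Gromov-Monge problem compares pairwise costs before and after transport. The paper's exact objective and its "gap" are not reproduced in this summary, so the display below is only the generic form, with cost functions c_X, c_Y (e.g., squared distances) as assumptions.

```latex
% Generic Gromov-Monge objective (assumed form, not quoted from the paper):
% seek a map T pushing mu onto nu that preserves pairwise costs.
\inf_{T_{\#}\mu \,=\, \nu} \;
  \int\!\!\int \Big( c_X(x, x') - c_Y\big(T(x), T(x')\big) \Big)^2 \, d\mu(x)\, d\mu(x')
```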
arXiv Detail & Related papers (2024-07-10T16:51:32Z)
- Topological Obstructions and How to Avoid Them [22.45861345237023]
We show that local optima can arise due to singularities or an incorrect degree or winding number.
We propose a new flow-based model that maps data points to multimodal distributions over geometric spaces.
arXiv Detail & Related papers (2023-12-12T18:56:14Z)
- Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z)
- Exploring Data Geometry for Continual Learning [64.4358878435983]
We study continual learning from a novel perspective by exploring data geometry for non-stationary streams of data.
Our method dynamically expands the geometry of the underlying space to match growing geometric structures induced by new data.
Experiments show that our method achieves better performance than baseline methods designed in Euclidean space.
arXiv Detail & Related papers (2023-04-08T06:35:25Z)
- Topological Singularity Detection at Multiple Scales [11.396560798899413]
Real-world data exhibits distinct non-manifold structures that can lead to erroneous findings.
We develop a framework that quantifies the local intrinsic dimension and yields a Euclidicity score for assessing the 'manifoldness' of a point across multiple scales.
Our approach identifies singularities of complex spaces, while also capturing singular structures and local geometric complexity in image data.
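The summary does not spell out the Euclidicity construction, so the sketch below is a generic stand-in, not the paper's method: a multi-scale local-PCA estimate of intrinsic dimension, where scale-dependent jumps in the estimate flag candidate singularities.

```python
# Generic illustration only; not the paper's Euclidicity score.
import numpy as np

def local_intrinsic_dims(X, x, radii, var_threshold=0.95):
    """Estimate local intrinsic dimension around point x at several scales."""
    dims = []
    for r in radii:
        nbrs = X[np.linalg.norm(X - x, axis=1) < r]
        if len(nbrs) < 3:
            dims.append(np.nan)  # too few neighbors at this scale
            continue
        # PCA on the centered neighborhood: count components needed to
        # explain var_threshold of the local variance.
        _, s, _ = np.linalg.svd(nbrs - nbrs.mean(axis=0), full_matrices=False)
        var = np.cumsum(s**2) / np.sum(s**2)
        dims.append(int(np.searchsorted(var, var_threshold) + 1))
    return dims  # jumps across scales hint at non-manifold points
```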
arXiv Detail & Related papers (2022-09-30T20:00:32Z)
- Towards Modeling and Resolving Singular Parameter Spaces using Stratifolds [18.60761407945024]
In learning dynamics, singularities can act as attractors on the learning trajectory and, therefore, negatively influence the convergence speed of models.
We propose a general approach to circumvent the problem arising from singularities by using stratifolds.
We empirically show that using (natural) gradient descent on the smooth manifold approximation instead of the singular space allows us to avoid the attractor behavior and therefore improve the convergence speed in learning.
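The update the last sentence refers to is ordinary natural gradient descent; a minimal sketch follows (generic, not the stratifold construction itself), where `fisher` and `grad` are assumed precomputed and the damping term keeps the solve stable near flat directions.

```python
# Generic natural-gradient step; the stratifold approximation is assumed
# to have been applied already so that the Fisher matrix is well-behaved.
import torch

def natural_gradient_step(theta, grad, fisher, lr=1e-2, damping=1e-4):
    """theta <- theta - lr * F^{-1} grad, with damping for stability."""
    F = fisher + damping * torch.eye(fisher.shape[0], dtype=fisher.dtype)
    step = torch.linalg.solve(F, grad)
    return theta - lr * step
```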
arXiv Detail & Related papers (2021-12-07T14:42:45Z)
- Pulling back information geometry [3.0273878903284266]
We show that we can achieve meaningful latent geometries for a wide range of decoder distributions.
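For context, "pulling back information geometry" replaces the Euclidean pull-back metric with a Fisher-Rao one; schematically (an assumed standard form, not quoted from the paper):

```latex
% Schematic pull-back of the Fisher-Rao metric (assumed form): the decoder f
% maps z to the parameters of p(x | f(z)) with Fisher information M.
G(z) \;=\; J_f(z)^\top \, M\big(f(z)\big) \, J_f(z)
```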
arXiv Detail & Related papers (2021-06-09T20:16:28Z)
- A Unifying and Canonical Description of Measure-Preserving Diffusions [60.59592461429012]
A complete recipe of measure-preserving diffusions in Euclidean space was recently derived unifying several MCMC algorithms into a single framework.
We develop a geometric theory that improves and generalises this construction to any manifold.
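For context, the Euclidean "complete recipe" is usually written as a single SDE targeting a density proportional to exp(-H), where choices of a positive-semidefinite D and a skew-symmetric Q recover different MCMC samplers; the cited work lifts this construction to general manifolds. The display below is the assumed standard form:

```latex
% Assumed to be the standard Euclidean "complete recipe"; D positive
% semidefinite, Q skew-symmetric. Not quoted from the paper itself.
d\theta = \Big[ -\big(D(\theta) + Q(\theta)\big)\nabla H(\theta) + \Gamma(\theta) \Big]\, dt
          + \sqrt{2 D(\theta)}\; dW_t,
\qquad
\Gamma_i(\theta) = \sum_j \frac{\partial}{\partial \theta_j}\big(D_{ij}(\theta) + Q_{ij}(\theta)\big)
```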
arXiv Detail & Related papers (2021-05-06T17:36:55Z)
- Quadric hypersurface intersection for manifold learning in feature space [52.83976795260532]
We present a manifold learning technique suitable for moderately high dimension and large datasets.
The manifold is learned from the training data as an intersection of quadric hypersurfaces.
At test time, this manifold can be used to introduce an outlier score for arbitrary new points.
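One plausible reading of that score (a sketch under assumptions, not the paper's code): with fitted quadrics q_i(x) = x^T A_i x + b_i^T x + c_i whose common zero set approximates the manifold, the summed squared residual of a new point serves as its outlier score.

```python
# Illustrative residual-based score; the fitting of (A, b, c) is assumed done.
import numpy as np

def outlier_score(x, quadrics):
    """Sum of squared quadric residuals; near zero on the learned manifold.

    `quadrics` is a list of (A, b, c) triples with q(x) = x^T A x + b^T x + c.
    """
    return sum((x @ A @ x + b @ x + c) ** 2 for A, b, c in quadrics)
```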
arXiv Detail & Related papers (2021-02-11T18:52:08Z)
- Geometry-Aware Hamiltonian Variational Auto-Encoder [0.0]
Variational auto-encoders (VAEs) have proven to be a well-suited tool for performing dimensionality reduction by extracting latent variables lying in a potentially much smaller dimensional space than the data.
However, such generative models may perform poorly when trained on the small data sets that are common in many real-life fields such as medicine.
We argue that such latent space modelling provides useful information about its underlying structure, leading to more meaningful interpolations, more realistic data generation and more reliable clustering.
arXiv Detail & Related papers (2020-10-22T08:26:46Z) - Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method for a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.