Riemannian generative decoder
- URL: http://arxiv.org/abs/2506.19133v1
- Date: Mon, 23 Jun 2025 21:06:13 GMT
- Title: Riemannian generative decoder
- Authors: Andreas Bjerregaard, Søren Hauberg, Anders Krogh
- Abstract summary: We present a new method for learning representations based on manifold-valued latents. Our method is compatible with existing architectures and yields interpretable latent spaces aligned with data geometry. We validate our approach on three case studies -- a synthetic branching diffusion process, human migrations inferred from mitochondrial DNA, and cells undergoing a cell division cycle.
- Score: 11.074080383657453
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Riemannian representation learning typically relies on approximating densities on chosen manifolds, which involves optimizing difficult objectives that can harm the model. To circumvent this issue entirely, we introduce the Riemannian generative decoder, which finds manifold-valued maximum-likelihood latents with a Riemannian optimizer while training a decoder network. By discarding the encoder, we vastly simplify the manifold constraint compared to current approaches, which often handle only a few specific manifolds. We validate our approach on three case studies -- a synthetic branching diffusion process, human migrations inferred from mitochondrial DNA, and cells undergoing a cell division cycle -- each showing that learned representations respect the prescribed geometry and capture intrinsic non-Euclidean structure. Our method requires only a decoder, is compatible with existing architectures, and yields interpretable latent spaces aligned with data geometry.
Related papers
- Riemannian Optimization on Relaxed Indicator Matrix Manifold [83.13494760649874]
The indicator matrix plays an important role in machine learning, but optimizing it is an NP-hard problem. We propose a new relaxation of the indicator matrix and prove that this relaxation forms a manifold, which we call the Relaxed Indicator Matrix Manifold (RIM manifold). We provide several retraction methods, including a fast retraction for obtaining geodesics.
arXiv Detail & Related papers (2025-03-26T12:45:52Z) - Riemann$^2$: Learning Riemannian Submanifolds from Riemannian Data [12.424539896723603]
Latent variable models are powerful tools for learning low-dimensional manifolds from high-dimensional data. This paper generalizes previous work and allows us to handle complex tasks in various domains, including robot motion synthesis and analysis of brain connectomes.
arXiv Detail & Related papers (2025-03-07T16:08:53Z) - Decoder ensembling for learned latent geometries [15.484595752241122]
We show how to easily compute geodesics on the associated expected manifold (see the sketch after this list).
We find this approach simple and reliable, bringing us one step closer to easy-to-use latent geometries.
arXiv Detail & Related papers (2024-08-14T12:35:41Z) - Motif-aware Riemannian Graph Neural Network with Generative-Contrastive
Learning [23.041843981988503]
We present a novel method for capturing motif regularity in a diverse-curvature manifold without labels.
We also introduce motif-aware generative-contrastive learning to capture motif regularity in the constructed manifold.
Empirical results show the superiority of MotifRGC over D-GCN.
arXiv Detail & Related papers (2024-01-02T14:58:26Z) - Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high-dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z) - Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes [57.396578974401734]
We introduce a principled framework for building a generative diffusion process on general manifolds.
Instead of following the denoising approach of previous diffusion models, we construct a diffusion process using a mixture of bridge processes.
We develop a geometric understanding of the mixture process, deriving the drift as a weighted mean of tangent directions to the data points.
arXiv Detail & Related papers (2023-10-11T06:04:40Z) - Manifold Learning by Mixture Models of VAEs for Inverse Problems [1.5749416770494704]
We learn a mixture model of variational autoencoders to represent a manifold of arbitrary topology.
We use it for solving inverse problems by minimizing a data fidelity term restricted to the learned manifold.
We demonstrate the performance of our method for low-dimensional toy examples as well as for deblurring and electrical impedance tomography.
arXiv Detail & Related papers (2023-03-27T14:29:04Z) - Semi-Supervised Manifold Learning with Complexity Decoupled Chart Autoencoders [45.29194877564103]
This work introduces a chart autoencoder with an asymmetric encoding-decoding process that can incorporate additional semi-supervised information such as class labels.
We discuss the approximation power of such networks and derive a bound that essentially depends on the intrinsic dimension of the data manifold rather than the dimension of the ambient space.
arXiv Detail & Related papers (2022-08-22T19:58:03Z) - The Dynamics of Riemannian Robbins-Monro Algorithms [101.29301565229265]
We propose a family of Riemannian algorithms generalizing and extending the seminal approximation framework of Robbins and Monro.
Compared to their Euclidean counterparts, Riemannian algorithms are much less understood due to the lack of a global linear structure on the manifold.
We provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes.
arXiv Detail & Related papers (2022-06-14T12:30:11Z) - Riemannian Score-Based Generative Modeling [56.20669989459281]
Score-based generative models (SGMs) demonstrate remarkable empirical performance.
Current SGMs make the underlying assumption that the data is supported on a Euclidean space with flat geometry.
This prevents the use of these models for applications in robotics, geoscience, or protein modeling.
arXiv Detail & Related papers (2022-02-06T11:57:39Z) - Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)