Uniform Interpolation Constrained Geodesic Learning on Data Manifold
- URL: http://arxiv.org/abs/2002.04829v4
- Date: Fri, 14 Aug 2020 05:32:56 GMT
- Title: Uniform Interpolation Constrained Geodesic Learning on Data Manifold
- Authors: Cong Geng, Jia Wang, Li Chen, Wenbo Bao, Chu Chu, Zhiyong Gao
- Abstract summary: Along the learned geodesic, our method can generate high-quality interpolations between two given data samples.
We provide a theoretical analysis of our model and use image translation as an example to demonstrate the effectiveness of our method.
- Score: 28.509561636926414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a method to learn a minimizing geodesic within a
data manifold. Along the learned geodesic, our method can generate high-quality
interpolations between two given data samples. Specifically, we use an
autoencoder network to map data samples into latent space and perform
interpolation via an interpolation network. We add prior geometric information
to regularize our autoencoder for the convexity of representations so that for
any given interpolation approach, the generated interpolations remain within
the distribution of the data manifold. Before the learning of a geodesic, a
proper Riemannianmetric should be defined. Therefore, we induce a Riemannian
metric by the canonical metric in the Euclidean space which the data manifold
is isometrically immersed in. Based on this defined Riemannian metric, we
introduce a constant speed loss and a minimizing geodesic loss to regularize
the interpolation network to generate uniform interpolation along the learned
geodesic on the manifold. We provide a theoretical analysis of our model and
use image translation as an example to demonstrate the effectiveness of our
method.
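The two regularizers described above admit a compact numerical sketch. The following is a minimal illustration under stated assumptions, not the authors' implementation: `decoder` stands for the autoencoder's decoder, `z` for discretized latent points produced by the interpolation network, and curve length is measured with the ambient Euclidean metric, matching the isometric-immersion setup above.

```python
# Minimal sketch (not the authors' code) of the two regularizers described
# in the abstract, assuming a decoder mapping latent points to data space.
import torch

def geodesic_losses(decoder, z):
    """z: (T+1, d) latent points along the interpolated curve."""
    x = decoder(z)                        # curve in data space, shape (T+1, ...)
    seg = x[1:] - x[:-1]                  # chords between consecutive points
    seg_len = seg.flatten(1).norm(dim=1)  # lengths under the ambient Euclidean
                                          # metric (the induced Riemannian metric)
    length_loss = seg_len.sum()           # minimizing-geodesic loss: shorten
                                          # the discretized curve
    speed_loss = seg_len.var()            # constant-speed loss: equalize segment
                                          # lengths, i.e. uniform interpolation
    return length_loss, speed_loss
```

Both losses can be added to the reconstruction objective; driving the variance of the segment lengths to zero is what makes the interpolation uniform along the curve.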
Related papers
- Geometry-Aware Generative Autoencoders for Warped Riemannian Metric Learning and Generative Modeling on Data Manifolds [18.156807299614503]
We introduce Geometry-Aware Generative Autoencoder (GAGA), a novel framework that combines manifold learning with generative modeling.
GAGA shows competitive performance on simulated and real-world datasets, including a 30% improvement over state-of-the-art methods in single-cell population-level trajectory inference.
arXiv Detail & Related papers (2024-10-16T17:53:26Z)
- Score-based pullback Riemannian geometry [10.649561636926414]
We propose a framework for data-driven Riemannian geometry that is scalable in both geometry and learning.
Our framework produces high-quality geodesics through the data support and reliably estimates the intrinsic dimension of the data manifold.
Our framework can naturally be used with anisotropic normalizing flows by adopting isometry regularization during training.
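For context, the pullback construction referenced in the title works as follows in general: a smooth map phi into Euclidean space pulls the flat metric back to G(x) = J_phi(x)^T J_phi(x). A generic sketch of that construction (illustrative, not this paper's implementation):

```python
# Generic pullback of the Euclidean metric through a smooth map `phi`
# (e.g., a normalizing flow); illustrative only, not the paper's code.
import torch
from torch.autograd.functional import jacobian

def pullback_metric(phi, x):
    """Return G(x) = J_phi(x)^T J_phi(x) for a single point x of shape (d,)."""
    J = jacobian(phi, x)   # (d_out, d) Jacobian of phi at x
    return J.T @ J         # (d, d) Riemannian metric tensor at x
```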
arXiv Detail & Related papers (2024-10-02T18:52:12Z)
- (Deep) Generative Geodesics [57.635187092922976]
We introduce a new Riemannian metric to assess the similarity between any two data points.
Our metric leads to the conceptual definition of generative distances and generative geodesics.
Their approximations are proven to converge to their true values under mild conditions.
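The "generative distances" and "generative geodesics" mentioned here follow the standard Riemannian pattern; as a generic reminder (not paper-specific), a metric g assigns a length to every curve, and the induced distance is the infimum over curves joining the two points:

```latex
% Standard Riemannian definitions of curve length and induced distance.
L(\gamma) = \int_0^1 \sqrt{g_{\gamma(t)}\bigl(\dot\gamma(t),\,\dot\gamma(t)\bigr)}\,dt,
\qquad
d(x,y) = \inf_{\gamma:\ \gamma(0)=x,\ \gamma(1)=y} L(\gamma).
```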
arXiv Detail & Related papers (2024-07-15T21:14:02Z)
- A Heat Diffusion Perspective on Geodesic Preserving Dimensionality Reduction [66.21060114843202]
We propose a more general heat kernel based manifold embedding method that we call heat geodesic embeddings.
Results show that our method outperforms existing state of the art in preserving ground truth manifold distances.
We also showcase our method on single cell RNA-sequencing datasets with both continuum and cluster structure.
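The classical bridge between heat diffusion and geodesic distance that this line of work builds on is Varadhan's formula: for small times, the heat kernel p_t encodes squared geodesic distance (a standard result, stated here for context):

```latex
% Varadhan's formula: small-time heat kernel asymptotics recover distance.
d(x,y)^2 = \lim_{t \to 0^+} \, -4t \log p_t(x,y).
```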
arXiv Detail & Related papers (2023-05-30T13:58:50Z)
- Short and Straight: Geodesics on Differentiable Manifolds [6.85316573653194]
In this work, we first analyse existing methods for computing length-minimising geodesics.
Second, we propose a model-based parameterisation for distance fields and geodesic flows on continuous manifolds.
Third, we develop a curvature-based training mechanism, sampling and scaling points in regions of the manifold exhibiting larger values of the Ricci scalar.
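For background on the distance fields and geodesic flows mentioned in the second point: away from the cut locus, a geodesic distance field satisfies the eikonal equation, and following its negative gradient traces a unit-speed minimizing geodesic toward the target (standard facts, not specific to this paper):

```latex
% Eikonal characterisation of a geodesic distance field, and the
% gradient flow that traces geodesics toward the target point y.
\|\nabla_x\, d(x, y)\|_g = 1 \quad \text{for } x \neq y,
\qquad
\dot\gamma(t) = -\nabla_x\, d\bigl(\gamma(t),\, y\bigr).
```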
arXiv Detail & Related papers (2023-05-24T15:09:41Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Information Entropy Initialized Concrete Autoencoder for Optimal Sensor Placement and Reconstruction of Geophysical Fields [58.720142291102135]
We propose a new approach to the optimal placement of sensors for reconstructing geophysical fields from sparse measurements.
We demonstrate our method on two examples: (a) temperature and (b) salinity fields around the Barents Sea and the Svalbard group of islands.
We find that the obtained optimal sensor locations have a clear physical interpretation and correspond to the boundaries between sea currents.
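For background, the "concrete autoencoder" in the title selects sensors with a Gumbel-softmax (concrete) relaxation: each sensor slot learns logits over candidate locations and samples a relaxed one-hot weight vector. A hedged, generic sketch of that selection layer (names are illustrative, not the paper's):

```python
# Generic concrete/Gumbel-softmax sensor-selection layer; illustrative only.
import torch
import torch.nn.functional as F

class ConcreteSelector(torch.nn.Module):
    def __init__(self, n_candidates, n_sensors, temperature=1.0):
        super().__init__()
        # one row of logits per sensor slot, over all candidate locations
        self.logits = torch.nn.Parameter(torch.zeros(n_sensors, n_candidates))
        self.temperature = temperature

    def forward(self, x):               # x: (batch, n_candidates) field values
        # relaxed one-hot selection per sensor; anneal temperature in training
        w = F.gumbel_softmax(self.logits, tau=self.temperature, hard=False)
        return x @ w.T                  # (batch, n_sensors) selected readings
```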
arXiv Detail & Related papers (2022-06-28T12:43:38Z)
- Inferring Manifolds From Noisy Data Using Gaussian Processes [17.166283428199634]
Most existing manifold learning algorithms replace the original data with lower dimensional coordinates.
This article proposes a new methodology for addressing these problems, allowing interpolation of the estimated manifold between fitted data points.
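As a generic reminder of the tool involved (not this paper's methodology), a Gaussian-process posterior mean already gives a smooth interpolant through noisy samples; the contribution here is to use such machinery to interpolate the estimated manifold itself. A minimal RBF-kernel sketch:

```python
# Generic Gaussian-process regression sketch; illustrative only.
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel matrix between rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_mean(X, y, Xq, noise=1e-2):
    """Posterior mean at query points Xq given noisy samples (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xq, X) @ np.linalg.solve(K, y)
```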
arXiv Detail & Related papers (2021-10-14T15:50:38Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method based on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Feature-Based Interpolation and Geodesics in the Latent Spaces of Generative Models [10.212371817325065]
Interpolating between points is a problem connected simultaneously with finding geodesics and with the study of generative models.
We provide examples that allow us to simultaneously search for geodesics and interpolating curves in latent space in the case of an arbitrary density.
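Two baseline interpolation curves that latent-space geodesic methods are typically compared against are the straight line and spherical linear interpolation (slerp), the latter better respecting a Gaussian prior's concentration on a spherical shell. A generic sketch (not the paper's code):

```python
# Generic latent-space interpolation baselines; illustrative only.
import numpy as np

def lerp(z0, z1, t):
    """Straight line between latent codes z0 and z1, t in [0, 1]."""
    return (1 - t) * z0 + t * z1

def slerp(z0, z1, t):
    """Spherical interpolation between latent codes, t in [0, 1]."""
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.isclose(omega, 0.0):          # nearly parallel: fall back to lerp
        return lerp(z0, z1, t)
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)
```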
arXiv Detail & Related papers (2019-04-06T13:47:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.