Manifold learning in metric spaces
- URL: http://arxiv.org/abs/2503.16187v1
- Date: Thu, 20 Mar 2025 14:37:40 GMT
- Title: Manifold learning in metric spaces
- Authors: Liane Xu, Amit Singer
- Abstract summary: Laplacian-based methods are popular for dimensionality reduction of data lying in $\mathbb{R}^N$. We provide a framework that generalizes the problem of manifold learning to metric spaces and study when a metric satisfies sufficient conditions for the pointwise convergence of the graph Laplacian.
- Score: 4.849550522970841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Laplacian-based methods are popular for dimensionality reduction of data lying in $\mathbb{R}^N$. Several theoretical results for these algorithms depend on the fact that the Euclidean distance approximates the geodesic distance on the underlying submanifold which the data are assumed to lie on. However, for some applications, other metrics, such as the Wasserstein distance, may provide a more appropriate notion of distance than the Euclidean distance. We provide a framework that generalizes the problem of manifold learning to metric spaces and study when a metric satisfies sufficient conditions for the pointwise convergence of the graph Laplacian.
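For intuition, here is a minimal sketch of the kernel graph Laplacian at the heart of such methods, assuming NumPy/SciPy; the Gaussian kernel, bandwidth, and circle data are illustrative choices, and the `metric` argument is where a non-Euclidean distance (e.g., a Wasserstein distance) would be plugged in, which is the substitution the paper's framework studies.

```python
# A minimal sketch, not the paper's code: random-walk graph Laplacian with a
# pluggable metric. The kernel, bandwidth, and test data are assumptions.
import numpy as np
from scipy.spatial.distance import cdist

def graph_laplacian(X, metric="euclidean", eps=0.1):
    """Random-walk normalized graph Laplacian built from pairwise distances."""
    D = cdist(X, X, metric=metric)   # swap in e.g. a Wasserstein distance here
    W = np.exp(-D**2 / (4 * eps))    # Gaussian kernel weights
    P = W / W.sum(axis=1, keepdims=True)
    return (np.eye(len(X)) - P) / eps  # scaling used in pointwise-limit results

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
X = np.c_[np.cos(theta), np.sin(theta)]  # uniform samples on the unit circle
L = graph_laplacian(X)  # L f approximates a Laplace-Beltrami-type operator
```

Under suitable conditions on the metric, applying this operator to a smooth function converges pointwise to a Laplace-Beltrami-type operator as the bandwidth shrinks and the sample size grows.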
Related papers
- Sample-Efficient Geometry Reconstruction from Euclidean Distances using Non-Convex Optimization [7.114174944371803]
The problem of finding a suitable point embedding given only Euclidean distance information for point pairs arises both as a core task and as a sub-problem in a variety of machine learning applications.
In this paper, we aim to solve this problem given a minimal number of samples.
arXiv Detail & Related papers (2024-10-22T13:02:12Z)
- Computing distances and means on manifolds with a metric-constrained Eikonal approach [4.266376725904727]
We introduce the metric-constrained Eikonal solver to obtain continuous, differentiable representations of distance functions.
The differentiable nature of these representations allows for the direct computation of globally length-minimising paths on the manifold.
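As a conceptual sketch only (assumed, not the authors' implementation), one can fit a network u(x, y) ≈ d(x, y) by penalizing the Eikonal residual under a chosen metric tensor; the constant placeholder metric, architecture, and sampling below are all assumptions.

```python
# Conceptual sketch: learn a distance field whose gradient has unit norm
# under the metric, i.e. grad_y u^T G^{-1} grad_y u = 1, with u(x, x) = 0.
import torch
import torch.nn as nn

metric_inv = torch.eye(2)  # placeholder inverse metric tensor G^{-1}

net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 2)                       # source points
    y = torch.rand(256, 2).requires_grad_(True)  # query points
    u = net(torch.cat([x, y], dim=-1)).squeeze(-1)
    g, = torch.autograd.grad(u.sum(), y, create_graph=True)
    # Eikonal residual: g^T G^{-1} g should equal 1 along the distance field
    sq_norm = torch.einsum('bi,ij,bj->b', g, metric_inv, g)
    boundary = net(torch.cat([x, x], dim=-1)).pow(2).mean()  # u(x, x) = 0
    loss = ((sq_norm - 1) ** 2).mean() + boundary
    opt.zero_grad(); loss.backward(); opt.step()
```

Because u is a network, length-minimising paths can then be traced by following its gradient, which is the differentiability the summary refers to.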
arXiv Detail & Related papers (2024-04-12T18:26:32Z)
- Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation [49.67011673289242]
This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a smooth manifold.
It induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure.
arXiv Detail & Related papers (2023-11-08T15:17:13Z)
- Tight and fast generalization error bound of graph embedding in metric space [54.279425319381374]
We show that graph embedding in non-Euclidean metric space can outperform that in Euclidean space with much smaller training data than the existing bound has suggested.
Our new upper bound is significantly tighter and faster than the existing one, which can be exponential in $R$ and $O(\frac{1}{S})$ at the fastest.
arXiv Detail & Related papers (2023-05-13T17:29:18Z)
- Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
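A minimal 1-D illustration of the linearized OT idea, assuming NumPy (the algorithm itself handles more general measures): each empirical measure is represented through its transport map from a fixed reference, which in 1-D reduces to quantile functions, and a PCA of these maps recovers the latent parameter. The data and sizes are illustrative.

```python
# A minimal 1-D sketch (an assumption; not the paper's general setting):
# linearized optimal transport via quantile functions, then PCA.
import numpy as np

rng = np.random.default_rng(3)
shifts = rng.uniform(-2, 2, size=50)                      # latent parameter
samples = [rng.normal(m, 1.0, size=300) for m in shifts]  # one measure each

q = np.linspace(0.01, 0.99, 64)                  # quantile grid
ref = np.quantile(rng.normal(0.0, 1.0, 300), q)  # reference measure
# In 1-D the optimal map from the reference to measure i is determined by the
# quantile functions, so each measure embeds as a vector of quantile values.
V = np.stack([np.quantile(s, q) - ref for s in samples])

V -= V.mean(axis=0)                              # PCA on the transport maps
emb = V @ np.linalg.svd(V, full_matrices=False)[2][:1].T
print(np.corrcoef(emb.ravel(), shifts)[0, 1])    # close to +1 or -1
```

Note the cost: one transport map per sample measure, rather than one optimal transport problem per pair, which is where the computational savings over pairwise-distance methods come from.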
arXiv Detail & Related papers (2023-02-14T22:12:16Z)
- Identifying latent distances with Finslerian geometry [6.0188611984807245]
In generative models, the data space and its geodesics are at best impractical, and at worst impossible, to manipulate.
In this work, we propose another metric whose geodesics explicitly minimise the expected length of the pullback metric.
In high dimensions, we prove that both metrics converge to each other at a rate of $O\left(\frac{1}{D}\right)$.
arXiv Detail & Related papers (2022-12-20T05:57:27Z)
- Neural Bregman Divergences for Distance Learning [60.375385370556145]
We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks.
We show that our method more faithfully learns divergences over a set of both new and previously studied tasks.
Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification.
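A minimal sketch of the construction, assuming PyTorch: an input convex neural network (ICNN) parameterizes a convex potential phi, and the learned Bregman divergence follows from phi and its gradient. Layer sizes and the softplus activation are illustrative choices, not necessarily the paper's architecture.

```python
# A minimal ICNN-based Bregman divergence sketch (architecture is an assumption).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """phi(x) is convex in x when the z->z weights are nonnegative and the
    activation is convex and nondecreasing (softplus here)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.Wx0 = nn.Linear(dim, hidden)
        self.Wz1 = nn.Linear(hidden, hidden, bias=False)  # kept nonnegative
        self.Wx1 = nn.Linear(dim, hidden)
        self.Wz2 = nn.Linear(hidden, 1, bias=False)       # kept nonnegative
        self.Wx2 = nn.Linear(dim, 1)

    def clamp_weights(self):
        # call after each optimizer step to keep phi convex
        with torch.no_grad():
            self.Wz1.weight.clamp_(min=0.0)
            self.Wz2.weight.clamp_(min=0.0)

    def forward(self, x):
        z = F.softplus(self.Wx0(x))
        z = F.softplus(self.Wz1(z) + self.Wx1(x))
        return (self.Wz2(z) + self.Wx2(x)).squeeze(-1)

def bregman(phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y> >= 0."""
    y = y.detach().requires_grad_(True)
    phi_y = phi(y)
    grad_y, = torch.autograd.grad(phi_y.sum(), y, create_graph=True)
    return phi(x) - phi_y - ((x - y) * grad_y).sum(-1)

phi = ICNN(dim=8)
phi.clamp_weights()
x, y = torch.randn(32, 8), torch.randn(32, 8)
d = bregman(phi, x, y)  # nonnegative by convexity of phi; asymmetric in general
```

The divergence is differentiable in both arguments and in the network parameters, so it can be trained end-to-end against supervision on distances or rankings.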
arXiv Detail & Related papers (2022-06-09T20:53:15Z)
- Manifold Hypothesis in Data Analysis: Double Geometrically-Probabilistic Approach to Manifold Dimension Estimation [92.81218653234669]
We present a new approach to manifold hypothesis checking and underlying manifold dimension estimation.
Our geometrical method is a modification, for sparse data, of the well-known box-counting algorithm for Minkowski dimension calculation.
Experiments on real datasets show that the suggested approach, based on a combination of the two methods, is powerful and effective.
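A minimal box-counting sketch, assuming NumPy (the paper's sparse-data modification is not reproduced here): estimate the Minkowski dimension from the slope of log N(eps) against log(1/eps). The scales and helix test data are illustrative.

```python
# A minimal box-counting sketch (the paper's sparse-data variant differs).
import numpy as np

rng = np.random.default_rng(2)
t = rng.uniform(0, 4 * np.pi, 2000)
X = np.c_[np.cos(t), np.sin(t), t]       # a 1-D helix embedded in R^3

scales = 2.0 ** -np.arange(1, 6)
counts = []
for eps in scales:
    boxes = np.unique(np.floor(X / eps), axis=0)  # occupied boxes at scale eps
    counts.append(len(boxes))
slope, _ = np.polyfit(np.log(1 / scales), np.log(counts), 1)
print(f"estimated Minkowski dimension: {slope:.2f}")  # ~1 for a curve
```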
arXiv Detail & Related papers (2021-07-08T15:35:54Z)
- A Graph-based approach to derive the geodesic distance on Statistical manifolds: Application to Multimedia Information Retrieval [5.1388648724853825]
We leverage the properties of non-Euclidean geometry to define the geodesic distance.
We propose an approximation of the geodesic distance through a graph-based method.
Our main aim is to compare the graph-based approximation to state-of-the-art approximations.
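A minimal sketch of the graph-based idea, assuming scikit-learn/SciPy: shortest paths on a weighted k-NN graph approximate geodesic distances. Euclidean edge weights and random data stand in for the statistical-manifold metric between distributions that the paper actually uses.

```python
# A minimal sketch: k-NN graph shortest paths as geodesic approximations.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                             # placeholder data
G = kneighbors_graph(X, n_neighbors=10, mode="distance")  # weighted k-NN graph
D_geo = shortest_path(G, directed=False)                  # all-pairs geodesics
```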
arXiv Detail & Related papers (2021-06-26T16:39:54Z)
- Intrinsic persistent homology via density-based metric learning [1.0499611180329804]
We prove that the metric space defined by the sample, endowed with a computable metric known as the sample Fermat distance, converges almost surely.
The limiting object is the manifold itself endowed with the population Fermat distance, an intrinsic metric that accounts for both the geometry of the manifold and the density that produces the sample.
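A minimal sketch of the sample Fermat distance, assuming scikit-learn/SciPy: Euclidean edge lengths are raised to a power p > 1 so that shortest paths prefer densely sampled regions. The k-NN graph here is a computational shortcut for the complete graph in the definition, and p and k are illustrative choices.

```python
# A minimal sample-Fermat-distance sketch (k-NN graph as an approximation).
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(1)
X = rng.uniform(size=(400, 2))               # sample from an unknown density
p = 2.5                                      # Fermat exponent (assumption)
G = kneighbors_graph(X, n_neighbors=15, mode="distance")
G.data = G.data ** p                         # deform edge lengths to |x - y|^p
D_fermat = shortest_path(G, directed=False)  # sample Fermat distances
```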
arXiv Detail & Related papers (2020-12-11T18:54:36Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method for a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)