Rehabilitating Isomap: Euclidean Representation of Geodesic Structure
- URL: http://arxiv.org/abs/2006.10858v3
- Date: Thu, 21 Oct 2021 19:09:51 GMT
- Title: Rehabilitating Isomap: Euclidean Representation of Geodesic Structure
- Authors: Michael W. Trosset and Gokcen Buyukbas
- Abstract summary: We argue that Isomap is better understood as constructing Euclidean representations of geodesic structure.
We revisit the rationale for Isomap, clarifying what Isomap does and what it does not.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manifold learning techniques for nonlinear dimension reduction assume that
high-dimensional feature vectors lie on a low-dimensional manifold, then
attempt to exploit manifold structure to obtain useful low-dimensional
Euclidean representations of the data. Isomap, a seminal manifold learning
technique, is an elegant synthesis of two simple ideas: the approximation of
Riemannian distances with shortest path distances on a graph that localizes
manifold structure, and the approximation of shortest path distances with
Euclidean distances by multidimensional scaling. We revisit the rationale for
Isomap, clarifying what Isomap does and what it does not. In particular, we
explore the widespread perception that Isomap should only be used when the
manifold is parametrized by a convex region of Euclidean space. We argue that
this perception is based on an extremely narrow interpretation of manifold
learning as parametrization recovery, and we submit that Isomap is better
understood as constructing Euclidean representations of geodesic structure. We
reconsider a well-known example that was previously interpreted as evidence of
Isomap's limitations, and we re-examine the original analysis of Isomap's
convergence properties, concluding that convexity is not required for shortest
path distances to converge to Riemannian distances.
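To make the two-step synthesis concrete, here is a minimal Isomap sketch in Python (our own illustration for this listing, not the authors' code): a k-nearest-neighbor graph localizes manifold structure, graph shortest-path distances stand in for Riemannian distances, and classical multidimensional scaling converts the distance matrix into Euclidean coordinates.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=10, n_components=2):
    # Step 1: k-NN graph that localizes manifold structure.
    graph = kneighbors_graph(X, n_neighbors, mode="distance")
    # Step 2: shortest-path distances approximate Riemannian distances.
    D = shortest_path(graph, method="D", directed=False)
    # Step 3: classical MDS maps the distance matrix to coordinates.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:n_components]   # keep the largest eigenvalues
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```

Note that nothing in this construction refers to a parameter domain, convex or otherwise; the final step only asks that the shortest-path distance matrix be approximately Euclidean.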
Related papers
- Sample-Efficient Geometry Reconstruction from Euclidean Distances using Non-Convex Optimization [7.114174944371803]
The problem of finding a suitable point embedding from Euclidean distance information on point pairs arises both as a core task and as a sub-problem in machine learning.
In this paper, we aim to solve this problem given a minimal number of samples.
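As a hedged illustration of the generic task (not the paper's algorithm, which comes with sample-efficiency guarantees), a configuration can be recovered from a sparse sample of pairwise distances by gradient descent on the non-convex stress objective; all names below are illustrative.

```python
import numpy as np

def embed_from_sampled_distances(pairs, d, n, dim=2, steps=2000, lr=0.01, seed=0):
    """Minimize sum_k (||x_i - x_j|| - d_k)^2 over the point positions."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, dim))                 # random initialization
    i, j = pairs[:, 0], pairs[:, 1]
    for _ in range(steps):
        diff = X[i] - X[j]
        cur = np.linalg.norm(diff, axis=1) + 1e-12
        g = (2.0 * (cur - d) / cur)[:, None] * diff   # per-pair gradient
        grad = np.zeros_like(X)
        np.add.at(grad, i, g)                     # accumulate onto x_i ...
        np.add.at(grad, j, -g)                    # ... and, negated, onto x_j
        X -= lr * grad
    return X  # recovered only up to rotation, reflection, and translation
```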
arXiv Detail & Related papers (2024-10-22T13:02:12Z) - A Heat Diffusion Perspective on Geodesic Preserving Dimensionality Reduction [66.21060114843202]
We propose a more general heat kernel based manifold embedding method that we call heat geodesic embeddings.
Results show that our method outperforms existing state of the art in preserving ground truth manifold distances.
We also showcase our method on single cell RNA-sequencing datasets with both continuum and cluster structure.
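A hedged sketch of the heat-kernel idea, as a much-simplified stand-in for the paper's method: by Varadhan's formula, -4t log H_t(x, y) approaches the squared geodesic distance as t → 0, so a graph heat kernel yields distance estimates that can then be embedded, e.g. with the classical MDS step above.

```python
import numpy as np
from scipy.linalg import expm
from sklearn.neighbors import kneighbors_graph

def heat_geodesic_distances(X, n_neighbors=10, t=1.0):
    W = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                  # symmetrize the k-NN adjacency
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    H = expm(-t * L)                        # heat kernel exp(-tL) on the graph
    # Varadhan-type estimate: d(x, y)^2 ~ -4t log H_t(x, y) for small t.
    return np.sqrt(np.maximum(-4.0 * t * np.log(np.maximum(H, 1e-12)), 0.0))
```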
arXiv Detail & Related papers (2023-05-30T13:58:50Z) - Short and Straight: Geodesics on Differentiable Manifolds [6.85316573653194]
In this work, we first analyse existing methods for computing length-minimising geodesics.
Second, we propose a model-based parameterisation for distance fields and geodesic flows on continuous manifolds.
Third, we develop a curvature-based training mechanism, sampling and scaling points in regions of the manifold exhibiting larger values of the Ricci scalar.
arXiv Detail & Related papers (2023-05-24T15:09:41Z) - Shape And Structure Preserving Differential Privacy [70.08490462870144]
We show how the gradient of the squared distance function offers better control over sensitivity than the Laplace mechanism.
arXiv Detail & Related papers (2022-09-21T18:14:38Z) - GraphWalks: Efficient Shape Agnostic Geodesic Shortest Path Estimation [93.60478281489243]
We propose a learnable network to approximate geodesic paths on 3D surfaces.
The proposed method provides efficient approximations of shortest paths and geodesic distance estimates.
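For context, a hedged sketch of the classical, per-query baseline that such a network learns to approximate: Dijkstra shortest paths on the edge graph of a triangle mesh (variable names are illustrative).

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def mesh_geodesic_estimates(vertices, edges, source):
    # Edge weights are the Euclidean lengths of the mesh edges.
    w = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    n = len(vertices)
    g = csr_matrix((w, (edges[:, 0], edges[:, 1])), shape=(n, n))
    dist, pred = dijkstra(g, directed=False, indices=source,
                          return_predecessors=True)
    return dist, pred  # edge-graph distances upper-bound surface geodesics
```

A learnable approximator trades this per-query graph search for a single forward pass.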
arXiv Detail & Related papers (2022-05-30T16:22:53Z) - A singular Riemannian geometry approach to Deep Neural Networks I. Theoretical foundations [77.86290991564829]
Deep Neural Networks are widely used for solving complex problems in several scientific areas, such as speech recognition, machine translation, image analysis.
We study a particular sequence of maps between manifolds, with the last manifold of the sequence equipped with a Riemannian metric.
We investigate the theoretical properties of the maps in such a sequence, eventually focusing on maps between manifolds that implement neural networks of practical interest.
arXiv Detail & Related papers (2021-12-17T11:43:30Z) - Non-Parametric Manifold Learning [0.0]
We introduce an estimator for distances in a compact Riemannian manifold based on graph Laplacian estimates of the Laplace-Beltrami operator.
A consequence is a proof of consistency for (untruncated) manifold distances.
The estimator resembles, and in fact its convergence properties are derived from, a special case of the Kantorovich dual reformulation of Wasserstein distance known as Connes' Distance Formula.
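A hedged sketch of the standard ingredient behind such estimators: a kernel-based graph Laplacian that approximates the Laplace-Beltrami operator under suitable sampling and bandwidth assumptions (the paper's distance estimator is built on top of such an estimate and is not shown here).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def laplacian_estimate(X, eps=0.1):
    # Gaussian kernel weights between sample points on the manifold.
    K = np.exp(-squareform(pdist(X)) ** 2 / eps)
    P = K / K.sum(axis=1, keepdims=True)      # random-walk normalization
    # (I - P) / eps approximates the Laplace-Beltrami operator (up to
    # constants) as the sample grows and the bandwidth eps shrinks.
    return (np.eye(len(X)) - P) / eps
```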
arXiv Detail & Related papers (2021-07-16T19:32:34Z) - Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z) - Matrix factorisation and the interpretation of geodesic distance [6.445605125467574]
Given a graph or similarity matrix, we consider the problem of recovering a notion of true distance between the nodes.
We show that this can be accomplished in two steps: matrix factorisation, followed by nonlinear dimension reduction.
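A hedged sketch of that two-step recipe (parameter choices illustrative): a truncated eigendecomposition of the similarity matrix as the factorisation step, followed by an off-the-shelf nonlinear dimension reduction such as Isomap.

```python
import numpy as np
from sklearn.manifold import Isomap

def two_step_recovery(A, rank=8, n_neighbors=10, dim=2):
    # Step 1: rank-r factorisation A ~ V diag(w) V^T (A assumed symmetric).
    w, V = np.linalg.eigh(A)
    top = np.argsort(np.abs(w))[::-1][:rank]
    Y = V[:, top] * np.sqrt(np.abs(w[top]))   # spectral embedding of the nodes
    # Step 2: nonlinear dimension reduction on the embedded points.
    return Isomap(n_neighbors=n_neighbors, n_components=dim).fit_transform(Y)
```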
arXiv Detail & Related papers (2021-06-02T16:11:33Z) - Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built around a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z) - Ultrahyperbolic Representation Learning [13.828165530602224]
In machine learning, data is usually represented in a (flat) Euclidean space, where distances between points are measured along straight lines.
We propose a representation living on a pseudo-Riemannian manifold of constant nonzero curvature.
We provide the necessary learning tools in this geometry and extend gradient-based optimization techniques.
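A hedged sketch of the flat ingredient behind such representations: the pseudo-Euclidean scalar product of signature (q, p), which replaces the ordinary dot product, is indefinite, so squared "distances" can be negative (the sign convention below is one common choice).

```python
import numpy as np

def pseudo_dot(x, y, q):
    # <x, y>_q = -(x_1*y_1 + ... + x_q*y_q) + (x_{q+1}*y_{q+1} + ...)
    return -np.dot(x[:q], y[:q]) + np.dot(x[q:], y[q:])

def squared_interval(x, y, q):
    d = x - y
    return pseudo_dot(d, d, q)   # may be negative, zero, or positive
```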
arXiv Detail & Related papers (2020-07-01T03:49:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.