LDLE: Low Distortion Local Eigenmaps
- URL: http://arxiv.org/abs/2101.11055v1
- Date: Tue, 26 Jan 2021 19:55:05 GMT
- Title: LDLE: Low Distortion Local Eigenmaps
- Authors: Dhruv Kohli, Alexander Cloninger, Gal Mishne
- Abstract summary: We present Low Distortion Local Eigenmaps (LDLE), a manifold learning technique which constructs a set of low distortion local views of a dataset in lower dimension and registers them to obtain a global embedding.
The local views are constructed using the global eigenvectors of the graph Laplacian and are registered using Procrustes analysis.
- Score: 77.02534963571597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Low Distortion Local Eigenmaps (LDLE), a manifold learning
technique which constructs a set of low distortion local views of a dataset in
lower dimension and registers them to obtain a global embedding. The local
views are constructed using the global eigenvectors of the graph Laplacian and
are registered using Procrustes analysis. The choice of these eigenvectors may
vary across the regions. In contrast to existing techniques, LDLE is more
geometric and can embed manifolds without boundary as well as non-orientable
manifolds into their intrinsic dimension.
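As a rough illustration of the two ingredients named in the abstract, global eigenvectors of the graph Laplacian for the local views and Procrustes analysis for their registration, the following Python sketch shows how these pieces might be wired together with scipy and scikit-learn. The function names, the kNN graph construction, and the fixed eigenvector selection are simplifications of our own; the paper's actual algorithm selects a different, low-distortion subset of eigenvectors for each region and stitches all registered views into one global embedding.

```python
import numpy as np
from scipy.linalg import eigh, orthogonal_procrustes
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenvectors(X, n_neighbors=10, n_evecs=25):
    """Eigenvectors of a kNN-graph Laplacian, ordered by increasing eigenvalue."""
    W = kneighbors_graph(X, n_neighbors, mode="connectivity", include_self=False)
    W = W.maximum(W.T)                      # symmetrize the adjacency matrix
    L = laplacian(W, normed=False).toarray()
    _, vecs = eigh(L)
    return vecs[:, 1:n_evecs + 1]           # drop the (near-)constant eigenvector

def local_view(evecs, neighborhood, coord_idx):
    """A local view: chosen eigenvector coordinates restricted to a neighborhood."""
    return evecs[np.asarray(neighborhood)][:, coord_idx]

def procrustes_register(view_a, view_b, shared_a, shared_b):
    """Rigidly align view_b onto view_a using the indices of their shared points."""
    A, B = view_a[shared_a], view_b[shared_b]
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    R, _ = orthogonal_procrustes(B - mu_b, A - mu_a)
    return (view_b - mu_b) @ R + mu_a
```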
Related papers
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- LDReg: Local Dimensionality Regularized Self-Supervised Learning [31.0201280709395]
Dimensional collapse, also known as the "underfilling" phenomenon, is one of the major causes of degraded performance on downstream tasks.
Previous work has investigated the dimensional collapse problem of SSL at a global level.
We propose a method called local dimensionality regularization (LDReg).
arXiv Detail & Related papers (2024-01-19T03:50:19Z)
- Scalable manifold learning by uniform landmark sampling and constrained locally linear embedding [0.6144680854063939]
We propose a scalable manifold learning (scML) method that can manipulate large-scale and high-dimensional data in an efficient manner.
We empirically validated the effectiveness of scML on synthetic datasets and real-world benchmarks of different types.
scML scales well with increasing data sizes and embedding dimensions, and exhibits promising performance in preserving the global structure.
arXiv Detail & Related papers (2024-01-02T08:43:06Z)
- Preserving local densities in low-dimensional embeddings [37.278617643507815]
State-of-the-art methods, such as tSNE and UMAP, excel in unveiling local structures hidden in high-dimensional data.
We show, however, that these methods fail to reconstruct local properties, such as relative differences in densities.
We suggest dtSNE, which approximately conserves local densities.
arXiv Detail & Related papers (2023-01-31T16:11:54Z)
- Fiberwise dimensionality reduction of topologically complex data with vector bundles [0.0]
We propose to model topologically complex datasets using vector bundles.
The base space accounts for the large scale topology, while the fibers account for the local geometry.
This allows one to reduce the dimensionality of the fibers, while preserving the large scale topology.
arXiv Detail & Related papers (2022-06-13T22:53:46Z)
- Contrastive Neighborhood Alignment [81.65103777329874]
We present Contrastive Neighborhood Alignment (CNA), a manifold learning approach to maintain the topology of learned features.
The target model aims to mimic the local structure of the source representation space using a contrastive loss.
CNA is illustrated in three scenarios: manifold learning, where the model maintains the local topology of the original data in a dimension-reduced space; model distillation, where a small student model is trained to mimic a larger teacher; and legacy model update, where an older model is replaced by a more powerful one.
arXiv Detail & Related papers (2022-01-06T04:58:31Z)
- Low-Rank Subspaces in GANs [101.48350547067628]
This work introduces low-rank subspaces that enable more precise control of GAN generation.
LowRankGAN is able to find the low-dimensional representation of the attribute manifold.
Experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
arXiv Detail & Related papers (2021-06-08T16:16:32Z)
- A Local Similarity-Preserving Framework for Nonlinear Dimensionality Reduction with Neural Networks [56.068488417457935]
We propose a novel local nonlinear approach named Vec2vec for general purpose dimensionality reduction.
To train the neural network, we build the neighborhood similarity graph of the data matrix and define the context of data points (a minimal sketch of such a neighborhood graph appears after this list).
Experiments on data classification and clustering on eight real datasets show that Vec2vec outperforms several classical dimensionality reduction methods under statistical hypothesis tests.
arXiv Detail & Related papers (2021-03-10T23:10:47Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method for a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Markov-Lipschitz Deep Learning [37.7499958388076]
A prior constraint, called locally isometric smoothness (LIS), is imposed across layers and encoded into a Markov random field (MRF)-Gibbs distribution.
This leads to the best possible solutions for local geometry preservation and robustness.
Experiments, comparisons, and ablation studies demonstrate significant advantages of MLDL for manifold learning and manifold data generation.
arXiv Detail & Related papers (2020-06-15T09:46:42Z)
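Several of the entries above (for example, the Vec2vec and scML summaries) start from a k-nearest-neighbor similarity graph built over the data. The sketch below is a generic, illustrative construction using scikit-learn; the function name, the Gaussian kernel, and the bandwidth heuristic are our own choices and are not taken from any of the papers.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def neighborhood_similarity_graph(X, n_neighbors=10):
    """Build a symmetric kNN graph with Gaussian similarities on the edges.

    Generic construction for illustration only; each paper above defines its
    own graph, edge weights, and notion of context.
    """
    # Sparse matrix of distances to each point's k nearest neighbors
    D = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    # Convert distances to similarities with a simple Gaussian kernel,
    # using the mean neighbor distance as a crude bandwidth heuristic
    sigma = D.data.mean() if D.data.size else 1.0
    D.data = np.exp(-(D.data ** 2) / (2.0 * sigma ** 2))
    # Symmetrize so that similarity(i, j) == similarity(j, i)
    return D.maximum(D.T)

if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(200, 5))
    W = neighborhood_similarity_graph(X)
    print(W.shape, W.nnz)
```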